Fields: id, title, abstract, authors, published_date, link, markdown
2310.12699
Estimation of high-dimensional unitary transformations saturating the Quantum Cramér-Rao bound
We propose an estimation procedure for $d$-dimensional unitary transformations. For $d>2$, the unitary transformations close to the identity are estimated saturating the quantum Cram\'er-Rao bound. For $d=2$, the estimation of all unitary transformations is also optimal with some prior information. We show through numerical simulations that, even in the absence of prior information, two-dimensional unitary transformations can be estimated with greater precision than by means of standard quantum process tomography.
J. Escandón-Monardes, D. Uzcátegui, M. Rivera-Tapia, S. P. Walborn, A. Delgado
2023-10-19T12:52:02Z
http://arxiv.org/abs/2310.12699v2
# Optimal estimation of high-dimensional unitary transformations ###### Abstract We propose an estimation procedure for \(d\)-dimensional unitary transformations. For \(d>2\), the unitary transformations close to the identity are estimated saturating the quantum Cramer-Rao bound. For \(d=2\), the estimation of all unitary transformations is also optimal with some prior information. We show through numerical simulations that, even in the absence of prior information, two-dimensional unitary transformations can be estimated with greater precision than by means of standard quantum process tomography. pacs: 03.65.Ta ## I Introduction The continuous advancement in the ability to control quantum systems and its application to the development of quantum technologies have driven the search for high-precision measurements and estimation methods. Quantum metrology promises to overcome the classical limits of precision by exploiting quantum mechanical effects such as superposition and entanglement. The enhancements provided by quantum metrology depend on the state of the probe, the quantum measurement, and the landscape of the parameters to be estimated. These are usually related through a multivariate variational problem that generally lacks analytical solutions. Despite this, quantum metrological improvements have already been demonstrated on various experimental platforms [1; 2; 3; 4; 5; 6; 7] in the single-parameter case. Due to its intrinsic difficulty [8], the multiparameter case has remained less explored. In particular, the optimal measurements for different parameters are often incompatible [9], and the optimal probe states for different parameters typically differ. Furthermore, in multiparameter estimation, the quantum Cramer-Rao bound [10; 11], which sets a fundamental limit for the covariance matrix, is generally not achievable even asymptotically [12]. An instance of multiparameter estimation is the estimation of \(d\)-dimensional unitary transformations. Several methods to accomplish this task have been studied [13; 14], particularly standard quantum process tomography (SQPT) [15], which has been successfully implemented for reconstructing quantum gates on ion traps [16] and superconducting circuits [17], among many other platforms [18; 19]. Here, we propose a novel method for estimating \(d\)-dimensional unitary operations. Our approach requires a single target qudit, two control qudits, controlled gates, and Fourier transformations acting on the control qudits. The unknown unitary transformation acts on the target qudit. These resources allow mapping the unitary transformation onto a state of the two control qudits, which, after measurements, leads to an estimate of the coefficients that define the unitary transformation in the Weyl-Heisenberg basis. This is achieved without the need to measure the target qudit. The main characteristic of this estimation procedure is that it is optimal in the sense that it saturates the quantum Cramer-Rao bound. This result holds for any finite dimension \(d>2\) and for unitary transformations that are close to the identity, allowing Hamiltonian operators to be estimated optimally. In the case \(d=2\), all unitary transformations can be optimally estimated provided that the octant to which the Bloch vector points is known in advance. In this case, our estimation procedure agrees with previous results [20; 21; 22]. 
We also estimate 2-dimensional unitary transformations without prior information, at the cost of losing estimation accuracy, and compare with SQPT. We simulate both procedures on Qiskit, IBM's software development platform for quantum processors [23], and study the average gate fidelity [24] as a function of the ensemble size, or number of shots, for a set of randomly generated unitary transformations. In the ideal case, that is, in the absence of error sources, our estimation procedure provides a better mean average gate fidelity than SQPT. Moreover, our estimation procedure shows a narrow standard deviation which means that all unitary transformations are estimated with similar average gate fidelity. In presence of noise affecting state generation, quantum gates, and measurements, our estimation procedure and SQPT exhibit similar mean gate infidelity, although the former with a larger standard deviation. Median gate fidelity of our procedure is larger than that of SQPT, which lays close to the inferior border of the interquartile range of our estimation procedure. Thereby, our estimation procedure provides a better estimation than SQPT in most cases. ## II Results ### Estimation procedure An arbitrary \(d\)-dimensional unitary transformation \(U\) can be expanded in the Weyl-Heisenberg basis as \[U=\sum_{m,n=0}^{d-1}u_{m,n}X^{m}Z^{n}, \tag{1}\] where \(X\) and \(Z\) are the shift and phase operators, respectively. These act onto the canonical basis \(\{|k\rangle\}\) with \(k=0,\ldots,d-1\) as \(X|k\rangle=|k\oplus 1\rangle\) and \(Z|k\rangle=\omega^{k}|k\rangle\) with \(\omega=\exp(2i\pi/d)\). The set \(\{u_{m,n}=r_{m,n}e^{i\phi_{m,n}}\}\) of \(d^{2}\) complex coefficients satisfies the unitarity constraint and characterizes \(U\). With this, a general unitary can be written \[U=r_{0,0}I+\sum_{\begin{subarray}{c}m,n=0\\ (m,n)\neq(0,0)\end{subarray}}^{d-1}r_{m,n}e^{i\phi_{m,n}}X^{m}Z^{n}. \tag{2}\] where we set \(\phi_{0,0}=0\) without loss of generality. To estimate a \(d\)-dimensional unitary operation \(U\), we propose a procedure which uses the quantum circuit shown in Fig. 1. This circuit is applied to the following initial quantum state of three qudits \[|\Phi^{0}\rangle_{012}=|\psi\rangle_{0}\otimes|00\rangle_{12}, \tag{3}\] where the target qudit is in the arbitrary state \(|\psi\rangle_{0}\), and the qudits labeled 1 and 2 are the control states. Each control qudit is subject to the action of a Fourier transform \(F|k\rangle=(1/\sqrt{d})\sum\omega^{mk}|m\rangle\), followed by the sequence of controlled shift and phase operators \(X_{02}^{(0)}\) and \(Z_{01}^{\dagger(1)}\) defined by \[X_{tc}^{(i)} = \sum_{m=0}^{d-1}X_{t}^{m}\otimes|m\ominus i\rangle_{c}\langle m \ominus i|,\] \[Z_{tc}^{(i)} = \sum_{m=0}^{d-1}Z_{t}^{m}\otimes|m\ominus i\rangle_{c}\langle m \ominus i|. \tag{4}\] The action of the previous transformations leads to the probe state \[\left|\Phi^{3}\right\rangle_{012}=\frac{1}{d}\sum_{j_{1},j_{2}=0}^{d-1}Z_{0}^{ -j_{1}-1}X_{0}^{j_{2}}\left|\psi\right\rangle_{0}\otimes\left|j_{1}\right\rangle _{1}\otimes\left|j_{2}\right\rangle_{2}. \tag{5}\] The unitary transformation \(U\) to be estimated acts on the target qudit followed by the sequence \(Z_{01}^{(0)}\) and then \(X_{02}^{\dagger(1)}\) and inverse Fourier transforms acting on each control qudit. 
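Before following the circuit further, note that the expansion in Eqs. (1) and (2) is easy to reproduce numerically: the Weyl-Heisenberg operators are orthogonal under the Hilbert-Schmidt inner product, \(\mathrm{Tr}[(X^{m}Z^{n})^{\dagger}X^{m'}Z^{n'}]=d\,\delta_{m,m'}\delta_{n,n'}\), so each coefficient follows from a single trace. The following is a minimal NumPy sketch of this decomposition (illustrative only; the helper names are ours and are not part of the protocol):

```python
import numpy as np

def shift_phase(d):
    """Shift operator X and phase operator Z in dimension d (as used in Eq. (4))."""
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)       # X|k> = |k+1 mod d>
    Z = np.diag(w ** np.arange(d))          # Z|k> = w^k |k>
    return X, Z

def weyl_coefficients(U):
    """Coefficients u_{m,n} of U in the Weyl-Heisenberg basis, Eq. (1)."""
    d = U.shape[0]
    X, Z = shift_phase(d)
    u = np.empty((d, d), dtype=complex)
    for m in range(d):
        for n in range(d):
            B = np.linalg.matrix_power(X, m) @ np.linalg.matrix_power(Z, n)
            u[m, n] = np.trace(B.conj().T @ U) / d   # Hilbert-Schmidt projection
    return u

# quick self-check on a random unitary (d = 3)
d = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(A)                       # random unitary from a QR decomposition
u = weyl_coefficients(U)
X, Z = shift_phase(d)
U_rebuilt = sum(u[m, n] * np.linalg.matrix_power(X, m) @ np.linalg.matrix_power(Z, n)
                for m in range(d) for n in range(d))
assert np.allclose(U_rebuilt, U)             # Eq. (1) is reproduced
```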
Thereby, the initial state is transformed into the state \[|\Phi^{7}\rangle_{012}=\sum_{m,n=0}^{d-1}u_{m,n+1}(X^{m-1}Z^{n}|\psi\rangle_{ 0})\otimes|m\rangle_{1}\otimes|n\rangle_{2}, \tag{6}\] then \(X_{01}^{\dagger(-1)}\) and \(Z_{02}^{\dagger(0)}\) disentangles the target qudit from the control qudits, followed by \(X_{2}\) which correlates the indexes of the coefficients with the computational basis of both control qudits. The final state (see Appendix A for details) is \[|\Phi^{10}\rangle_{012}=|\psi\rangle_{0}\otimes|\varphi^{0}\rangle_{12}, \tag{7}\] where \[\left|\varphi^{0}\right\rangle_{12}=\sum_{m,n=0}^{d-1}u_{m,n}|m\rangle_{1} \otimes|n\rangle_{2}. \tag{8}\] The coefficients \(u_{m,n}\) entering in Eq. (2) are now in a one-to-one relation with the states in the canonical basis of the control qudits. Thus, the estimation of state \(\left|\varphi^{0}\right\rangle_{12}\) by any quantum tomographic scheme for pure states leads to the estimation of the unknown unitary transformation \(U\). Moreover, simpler measurement schemes can be used, provided that a pure state of two qudits is defined by a set of \(2d^{2}-2\) independent real parameters while a unitary transformation acting on a single qudit is characterized only by \(d^{2}-1\) real parameters. Some specific cases are studied below. ### Quantum estimation theory The classical Cramer-Rao bound states that the covariance matrix \(cov(\hat{\mathbf{t}})\) of an unbiased estimator \(\hat{\mathbf{t}}\) of a parameter vector \(\mathbf{t}\) is bounded below by the inverse of Fisher information matrix \(\mathcal{I}(\mathbf{t})\), that is, for \(n\) repetitions of the experiment, \[cov(\hat{\mathbf{t}})\geq\frac{1}{n}\mathcal{I}^{-1}(\mathbf{t}), \tag{9}\] which leads to limits for the accuracy of the estimate under various figures of merit (for recent reviews on the topic, see Refs. [8] and [25]). The entries of the Fisher information matrix are defined as \[\mathcal{I}_{ab}=\sum_{y}\frac{1}{p(y|\mathbf{t})}\left[\frac{\partial p(y| \mathbf{t})}{\partial t_{a}}\right]\left[\frac{\partial p(y|\mathbf{t})}{ \partial t_{b}}\right] \tag{10}\] where \(p(y|\mathbf{t})\) is the probability of observing the value \(y\) in an experiment for a given parameter vector \(\mathbf{t}\). In the case of quantum mechanics the probability distribution \(p(y|\mathbf{t})\) depends on the measurement performed, which leads to different Fisher information matrices. Hence, it is possible to maximize the Fisher information matrix in the space of quantum measurements. The solution to this optimization problem is the quantum Fisher information matrix \(\mathcal{F}\) such that \(\mathcal{F}\geq\mathcal{I}\), and thus we obtain the quantum Cramer-Rao bound given by \[cov(\hat{\mathbf{t}})\geq\frac{1}{n}\mathcal{I}^{-1}(\mathbf{t})\geq\frac{1}{n }\mathcal{F}^{-1}(\mathbf{t}). \tag{11}\] In the case of estimating a unitary transformation that acts onto a probe state \(|\phi\rangle\), the quantum Fisher information matrix can be calculated as [8] \[\mathcal{F}_{a,b}=2\langle\phi|\{H_{a},H_{b}\}|\phi\rangle-4\langle\phi|H_{a}| \phi\rangle\langle\phi|H_{b}|\phi\rangle, \tag{12}\] where we have \(H_{a}=i(\partial_{a}U^{\dagger})U\). ### Estimation of 2-dimensional unitary transformations Let us now consider the case of 2-dimensional unitary transformations. 
These can be written as [26] \[U=\exp(-i\alpha\hat{n}\cdot\hat{\sigma}), \tag{13}\] where \(\hat{\sigma}=(X,Y,Z)^{T}\) is the Pauli vector, \(\alpha\in[0,\pi/2]\), and \(\hat{n}\in\mathbb{R}^{3}\) is a real unitary vector. After carrying out the exponentiation, we obtain the representation \[U=u_{0,0}I+u_{1,0}X+u_{1,1}XZ+u_{0,1}Z, \tag{14}\] where we replaced \(Y=iXZ\), and the coefficients are given by \[u_{0,0} = \cos(\alpha), \tag{15}\] \[u_{1,0} = -i\sin(\alpha)\sin(\theta)\cos(\phi),\] (16) \[u_{1,1} = \sin(\alpha)\sin(\theta)\sin(\phi),\] (17) \[u_{0,1} = -i\sin(\alpha)\cos(\theta), \tag{18}\] with \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi[\) being the spherical coordinates for \(\hat{n}\), and \(I\) the \(2\times 2\) identity matrix. Notice that \(u_{0,0}\) is always non-negative, whereas the signs of \(u_{1,0},u_{1,1},u_{0,1}\) depend on the octant in which the vector \(\hat{n}\) points. Estimating an unknown two-dimensional unitary transformation \(U\) is thus equivalent to estimating the values of the angles \((\alpha,\theta,\phi)\). Projective measurements of control qudits lead to probabilities \[p_{0,0} = \cos^{2}(\alpha), \tag{19}\] \[p_{1,0} = \sin^{2}(\alpha)\sin^{2}(\theta)\cos^{2}(\phi),\] (20) \[p_{1,1} = \sin^{2}(\alpha)\sin^{2}(\theta)\sin^{2}(\phi),\] (21) \[p_{0,1} = \sin^{2}(\alpha)\cos^{2}(\theta). \tag{22}\] It follows that \[\cos^{2}(\alpha)=p_{0,0},\,\cos^{2}(\theta)=\frac{p_{0,1}}{1-p_{0,0}},\,\cos^{ 2}(\phi)=\frac{p_{1,0}}{p_{1,1}+p_{1,0}}. \tag{23}\] These relations allow for estimating the value of \(\alpha\), which is always in the interval \([0,\pi/2]\). However, parameters \(\theta\) and \(\phi\) remain ambiguous, since \(u_{0,1},u_{1,0},u_{1,1}\) are determined up to a sign. This ambiguity is removed when the octant pointed to by \(\hat{n}\) is known beforehand, in which case our estimation procedure characterizes the unknown unitary transformation. Furthermore, it can be shown by direct algebra that our estimation procedure fulfills the equality \(\mathcal{I}=\mathcal{F}\) where \[\mathcal{F}=4\begin{pmatrix}1&0&0\\ 0&\sin^{2}(\alpha)&0\\ 0&0&\sin^{2}(\alpha)\sin^{2}(\theta)\end{pmatrix}, \tag{24}\] and therefore our proposal saturates the quantum Cramer-Rao bound. Moreover, \(\mathcal{F}\) is diagonal, hence our circuit is optimal in the sense that the three parameters defining \(U\) can be estimated simultaneously with the highest possible precision. The quantum Fisher information matrix in Eq. (24) was also obtained in other works [20, 21, 22]. The lack of a priori information does not prevent the use of our estimation procedure. As we show through numerical simulations in section II.5, our procedure can be complemented with additional measurements and at the same time achieves better estimation accuracy than that obtained by standard process tomography. ### Estimation of higher-dimensional unitary transformations In the higher dimensional case, we consider unitary transformations that are close to the identity. To simplify the notation, we denote the coefficients \(u_{p_{x},p_{z}}\equiv u_{p}=r_{p}e^{i\phi_{p}}\), where we have defined a single index \(p=(p_{x},p_{z})\), entering in the expansion of the unitary transformation in the Weyl-Heisenberg basis. These coefficients are constrained by the conditions \[\sum_{m\in\mathbb{Z}_{d}^{2}}r_{m}^{2}=1 \tag{25}\] and \[\sum_{m\in\mathbb{Z}_{d}^{2}}r_{m}r_{p\oplus m}e^{i(\phi_{p\oplus m}-\phi_{m} )}\omega^{-m_{x}p_{z}}=0,\,\,\forall\,\,p\neq(0,0), \tag{26}\] which enforce unitarity (see Appendix B). 
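Both constraints can be checked numerically for any unitary once its Weyl-Heisenberg coefficients are known. A short sketch, assuming the `weyl_coefficients` helper and the random unitary `U` defined in the earlier sketch:

```python
import numpy as np

def check_unitarity_constraints(U, atol=1e-9):
    """Verify Eqs. (25) and (26) for the Weyl-Heisenberg coefficients of a unitary U."""
    d = U.shape[0]
    u = weyl_coefficients(U)                 # helper from the earlier sketch
    w = np.exp(2j * np.pi / d)
    ok_25 = np.isclose(np.sum(np.abs(u) ** 2), 1.0, atol=atol)            # Eq. (25)
    ok_26 = all(
        abs(sum(u[mx, mz].conjugate() * u[(px + mx) % d, (pz + mz) % d] * w ** (-mx * pz)
                for mx in range(d) for mz in range(d))) < atol            # Eq. (26)
        for px in range(d) for pz in range(d) if (px, pz) != (0, 0))
    return ok_25 and ok_26

# any exactly unitary matrix satisfies both conditions, e.g. the random U above
print(check_unitarity_constraints(U))        # True
```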
Figure 1: Quantum circuit implementation of our estimation procedure. F are d-dimensional Fourier transforms acting on control qudits 1 and 2, \(X_{tc}^{(i)}\) and \(Z_{tc}^{(i)}\) are controlled gates defined in Eq. (4), and U is the unitary transformation to be estimated. State tomography of the control system leads to complete estimation of \(U\).

For unitary transformations close to the identity we have that \(r_{m}/r_{0}\ll 1\) for \(m\neq(0,0)\). With this approximation, the only terms contributing to Eq. (26) are those where \(m=(0,0)\) and \(m=\ominus p\equiv(d-p_{x},d-p_{z})\). Thus, Eq. (26) becomes \[r_{p}e^{i\phi_{p}}\approx r_{\ominus p}e^{-i\phi_{\ominus p}+i\frac{2\pi}{d}p_{x}p_{z}+i\pi}, \tag{27}\] where, as before, we set \(\phi_{0,0}=0\) without loss of generality. From the previous equation we obtain \[r_{p}\approx r_{\ominus p} \tag{28}\] and \[\phi_{p}\approx-\phi_{\ominus p}+\frac{2\pi}{d}p_{x}p_{z}+(2n+1)\pi\;,\text{with }n\in\mathbb{Z}. \tag{29}\] These conditions show that the amplitudes and phases of the coefficients \(u_{p}\) are related in pairs. In the case \(p=\ominus p\), Eq. (29) ties the phase to a discrete set, that is, \[\phi_{p}\approx\frac{\pi}{d}p_{x}p_{z}+\frac{2n+1}{2}\pi\;,\text{with }n\in\mathbb{Z}. \tag{30}\] Notice that the last restriction on the phases only occurs when \(d\) is even, and only for \(p=(d/2,0)\), \(p=(0,d/2)\) and \(p=(d/2,d/2)\). Therefore, in the case \(d=2\), the three coefficients have a fixed phase up to a difference of \(\pi\). The constraints in Eqs. (28) and (29) allow us to recognize the \(d^{2}-1\) parameters that characterize \(U\) in the close-to-the-identity approximation. These are, for \(d=2\), three unpaired amplitudes \(r_{p}\). For \(d>2\) odd, all the coefficients are paired, implying that the relevant parameters are \((d^{2}-1)/2\) amplitudes \(r_{p}\) and the same number of phases \(\phi_{p}\). For \(d>2\) even, we have the three unpaired amplitudes \(r_{(d/2,0)}\), \(r_{(0,d/2)}\) and \(r_{(d/2,d/2)}\), and \((d^{2}-4)/2\) other amplitudes and an equal number of phases. To handle these three cases at once we introduce the following partition of the set \(\mathbb{Z}_{d}^{2}\) of indexes: \[\mathbb{Z}_{d}^{2}=S_{0}\cup S_{u}\cup S_{+}\cup S_{-}\;, \tag{31}\] where \(S_{0}=\{(0,0)\}\), \(S_{u}=\{(d/2,0),(0,d/2),(d/2,d/2)\}\), and \(S_{+}\) and \(S_{-}\) are any partition such that \(p\in S_{+}\) if and only if \(\ominus p\in S_{-}\). In this way, we can easily identify the set of parameters defining \(U\): \[\texttt{Par}_{U}=\{r_{f}\}_{f\in S_{u}}\cup\{r_{a},\phi_{a}\}_{a\in S_{+}}\;. \tag{32}\] Notice that every \(r_{a}\) and \(\phi_{a}\) with \(a\in S_{+}\) is respectively paired with \(r_{\ominus a}\) and \(\phi_{\ominus a}\), with \(\ominus a\in S_{-}\), via Eqs. (28) and (29). Here and in what follows we use indexes \(f,g\in S_{u}\) to label unpaired amplitudes and \(a,b\in S_{+}\) to label paired amplitudes and phases. Let us introduce the notation \(\left|f\right\rangle=\left|f_{x}\right\rangle_{1}\otimes\left|f_{z}\right\rangle_{2}\) for index \(f=(f_{x},f_{z})\), and similarly for \(a=(a_{x},a_{z})\). Then, using partition (31), the state \(\left|\varphi^{0}\right\rangle_{12}\) in Eq. (8) becomes \[\left|\varphi^{0}\right\rangle_{12} = r_{0}\left|0\right\rangle+\sum_{f\in S_{u}}r_{f}e^{i\phi_{f}}\left|f\right\rangle+\sum_{a\in S_{+}}r_{a}\left(e^{i\phi_{a}}\left|a\right\rangle+e^{i\phi_{\ominus a}}\left|\ominus a\right\rangle\right), \tag{33}\] where \(r_{0}=(1-\sum_{n\neq(0,0)}r_{n}^{2})^{1/2}\). 
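The pairing relations (28) and (29) can likewise be verified on a randomly generated close-to-the-identity unitary. A brief sketch (again reusing `weyl_coefficients` from the earlier sketch, with \(U=\exp(-i\epsilon H)\) for a random Hermitian \(H\); the approximation holds up to corrections of order \(\epsilon\)):

```python
import numpy as np
from scipy.linalg import expm

d, eps = 3, 1e-2
rng = np.random.default_rng(3)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2                    # random Hermitian generator
U_near = expm(-1j * eps * H)                # close-to-the-identity unitary
u = weyl_coefficients(U_near)               # helper from the earlier sketch

for p in [(x, z) for x in range(d) for z in range(d)][1:]:
    mp = ((-p[0]) % d, (-p[1]) % d)         # the index "minus p"
    ratio = abs(u[p]) / abs(u[mp])          # Eq. (28): should be ~1
    residue = (np.angle(u[p]) + np.angle(u[mp])
               - 2 * np.pi * p[0] * p[1] / d - np.pi) % (2 * np.pi)
    residue = min(residue, 2 * np.pi - residue)   # Eq. (29): ~0 (mod 2*pi), up to O(eps)
    print(p, round(ratio, 3), round(residue, 3))
```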
Now, consider the following two-qudit operation \[\tilde{H}\left|n\right\rangle=\begin{cases}\left|n\right\rangle&\text{for }n=0\text{ or }n\in S_{u},\\ \frac{1}{\sqrt{2}}\left(\left|n\right\rangle+\left|\ominus n\right\rangle\right)&\text{for }n\in S_{+},\\ \frac{1}{\sqrt{2}}\left(\left|\ominus n\right\rangle-\left|n\right\rangle\right)&\text{for }n\in S_{-}.\end{cases} \tag{34}\] This operation can be understood as a set of Hadamard gates, each acting in a subspace labeled by paired indexes, while acting as the identity on the other subspaces. Applying \(\tilde{H}\) to \(\left|\varphi^{0}\right\rangle_{12}\) we obtain the state \[\left|\varphi^{1}\right\rangle_{12} = r_{0}\left|0\right\rangle+\sum_{f\in S_{u}}r_{f}e^{i\phi_{f}}\left|f\right\rangle+\sum_{a\in S_{+}}\frac{r_{a}}{\sqrt{2}}\left(e^{i\phi_{a}}+e^{i\phi_{\ominus a}}\right)\left|a\right\rangle+\sum_{a\in S_{+}}\frac{r_{a}}{\sqrt{2}}\left(e^{i\phi_{a}}-e^{i\phi_{\ominus a}}\right)\left|\ominus a\right\rangle. \tag{35}\] Projective measurements on the computational basis of both qudits lead to the probabilities \[p_{0} = r_{0}^{2},\quad p_{f} = r_{f}^{2},\quad p_{a} = r_{a}^{2}(1+\cos(\Delta_{a})),\quad p_{\ominus a} = r_{a}^{2}(1-\cos(\Delta_{a})), \tag{36}\] where \(\Delta_{a}=\phi_{a}-\phi_{\ominus a}\) is given by the expression \[\Delta_{a}=2\phi_{a}-\frac{2\pi}{d}a_{x}a_{z}-(2n+1)\pi\;,\text{with }n\in\mathbb{Z}. \tag{37}\] These probabilities lead to the estimates for the amplitudes \[r_{f} = \sqrt{p_{f}},\quad r_{a} = \sqrt{\frac{p_{a}+p_{\ominus a}}{2}}, \tag{38}\] and for the phases \[\phi_{a} = \pm\frac{1}{2}\arccos\left(\frac{p_{a}-p_{\ominus a}}{p_{a}+p_{\ominus a}}\right)+\frac{\pi}{d}a_{x}a_{z}+\left(n+\frac{1}{2}\right)\pi\;,\text{with }n\in\mathbb{Z}. \tag{39}\] Thus, our proposal estimates the amplitudes and phases that characterize any \(d\)-dimensional close-to-the-identity unitary gate. In any case, the phases are estimated up to a set of four candidates, as implied by Eq. (39) and in agreement with the 2-dimensional case. The discrimination of the candidates requires prior information or additional experiments. The quantum Fisher information matrix characterizing our process is given for even dimension \(d\) by the expression \[\mathcal{F}_{even}=\begin{pmatrix}4\frac{r_{f}r_{g}}{r_{0}^{2}}+4\delta_{f,g}&8\frac{r_{f}r_{a}}{r_{0}^{2}}&0\\ 8\frac{r_{f}r_{a}}{r_{0}^{2}}&16\frac{r_{a}r_{b}}{r_{0}^{2}}+8\delta_{a,b}&0\\ 0&0&8r_{a}^{2}\delta_{a,b}\end{pmatrix}, \tag{40}\] where \(\delta_{x,y}\) is the Kronecker delta and the ordering in the block matrix \(\mathcal{F}_{even}\) is given by \((\{r_{f}\},\{r_{a}\},\{\phi_{a}\})\), with \(f\in S_{u}\) and \(a\in S_{+}\). In particular, for \(d=2\), the unitary transformation is characterized by three unpaired amplitudes, i.e., \(S_{+}\) and \(S_{-}\) are empty. Hence, the quantum Fisher information matrix reduces to the upper left block, \[\mathcal{F}_{2}=\left(4\frac{r_{f}r_{g}}{r_{0}^{2}}+4\delta_{f,g}\right). \tag{41}\] In the case of odd dimension \(d\), all amplitudes and phases are paired, hence \(S_{u}\) is empty. Thus, the quantum Fisher information matrix is given by \[\mathcal{F}_{odd}=\begin{pmatrix}16\frac{r_{a}r_{b}}{r_{0}^{2}}+8\delta_{a,b}&0\\ 0&8r_{a}^{2}\delta_{a,b}\end{pmatrix}. \tag{42}\] In this way, we have obtained the quantum Fisher information matrix \(\mathcal{F}_{d}\) for estimating close-to-the-identity unitary transformations in every dimension (see Appendix C for details). 
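The classical post-processing implied by Eqs. (36)-(39) is simple to implement. The sketch below is our own illustration, not the code used in the paper: the lexicographic rule used to split \(S_{+}\) from \(S_{-}\) is one valid choice, and the outcome probabilities are assumed to have been estimated already.

```python
import numpy as np

def index_partition(d):
    """Partition Z_d^2 into S_0, S_u, S_+, S_- as in Eq. (31)."""
    S0, Su, Sp, Sm = {(0, 0)}, set(), set(), set()
    for p in [(x, z) for x in range(d) for z in range(d)][1:]:
        minus_p = ((-p[0]) % d, (-p[1]) % d)
        if p == minus_p:
            Su.add(p)          # self-paired indexes: only present for even d
        elif p < minus_p:
            Sp.add(p)          # lexicographic rule: one valid choice of S_+
        else:
            Sm.add(p)
    return S0, Su, Sp, Sm

def estimate_from_probabilities(probs, d):
    """Amplitude and phase estimates of Eqs. (38)-(39); probs[(m, n)] are measured frequencies."""
    _, Su, Sp, _ = index_partition(d)
    r = {f: np.sqrt(probs[f]) for f in Su}                        # Eq. (38), unpaired
    phi = {}
    for a in Sp:
        ma = ((-a[0]) % d, (-a[1]) % d)
        pa, pma = probs[a], probs[ma]
        r[a] = np.sqrt((pa + pma) / 2)                            # Eq. (38), paired
        delta = np.arccos(np.clip((pa - pma) / (pa + pma), -1, 1))  # |Delta_a| from Eq. (36)
        base = np.pi * a[0] * a[1] / d
        # Eq. (39): four phase candidates modulo 2*pi (both signs, n = 0, 1)
        phi[a] = sorted({(s * delta / 2 + base + (n + 0.5) * np.pi) % (2 * np.pi)
                         for s in (+1, -1) for n in (0, 1)})
    return r, phi
```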
Furthermore, we show in Appendix D that the classical Fisher information matrix \(\mathcal{I}_{d}\) is equal to \(\mathcal{F}_{d}\). Thus, our estimation procedure saturates the quantum Cramer-Rao inequality for close-to-the-identity unitary transformations. Let us note that within the approximation \(r_{m}/r_{0}\ll 1\) the non-diagonal terms in \(\mathcal{F}_{even}\) are \(O((r_{m}/r_{0})^{2})\), hence they can be neglected and consequently all the Fisher information matrices are nearly diagonal. In this way, in the case of odd dimension, the amplitudes can be estimated independently of each other and with equal precision. For even dimension, the amplitudes are also estimated independently; however, unpaired amplitudes are estimated with half the precision of paired amplitudes. Lastly, the precision in the estimation of the phases is severely restricted since it is proportional to the inverse of the square of the corresponding amplitude. ### Numerical simulations In this section we study the performance of our estimation procedure for the case of qubit gates without prior information. This is achieved by measuring the quantum state in Eq. (8) in three different bases. The resulting statistics completely characterizes the unknown unitary transformation, as shown in Appendix E. We simulate our estimation procedure with Qiskit [23], IBM's software development platform for quantum processors, and compare its performance against the built-in function for SQPT. We generate a set of 200 single-qubit unitary matrices, which are randomly drawn from a uniform Haar distribution. Each unitary transformation is reconstructed 1000 times using our estimation procedure and additionally SQPT, thus obtaining an average gate fidelity for each of them. This is repeated using increasing number of shots (or ensemble sizes) to simulate the measurement results. Finally, we also perform simulations considering various error sources (see Appendix F) and error mitigation. Figure 2 shows the simulation results for our estimation procedure (green solid dots) and SQPT (red solid dots). Figures (a) and (c) show mean and median gate fidelity, respectively, as functions of the number of shots obtained in absence of error sources, that is, when the operations required by the estimation procedures are carried out perfectly. Figures (b) and (d) show results considering full noise, i.e. errors affecting single qubit gates, conditional gates, thermal relaxation, and measurements. Insets illustrate the behavior of estimation procedures in the small number of shots regime. Shaded areas show standard deviation in Figs. (a) and (b) and interquartile range in Figs. (c) and (d). In the noiseless case, according to Figs. (a) and (c), both our estimation procedure and SQPT are characterized by almost indistinguishable mean and median average gate fidelity. In addition, in regime of a large number of shots, both estimation procedures exhibit extremely narrow standard deviation and interquartile range. In the small number of shots regime, our estimation procedure has a very rapidly narrowing standard deviation and interquartile range. Therefore, our estimation procedure and SQPT lead to an average gate fidelity that is independent of the unitary transformation. Figures (a) and (c) show that our estimation procedure achieves near-unit gate fidelity for ensemble sizes as small as \(2\times 10^{3}\) clearly outperforming SQPT. 
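For the qubit case with a known octant, the full estimation pipeline of Eqs. (13)-(23) can be emulated in a few lines of NumPy. The sketch below is a simplified, noiseless stand-in for the Qiskit simulations described above: it samples the ideal probabilities directly, takes the coefficient signs from the known octant, and scores the result with the standard average gate fidelity expression for unitary channels.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1]); Y = 1j * X @ Z

def u2(alpha, theta, phi):
    """Single-qubit unitary of Eq. (13)."""
    n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    return expm(-1j * alpha * (n[0] * X + n[1] * Y + n[2] * Z))

def estimate(alpha, theta, phi, shots, rng):
    """Sample Eqs. (19)-(22), invert Eq. (23), rebuild U via Eqs. (14)-(18).
    The octant of n (signs of cos(theta), cos(phi), sin(phi)) is assumed known a priori."""
    p = np.array([np.cos(alpha)**2,
                  np.sin(alpha)**2 * np.sin(theta)**2 * np.cos(phi)**2,
                  np.sin(alpha)**2 * np.sin(theta)**2 * np.sin(phi)**2,
                  np.sin(alpha)**2 * np.cos(theta)**2])          # p00, p10, p11, p01
    p = p / p.sum()
    f = rng.multinomial(shots, p) / shots                        # finite-shot frequencies
    a_hat = np.arccos(np.sqrt(f[0]))
    ct = np.sign(np.cos(theta)) * np.sqrt(f[3] / max(1 - f[0], 1e-12))
    cp = np.sign(np.cos(phi)) * np.sqrt(f[1] / max(f[1] + f[2], 1e-12))
    sp = np.sign(np.sin(phi)) * np.sqrt(1 - cp**2)
    st = np.sqrt(1 - ct**2)
    return (np.cos(a_hat) * I2
            - 1j * np.sin(a_hat) * st * cp * X
            + np.sin(a_hat) * st * sp * (X @ Z)
            - 1j * np.sin(a_hat) * ct * Z)

def avg_gate_fidelity(U, V, d=2):
    """Standard average gate fidelity for unitaries: (|Tr(U^dag V)|^2/d + 1)/(d + 1)."""
    return (np.abs(np.trace(U.conj().T @ V))**2 / d + 1) / (d + 1)

rng = np.random.default_rng(42)
alpha, theta, phi = 0.7, 1.1, 2.3
U = u2(alpha, theta, phi)
for shots in (100, 1000, 10000):
    F = np.mean([avg_gate_fidelity(U, estimate(alpha, theta, phi, shots, rng))
                 for _ in range(200)])
    print(shots, round(F, 5))      # fidelity approaches 1 as the number of shots grows
```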
The presence of noise affects our estimation procedure, decreasing the mean and median average gate fidelity with respect to their noiseless values, as exhibited in Figs. (b) and (d). The mean average gate fidelity of both estimation procedures becomes very similar, while the median gate fidelity of our estimation procedure is above that of SQPT, although the standard deviation and interquartile range of our estimation procedure become wider. In Fig. 3 we study the impact of different error sources on our estimation procedure and compare it to SQPT. Figures (a) and (b) show the mean and median average gate fidelity, respectively, for the noiseless case (solid green dots), noisy control-not gate (solid red pentagons), full noise with ideal control-not gate (solid blue diamonds) and full noise (solid pink squares) for our estimation procedure, and SQPT with full noise (black crosses). All simulations consider error mitigation. The noise models and values used correspond to the ibmq_quito processor, and are provided in Appendix F. Insets depict the small ensemble regime. As expected, Fig. 3 shows that the full noise model leads to the biggest decrease in the mean and median average gate fidelity. In the small ensemble regime, the estimation considering noisy control-not gates leads to a better mean and median gate fidelity than the estimation considering other error sources. However, as the ensemble size increases, the estimation procedure is clearly more affected by noisy control-not gates. Also, in this regime the mean and median average gate fidelity become constant, and increasing the ensemble size has no further impact on the estimation accuracy. ## III Summary In this work, we propose a procedure for estimating \(d\)-dimensional unitary gates. Our circuit transcribes the coefficients of the gate in the Weyl-Heisenberg basis into probability amplitudes of two control qudits. For qubit gates whose Bloch vector points to a known octant, we show that our procedure saturates the quantum Cramer-Rao bound. In this case, the quantum Fisher information matrix is equivalent to the one derived in related works [20; 21; 22]. We extended the analysis to higher dimensions and proved analytically that our procedure is optimal for unitary gates close to the identity in any finite dimension. In Ref. [20] it was shown that the quantum Cramer-Rao bound can be achieved for unitary transformations close to the identity, but no explicit protocol was proposed; our procedure accomplishes this task. In addition, we show that our estimation procedure is able to estimate any unitary transformation on qubit systems without requiring a priori information. Unitarity of the estimated operator is guaranteed by construction. Numerical simulations show that our estimation procedure outperforms SQPT in a noiseless scenario for every size of the ensemble and also in noisy scenarios with a small ensemble. Furthermore, considering a noisy scenario with ideal control-not gates, our procedure still outperforms SQPT for any ensemble size. This work can be naturally extended to quantum channels [27].

Figure 2: Plots (a) and (b) show the mean and plots (c) and (d) show the median gate fidelities as functions of the number of shots for both our estimation procedure (solid green dots) and SQPT (solid red dots). The left and right plots represent the noiseless and noisy cases, respectively. Shaded areas represent standard deviation or interquartile range. The insets illustrate the low number of shots regime. 
## Acknowledgements This work was supported by ANID grants No. 1200266, No. 1231940, No. 1230586, No. 3230427, No. 3230407, and ANID - Millennium Science Initiative Program - ICN17\({}_{-}\)012. JEM was supported by ANID-Subdireccion de Capital Humano Avanzado/Doctorado Nacional/2021-21211347.
2305.01335
An Enigmatic 380 kpc Long Linear Collimated Galactic Tail
We present an intriguing, serendipitously-detected system consisting of an S0/a galaxy, which we refer to as the "Kite", and a highly-collimated tail of gas and stars that extends over 380 kpc and contains pockets of star formation. In its length, narrowness, and linearity the Kite's tail is an extreme example relative to known tails. The Kite (PGC 1000273) has a companion galaxy, Mrk 0926 (PGC 070409), which together comprise a binary galaxy system in which both galaxies host active galactic nuclei. Despite this system having previously been searched for signs of tidal interactions, the tail had not been discovered prior to our identification as part of the validation process of the SMUDGes survey for low surface brightness galaxies. We confirm the kinematic association between various H$\alpha$ knots along the tail, a small galaxy, and the Kite galaxy using optical spectroscopy obtained with the Magellan telescope and measure a velocity gradient along the tail. The Kite's tail shares characteristics both with tails formed via ram pressure stripping ("jellyfish" galaxies) and with those formed via tidal interactions. However, both scenarios face significant challenges that we discuss, leaving open the question of how such an extreme tail formed. We propose that the tail resulted from a three-body interaction from which the lowest-mass galaxy was ejected at high velocity.
Dennis Zaritsky, Jacob P. Crossett, Yara L. Jaffé, Richard Donnerstein, Ananthan Karunakaran, Donghyeon J. Khim, Ana C. C. Lourenço, Kristine Spekkens, Ming Sun, Benedetta Vulcani
2023-05-02T11:28:40Z
http://arxiv.org/abs/2305.01335v1
# An Enigmatic 380 kpc Long Linear Collimated Galactic Tail ###### Abstract We present an intriguing, serendipitously-detected system consisting of an S0/a galaxy, which we refer to as the "Kite", and a highly-collimated tail of gas and stars that extends over 380 kpc and contains pockets of star formation. In its length, narrowness, and linearity the Kite's tail is an extreme example relative to known tails. The Kite (PGC 1000273) has a companion galaxy, Mrk 0926 (PGC 070409), which together comprise a binary galaxy system in which both galaxies host active galactic nuclei. Despite this system having previously been searched for signs of tidal interactions, the tail had not been discovered prior to our identification as part of the validation process of the SMUDGes survey for low surface brightness galaxies. We confirm the kinematic association between various H\(\alpha\) knots along the tail, a small galaxy, and the Kite galaxy using optical spectroscopy obtained with the Magellan telescope and measure a velocity gradient along the tail. The Kite's tail shares characteristics both with tails formed via ram pressure stripping ("jellyfish" galaxies) and with those formed via tidal interactions. However, both scenarios face significant challenges that we discuss, leaving open the question of how such an extreme tail formed. We propose that the tail resulted from a three-body interaction from which the lowest-mass galaxy was ejected at high velocity. keywords: galaxies: kinematics and dynamics -- galaxies: structure -- galaxies: formation -- galaxies: dwarf ## 1 Introduction Galactic tails, or more broadly galactic detritus, may be a signature of a process or event acting to transform galaxies. As such, their discovery and characterisation help us unravel how galaxies evolve. The classic example of such an analysis is that of galactic tidal features (Toomre and Toomre, 1972; Toomre, 1977), which triggered a revolution in our understanding of the role of interactions and mergers of galaxies (cf., Schweizer, 1986; Barnes and Hernquist, 1992a). Mergers and accretion are now central to the accepted hierarchical formation paradigm (e.g., Davis et al., 1985) and individual systems at various stages along a merger sequence have been identified (Hibbard and van Gorkom, 1996). Gravity is not the only force that can extract matter from a galaxy. For example, galaxies moving through environments with a sufficiently high ambient density (such as the intracluster medium of massive clusters) can lose gas due to ram pressure stripping (Gunn and Gott, 1972). Among the most convincing examples of ram-pressure stripping at play are HI observations of cluster galaxies displaying gaseous tails and undisturbed stellar disks (Haynes et al., 1984; Cayatte et al., 1990; Chung et al., 2009; Jaffe et al., 2015) and the "jellyfish" galaxies seen in clusters (Kenney and Koopmann, 1999; Smith et al., 2010; Ebeling et al., 2014; Fumagalli et al., 2014; Poggianti et al., 2016, 2017; McPartland et al., 2016; Jaffe et al., 2018) and groups (Vulcani et al., 2021; Kolcu et al., 2022). It is expected that such hydrodynamic interactions, as well as other environmental effects, are responsible for the increased fraction of quenched galaxies in dense environments (e.g. Dressler et al., 1987; Peng et al., 2010). Extreme examples of galactic detritus garner attention because they challenge our quantitative understanding of these phenomena. 
For example, the recent discovery of a 250 kpc-long HI tail in the outskirts of the galaxy cluster Abell 1367 has proven difficult to explain in any scenario (Scott et al., 2022). Sometimes, a particular example elicits bold new suggestions, such as that invoking a runaway supermassive black hole (van Dokkum et al., 2023). We report the discovery of an extraordinary tail with a variety of interesting features. First, with a projected length of 380 kpc (7 arcmin), it is the longest optical tail of which we are aware. This length is \(>6\) times that of the longest jellyfish tails seen in H\(\alpha\) (e.g., D100 and JO206, Yagi et al., 2016; Poggianti et al., 2017) and longer than most traced by HI emission (e.g., JO206 and FGC 1287; Ramatsoku et al., 2019; Scott et al., 2022), with the exception of a \(\sim 500\) kpc long, amorphous feature trailing a Virgo cluster galaxy pair (Koopmann et al., 2008). The only even longer galactic structures are some radio-detected, head-tail systems that can reach lengths \(>600\) kpc (Vallee & Roger, 1987) but which are clearly associated with relativistic electron jets interacting with the intracluster medium. Second, it is one-sided, emanating along the disk plane of its apparent host galaxy. Third, it is highly collimated, with a length-to-width ratio of \(\sim 40\). Fourth, it is exceedingly close to linear in projection. Fifth, it originates from an S0/a galaxy, which we would expect to be gas-poor, yet the tail is sufficiently gas-rich to support star formation along its length. Sixth, it lies in a low-density galactic environment with no cluster or group nearby, but is in a close binary galaxy system where both galaxies have active galactic nuclei. Aside from inspiring questions regarding their origin and the impact of such phenomena on galactic evolution, tails and other galactic detritus are used to address a range of unrelated topics, including the nature of the dark matter potential (Dubinski et al., 1999; Springel & White, 1999), the formation of dark matter-free, tidal dwarf galaxies (Mirabel et al., 1992; Duc & Mirabel, 1994; Barnes & Hernquist, 1992; Hunsberger et al., 1996; Elmegreen et al., 1993), the nature of star formation in an environment different from that typically found within galaxies (de Grijs et al., 2003; Knierman et al., 2003; Boquien et al., 2009; Giunchi et al., 2023), and the character of the circumgalactic and intergalactic environment (Sun et al., 2007; Tonnesen & Bryan, 2010; Fossati et al., 2016; Vulcani et al., 2019; Sun et al., 2021). It stands to reason that extreme cases provide novel constraints for all of these topics to exploit. The discovery and initial characterisation of the galaxy that we have named the "Kite" on the basis of its morphology are described in Section 2. Because of the possibility of chance superpositions of features enhancing the tail's appearance, length, or coherence, and to search for signs of current star formation, we obtained optical spectroscopy along the tail. The spectroscopic observations and results are presented in Section 3. We present a brief discussion of various aspects of this system in Section 4, highlighting areas that place the strongest constraints on possible formation scenarios and where subsequent investigation is warranted. We adopt a standard \(\Lambda\)CDM cosmology (flat, \(\Omega_{m}=0.282\), H\({}_{0}=69.7\) km s\({}^{-1}\) Mpc\({}^{-1}\); Hinshaw et al., 2013) when needed. 
## 2 Discovery and Characterisation We serendipitously discovered the Kite during a search for low surface brightness (LSB) galaxies in the Legacy Survey (Dey et al., 2019) images (Systematically Measuring Ultra-Diffuse Galaxies (SMUDGes); Zaritsky et al., 2019, 2021, 2022, 2023). The list of candidate LSB galaxies returned by the search algorithm is contaminated by a number of artificial and physical sources that are not the intended targets. Briefly, SMUDGes processes the images by removing or replacing high surface brightness sources, filtering the residual images for angularly-large sources, proceeding through a variety of selection steps to winnow the number of LSB detections, and ultimately producing a list of high-confidence ultra-diffuse galaxy (UDG) candidates. The final step in the vetting process includes a visual examination of the candidate list. One of those objects is the subject of this study. Upon visual inspection, we identified the candidate located at \((\alpha,\delta)=(346.2203^{\circ},-8.8093^{\circ})\), which is object i in Figure 1, to be near the end of a linear sequence of low surface brightness features leading back to PGC 1000273 (346.183294\({}^{\circ},-8.703178^{\circ}\)), an edge-on S0/a galaxy (Nair & Abraham, 2010) with \(cz=13813\pm 25\) km s\({}^{-1}\) in the CMB frame and M\({}_{i}=-21.41\) AB mag (recessional velocity and magnitude from NED\({}^{1}\)) that we have named the "Kite" galaxy given its single-sided tail morphology (Figure 1). It has a companion galaxy, projected only 57 kpc away at a comparable redshift (\(z=0.0470\)), that is an S0/O (Mrk 0926, or alternately PGC 070409). This pair had attracted previous attention because the Kite galaxy also shows signs of nuclear activity (identified spectroscopically as "composite" by Liu et al., 2011, in a study of binary active galaxies), although neither Liu et al. (2011) nor a subsequent study (Weston et al., 2017) identified clear signs of an interaction between the galaxies. Footnote 1: The NASA/IPAC Extragalactic Database (NED) is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.

Figure 1: The Kite galaxy and its companion, Mrk 0926. Features along the tail are labelled a-j as shown. The candidate disrupted galaxy, Kite A, is also labelled. Image obtained from the Legacy Survey public archive and is in the \(g\)-band. Bar at bottom right shows the length of 1 arcmin or the equivalent physical length at the distance of the Kite, 54 kpc.

The tail structure comprises both well-defined knots of emission and diffuse emission strewn nearly linearly out to a projected separation from the Kite of 7 arcmin (420.0 arcsec from feature j in Figure 1 to the center of the Kite galaxy). This separation corresponds to 380 kpc at the galaxy's adopted distance (187 Mpc). The width of the tail is more difficult to constrain precisely although it is comparable to the width of the Kite galaxy itself, which is \(\sim\) 10 arcsec. This suggests a length-to-width ratio for the tail of \(\sim\) 40. The position angles on the sky between the Kite galaxy and the various letter-labelled features along the tail are presented in Table 1 and illustrate the near-linearity of the tail. Features b and c are somewhat less consistent with a linear distribution, but the dispersion in position angle even including these two is only 2.5\({}^{\circ}\) (it is 0.6\({}^{\circ}\) without those two features). Finally, a small galaxy, Kite A, is also projected onto the tail. 
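The quoted physical scales follow directly from the adopted cosmology and the recessional velocity of the Kite; a small astropy cross-check (illustrative only, using the CMB-frame \(cz\) given above):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in Section 1 (flat, Om0 = 0.282, H0 = 69.7; Hinshaw et al. 2013)
cosmo = FlatLambdaCDM(H0=69.7, Om0=0.282)
z = 13813 / 299792.458                        # CMB-frame cz of the Kite (this section)

scale = cosmo.kpc_proper_per_arcmin(z)
print(scale)                                  # roughly 54 kpc per arcmin (Figure 1 scale bar)
print((scale * 7 * u.arcmin).to(u.kpc))       # roughly 380 kpc for the 7 arcmin tail
print(cosmo.angular_diameter_distance(z))     # roughly 187 Mpc, the adopted distance
```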
Some of the features identified in Figure 1 also have NUV emission (1771 - 2831 A) that is detectable in the _GALEX_(Martin et al., 2005) Medium Imaging Survey (Bianchi et al., 2014) images (see Table 1). We illustrate this finding of knots near the end of the tail in Figure 2. A check mark in the NUV column of Table 1 indicates that we visually identified emission in the _GALEX_ image corresponding to the location of the optically-identified associated feature. NUV emission typically indicates the presence of relatively young (\(\lesssim\) few 100 Myr) stellar populations (e.g. Bianchi, 2011). The presence of such young stars helps differentiate these knots from other features in the image and aids in confirming that there are no similar LSB structures beyond feature j of the tail, on either side of what we have defined to be the tail, or on the opposite side of the Kite galaxy. The local environment of the Kite and Mrk 0926 is not one conducive to ram pressure stripping. The Kite and Mrk 0926 form a triplet system with the galaxy SDSS J230439.88-083712.6 (Yang et al., 2007). This third galaxy is 2.3 magnitudes fainter than the Kite galaxy and \(\sim\) 300 kpc away, with no visible signs of debris or interaction with the main pair. Given its distance, morphology, and lower mass, this galaxy is unlikely to be related to this feature. Additionally, there are no other known clusters or groups around this system within 3 Mpc and 4000 km s\({}^{-1}\)(Yang et al., 2007; Szabo et al., 2011; Wen et al., 2012). In fact, there are only three additional galaxies listed in SDSS DR7 within 1 Mpc of the Kite galaxy. With such a low local density, it unlikely that there is a sufficiently dense medium to produce the observed tail via ram pressure stripping. ## 3 Spectroscopic follow-up ### Tail Features We observed most of the features labelled in Figure 1 using the Baade Magellan telescope and the IMACS spectrograph (Dressler et al., 2011) with the f/4 camera on the nights of 25 and 26 September 2022 in the long slit mode. We selected the 1200 line grating, for a spectral dispersion of 0.2A pix\({}^{-1}\) (we then binned 2\(\times\)2 pixels) and a wavelength coverage of 6000 to 7600A, although here we focus solely on the spectral region around H\(\alpha\). We generally obtained three 15 min exposures, although in two slit positions we were limited to a single 15 minute exposure by observing conditions, which were generally variable. We reduced the data in a standard manner, extracted 1-D spectra, visually examined the spectra for emission and absorption lines, and measured the observed wavelength of the lines using a Gaussian line fitter. We present heliocentric measurements of \(cz\) in Table 1 and use spectra with multiple emission lines to estimate that the internal recessional velocity uncertainty is 14 km s\({}^{-1}\). The comparison with the SDSS velocity for the Kite galaxy itself suggests a possible systematic error as large as \(\sim\) 190 km s\({}^{-1}\), but perhaps the SDSS value includes the emission line velocity we measure, which is much closer to the SDSS value. Regardless, when comparing our measured kinematics along the tail, the internal uncertainties are the ones of interest. In all cases where we are able to measure a redshift for the targeted feature, that redshift measurement is consistent with the feature being at the distance of the Kite galaxy. 
We typically find H\(\alpha\) emission in the spectrum of each targeted feature, especially if we have the full 45 minutes of exposure time. However, because we employed offset pointing based on broad band optical images, a lack of H\(\alpha\) emission for any target (denoted by the cross in Table 1) does not necessarily imply a lack of H\(\alpha\) flux in that feature. In two features (a and e), we find measurable [N II] and [S II] in addition to H\(\alpha\). These are weaker than H\(\alpha\) and we do not incorporate them into the redshift measurement. In no other target than the Kite galaxy itself and Kite A do we detect enough continuum to attempt an absorption line measurement. With the exception of feature b, the measured recessional velocities show a gradual, consistent decline from the velocity of the Kite galaxy toward the tip of the tail with a total amplitude of slightly over 150 km s\({}^{-1}\). The velocity gradient demonstrates that the tail is unlikely to be oriented exactly on the plane of the sky. Hence, it is likely longer than 380 kpc. In Figure 3 we highlight the kinematic behaviour of the tail near its origin (Feature a). We find that the ionised gas is kinematically offset from the stars in the galaxy, as traced by the stellar absorption features, even at small radii. The emission is also clearly asymmetric relative to the Kite even at this level of spatial resolution, with no convincing signature of a hidden second tail with different velocity. ### Kite A Our spectrum of Kite A shows an asymmetric broad absorption line corresponding to H\(\alpha\) in the observed frame of the Kite. Because of the asymmetry our determination of the line centroid is uncertain at the level of \(\sim\) 100 km s\({}^{-1}\). As such we can confidently place Kite A in the Kite environment, but cannot with similar confidence place it in the tail. Nevertheless, it is likely that Kite A is in the tail given its precise projection on the tail and reasonably close redshift. Our best estimate of the redshift (Table 1) places it along the sequence of velocities for the other features. We speculate that the asymmetric absorption line arises because Kite A is rotating and the slit did not evenly sample both sides of the galaxy. Figure 2: Optical and NUV features toward the end of the Kite tail. Left panel shows a zoom-in view using the \(g\)-band image from Figure 1 with features labelled. Right panel shows the same area drawn from a _GALEX_ NUV image from the Medium Imaging Survey. ## 4 Discussion A natural initial interpretation of the Kite, given its one-sided, linear tail is that it is the result of ram pressure stripping (RPS). There is at least one likely RPS feature that is nearly as long (250 kpc; Scott et al. 2022) although those authors question that feature's RPS origin. Another RPS tail (D100; Yagi et al. 2007) is only 60 kpc long but shares many morphological similarities with the Kite, and yet a third is both long and appears in an edge-on galaxy (GMP 2640; Smith et al. 2010; Grishin et al. 2021). D100 has a length-to-width ratio of 30, is linear, and has star formation along its length (Cramer et al. 2019), although it is more of a polar than a planar feature relative to its host galaxy. Yagi et al. (2016) struggled to explain the origin of D100, invoking possible confinement by the ambient intracluster medium to maintain the high degree of collimation. Subsequent work (Jachym et al. 2017; Cramer et al. 
2019) advanced the RPS interpretation and invoked an array of possible mechanisms to maintain the tail's narrowness (e.g., Tonnesen & Bryan 2010b; Roodiger & Bruggen 2008). However, unlike D100 or GMP 2640, the Kite is not in a dense environment. Hydrodynamic interactions in lower density environments also occur but show much more subtle features than that of the Kite (Vulcani et al. 2021b). We conclude that something other than RPS is responsible for the Kite's tail. What does this mean for the inferred origin of the Kite galaxy with its long narrow tail, and possibly for similar systems like D100 that are attributed to RPS but face difficulties when modelled in detail? If we abandon the RPS interpretation for the Kite, then the next most natural interpretation is that of tidal forces. The tail material could then come either from the Kite galaxy itself, as tidal material presumably drawn out by a close passage with Mrk 0926 or a third galaxy (presumably Kite A), or from a tidally shredded low mass satellite, whose remaining core may be Kite A. Because tidal tails tend to follow the elliptical nature of the original orbit, the projected linear nature of the tail suggests that in either scenario we are viewing the system at a particularly fortuitous orientation where the interaction happened on a plane perpendicular to the plane of the sky. This requirement may be somewhat more plausible in the Kite-Mrk 0926 interaction scenario because we at least know that we viewing the Kite galaxy itself nearly edge-on. The scenario where the tail is the result of the near destruction of a satellite galaxy presents an avenue for solving the difficulty in finding sufficient gas in an S0/a galaxy to form a star-forming tail and the lack of obvious morphological disturbance in either the Kite galaxy or Mrk 0926. However, it faces challenges of its own. First, it requires the additional condition that the satellite orbit lies in (nearly) the same plane as the Kite's disk. Second, distributing material from roughly \(r=0\) to at least 380 kpc requires a large impulse difference, \(\Delta E\), among the dwarf galaxy constituents that then implies a nearly direct collision with the Kite and a gravitational potential for the Kite that is sufficiently centrally concentrated that small differences in impact parameter translate to large \(\Delta E\). Even so, much of the material ejected with the largest velocities (that currently at the tip of the tail) must be gaseous to power the ongoing star formation and would need to find its way through the disk plane of the galaxy. A candidate for the surviving core of this galaxy is Kite A, located near Feature d at (346.2061\({}^{\circ}\), \(-8.7710^{\circ}\)). Accepting geometrical coincidences, both scenarios face additional hurdles. A close interaction between the Kite and Mrk 0926, which would be required to generate a long tidal tail, would also tend to form a bridge between the galaxies and disturb the morphology of the interacting galaxies (Toomre 1977; Barnes & Hernquist 1992a), neither of which is observed. Additionally, the interaction typically generates a second long tail from the other galaxy, but none is observed to emanate from Mrk 0926. The news is not all negative for this scenario, as giant tidal tails are often found to have star formation near their ends (Mirabel et al. 1991, 1992). 
\begin{table} \begin{tabular}{l r r r r r r} \hline Feature & \(\alpha\) & \(\delta\) & PA\({}^{a}\) & NUV & H\(\alpha\) & \(cz\)\({}^{b}\) \\ & (\({}^{\circ}\)J2000) & (\({}^{\circ}\)J2000) & (\({}^{\circ}\)) & & & (km s\({}^{-1}\)) \\ \hline Kite\({}^{c}\) & 346.1833 & \(-8.7032\) &... & \(\times\) & \(\times\) & 13991 \\ a\({}^{d}\) & 346.1871 & \(-8.7157\) & 163.3 & \(\times\) & \(\checkmark\) & 14158 \\ b & 346.1885 & \(-8.7307\) & 169.4 & \(\checkmark\) & \(\checkmark\) & 14328 \\ c & 346.1954 & \(-8.7510\) & 166.0 & \(\checkmark\) &... &... \\ Kite A & 346.2062 & \(-8.7709\) & 162.0 & \(\checkmark\) & \(\times\) & \(\sim\) 14160 \\ d & 346.2075 & \(-8.7753\) & 161.6 & \(\checkmark\) & \(\checkmark\) & 14109 \\ e\({}^{e}\) & 346.2116 & \(-8.7860\) & 161.3 & \(\checkmark\) & \(\checkmark\) & 14058 \\ f & 346.2151 & \(-8.7982\) & 161.7 & \(\checkmark\) & \(\times\) &... \\ g & 346.2160 & \(-8.8009\) & 161.7 & \(\checkmark\) &... &... \\ h & 346.2166 & \(-8.8055\) & 162.2 & \(\checkmark\) & \(\checkmark\) & 13989 \\ i & 346.2201 & \(-8.8090\) & 161.0 & \(\checkmark\) & \(\times\) &... \\ j & 346.2196 & \(-8.8142\) & 162.1 & \(\checkmark\) &... &... \\ \hline \end{tabular} \end{table} Table 1: Spectroscopic Follow-up Results

Figure 3: Emission and absorption features for the Kite and the nearby Feature a (from Figure 1). The bottom panel shows a Gaussian-smoothed zoom-in on the Kite 2D spectrum. Spectral direction along the x axis and spatial along the y axis. Absorption and emission features labelled. The absorption affects the continuum symmetrically, while the emission is only present below the continuum spectrum. The H\(\alpha\) emission is offset to larger \(\lambda\) relative to the absorption. The shift corresponds to 160 km s\({}^{-1}\). The H\(\alpha\) emission extends 12 arcsec below the continuum, which corresponds to 11 kpc. The upper panel shows extracted 1D spectra centred on the continuum (dotted line) and for the region below the continuum where H\(\alpha\) emission is visible (solid line). We have normalised the continua of the two spectra to simplify comparison. H\(\alpha\) absorption is seen in both, although deeper in the continuum spectrum, while H\(\alpha\) and [N II] emission are clear in the off-continuum spectrum. Flux is in arbitrary linear units.

Regardless of what forces created the tail, there are some general puzzles that the tail poses. Consider that an age for the tail, \(t\), can be estimated by dividing the length of the tail by the transverse velocity at which the tip of the tail receded from the Kite galaxy, \(v_{t}\). By doing so we estimate \(t=371/v_{t}\) Gyr, where \(v_{t}\) is in units of km s\({}^{-1}\). We do not know \(v_{t}\), but for typical values of galaxy internal velocities or pairwise velocity differences (Davis & Peebles, 1983) this implies lifetimes of one to several Gyr. A measurement of the star formation history of the tail would be one test of this age estimate, but we cannot do this for the emission line regions with our current data. Given Kite A's plausible association with the tail, we attempt to use its spectral energy distribution (SED) to place constraints on the most recent star formation, which we might plausibly connect to the age of the tail. We measure aperture magnitudes, where the circular aperture is defined to include the W1 flux and is 10.5 arcsec in radius. Our measurements for the FUV, NUV, \(g\), \(r\), \(z\), W1, and W2 AB magnitudes are 21.7, 21.3, 18.9, 18.1, 17.5, 17.6, and 17.9, respectively. 
A value of FUV-NUV = 0.4 mag indicates the presence of a young (\(\lesssim 300\) Myr) stellar population and is nearly a dust-free indicator (E\({}_{FUV-NUV}\) = 0.11 E\({}_{B-V}\); Bianchi, 2011). We aimed for a more robust determination using PROSPECTOR (Johnson et al., 2021) to fit stellar population models to the SED, but found that the uncertainties in age, once a range of plausible star formation histories is allowed, is too large to provide a meaningful constraint on our hypothesis. Adopting an age for the tail of \(\gtrsim 1\) Gyr leads to several apparent problems. First, star formation, as inferred from the H\(\alpha\) detections, is occurring at various locations along the tail and must be producing at least some high mass stars to produce the necessary ionising radiation. Note that H\(\alpha\) flux alone in a tail is not sufficient to indicate star formation (Boselli et al., 2016). However, in our case the knot morphology and the ubiquitous NUV flux, indicative of young stars, suggests that here the H\(\alpha\) is indeed related to ongoing star formation. This star formation rate must either be maintained over the \(\gtrsim\) Gyr lifetime, or something must have triggered star formation at nearly 400 kpc from the galaxy in the last few million years. This may not be a insurmountable challenge because, as we mentioned before, such star formation is often observed in giant tidal tails (Mirabel et al., 1991, 1992) and clumping of matter along the tails is reproduced in simulations (Barnes & Hernquist, 1992; Elmegreen et al., 1993). Similarly, star formation is often observed in jellyfish galaxies (Vulcani et al., 2018; Poggianti et al., 2019). Second, the dispersion in position angles among the identified features within the Kite tail, excluding features b and c which appear slightly offset from the low surface brightness continuous tail, is 0.6\({}^{\circ}\). At 380 kpc this offset corresponds to 4 kpc. For a \(\gtrsim\)Gyr lifetime this offset implies that velocities perpendicular to the tail but on the plane of the sky can not differ by more than \(\sim\)4 km s\({}^{-1}\), which is smaller than the typical velocity dispersion in galaxies with substantial gas reservoirs. This suggests either that there is a mechanism acting to actively maintain the collimation, as invoked by Yagi et al. (2016) and Jachym et al. (2017) for D100, or the tail is much younger than we estimate, which in turn implies that the transverse velocity is much larger than the typical internal velocities in galaxies. The latter might lead one to consider more exotic models such as that of a runaway massive black hole (van Dokkum et al., 2023), although we note that the black hole model would face its own challenges here. Consider that any circumgalactic gas parcels that it may have influenced to form stars would have a velocity consistent with the halo velocity dispersion (\(\sim 100\) km s\({}^{-1}\)) rather than with the internal velocity dispersion of a putative satellite galaxy. Such a large velocity would cause the star formation clumps to drift away from the tail axis at a rate of 125 kpc/Gyr. For us to find the clumps within \(\sim\)10 kpc of the tail axis along the entire tail requires that all of these clumps formed within the last \(\sim 100\) Myr, which in turn implies a black hole speed of nearly 4000 km s\({}^{-1}\) if it is to reach a distance of 380 kpc. 
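The order-of-magnitude figures used in this discussion follow from straightforward unit conversions; a brief, illustrative check using astropy units:

```python
import astropy.units as u

tail_length = 380 * u.kpc

# Tail age for a given transverse velocity: t = L / v_t  (~371 / v_t Gyr for v_t in km/s)
print((tail_length / (1 * u.km / u.s)).to(u.Gyr))        # ~371 Gyr * (km/s / v_t)

# A 4 kpc offset accumulated over >= 1 Gyr limits perpendicular velocities to ~4 km/s
print((4 * u.kpc / (1 * u.Gyr)).to(u.km / u.s))          # ~3.9 km/s

# Reaching 380 kpc within ~100 Myr requires a speed of nearly 4000 km/s
print((tail_length / (100 * u.Myr)).to(u.km / u.s))      # ~3700 km/s
```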
As large as this value seems, it is not beyond the realm of possibility (Campanelli et al., 2007; Healy & Lousto, 2022), although statistically unlikely (Schnittman & Buonanno, 2007). Finally, the linearity of the feature is also a challenge considering the likely asymmetric nature of the gravitational potential at large radii. Torques on the matter in the tail would seem likely to produce a bend in the tail over a Gyr timescale even if the original geometry allowed for projection to create the illusion of linearity. Again, a much shorter tail lifetime would alleviate such concerns. ### Our Proposal Given the various reasons to favour a short lifetime for the tail and the presence of Kite A in the tail, we propose a gravitational origin for the tail arising from the ejection of Kite A from the Kite-Mrk 0926 system. A hyperbolic orbit, where Kite A has a velocity significantly larger than the escape speed, not only addresses issues related to the lifetime of the feature but also relaxes the orientation constraints because Kite A and its associated detritus would be travelling on near linear trajectories in 3-D. There is still the coincidental alignment of the orbital plane with the Kite's disk plane but perhaps that is advantageous in realising an interaction among the three galaxies that allows Kite A to be ejected. For a time since pericenter passage of 300 Myr, consistent with the youngest stellar population in Kite A, Kite A would need to have a mean transverse velocity of 460 km s\({}^{-1}\) to reach its current projected position, which is almost certainly larger than the escape speed at its current position. The tidal material would both lead and trail Kite A, resulting in a nearly linear feature with Kite A in the middle. Finally, a close interaction, which is needed to provide a sufficient kick to Kite A, might also be responsible for fuelling the central black holes in the Kite and Mrk 0926 (Liu et al., 2011; Weston et al., 2017). ### Other Contexts We close the discussion by commenting on the implications of the existence of a system such as the Kite for other interesting systems. We identified this system initially because Feature i was flagged as a potential UDG. In any of the formation scenarios envisioned, clumps along the tail are expected to be dark matter free. If such clumps are able to remain gravitationally bound and survive, they will contribute to a dark matter free galaxy population (Bennet et al., 2018; van Dokkum et al., 2018). In fact, van Dokkum et al. (2022) note that their two dark matter free galaxies are part of a large linear sequence of galaxies. A potential difference between those two galaxies and the features within the Kite tail is that the former host globular clusters. We do not yet know if any of the Kite features host globular clusters. A second interesting class of system is that of the diffuse star-forming isolated stellar systems found in the Virgo cluster (Jones et al., 2022), which might also exist elsewhere. Those authors note that these are at least 140 kpc from any nearby potential parent and are young. They have relatively high metallicities, suggesting their gas comes from more massive galaxies, which they interpret to mean a likely RPS origin. However, if the origin of the gas is as proposed here, from a galaxy like Kite A, then one would expect to measure a relatively high metallicity. ## 5 Summary We present the discovery of an extraordinary tail emanating from what we have dubbed the Kite galaxy. 
The tail is unusual in its physical length (a projected length of 380 kpc), its collimation (it has a length to width ratio of 40), and its linearity (all of the detected knots along the tail scatter in position angle by less than 3\({}^{\circ}\)). It is oriented parallel to the disk of the Kite galaxy. The Kite galaxy and its nearby companion, Mrk 0926, are both active galaxies, with Mrk 0926 being by far the more active. We present results from spectroscopy at various points along the tail. There is recent and ongoing star formation along the tail, as evidenced by NUV and H\(\alpha\) flux. The velocities show a moderate velocity gradient along the tail and demonstrate that the various knots are physically associated with the tail. We identify a galaxy, Kite A, along the tail that has a velocity that is consistent with it lying in the tail. This galaxy has UV emission that indicates the presence of young stars but does not show evidence for ongoing star formation. The two most commonly invoked origin scenarios for tail features, ram pressure or tidal stripping, face significant challenges that we discuss. Of the two, we have a preference for a tidal origin, but acknowledge the difficulties in making such a model work. Some of the difficulties are mitigated if the age of the tail is quite short, but this supposition leads to more exotic formation models. We propose that a three-body encounter between the Kite, Mrk 0926, and Kite A resulted in the rapid ejection of Kite A. The resulting hyperbolic orbit explains the linearity of the debris field and the tail's narrowness. Detailed simulations are necessary to assess the viability of this proposal. We briefly describe how such events may also help explain other puzzling observations. The Kite system is a record-breaking, enigmatic source that presents a variety of interesting problems to resolve. ## Acknowledgments DZ acknowledges financial support from AST-2006785 and thanks both the Astronomy Department at Columbia University for their gracious welcome during his sabbatical and Greg Bryan for discussions about this object. JPC acknowledges financial support from ANID through FONDECYT Postdoctorado Project 3210709. YLJ acknowledges financial support from ANID BASAL project No. FB210003. MS acknowledges support from the NASA grant 80NSSC22K0353, and the USRA award 9_0221, under NASA contract NNA17BF53C. BV acknowledges support from the NAR Mini Grant 2022 "Tracing filaments through cosmic time" (PI Vulcani). ACCL acknowledges the financial support of the National Agency for Research and Development (ANID) / Scholarship Program / DOCTORADO BECAS CHILE/2019-21190049. KS acknowledges support from the Natural Sciences and Engineering Research Council of Canada. This research has made use of the NASA/IPAC Extragalactic Database, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. ## Data Availability The optical imaging presented here is publicly available through the Legacy Survey ([https://datalab.noirlab.edu/ls/dataAccess.php](https://datalab.noirlab.edu/ls/dataAccess.php)) while the NUV data is available through the Mikulski Archive for Space Telescopes (MAST; [https://science.nasa.gov/astrophysics/astrophysics-data-centers/multimission-archive-at-stsci-mast](https://science.nasa.gov/astrophysics/astrophysics-data-centers/multimission-archive-at-stsci-mast)). 
We present the results from Magellan spectroscopy in Table 1, and the 2-D spectra will be shared upon reasonable request to the corresponding author.
2306.03072
Explore to Generalize in Zero-Shot RL
We study zero-shot generalization in reinforcement learning-optimizing a policy on a set of training tasks to perform well on a similar but unseen test task. To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail. Our insight is that learning a policy that effectively $\textit{explores}$ the domain is harder to memorize than a policy that maximizes reward for a specific task, and therefore we expect such learned behavior to generalize well; we indeed demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our $\textit{Explore to Generalize}$ algorithm (ExpGen) builds on this insight: we train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which generalize well and drive us to a novel part of the state space, where the ensemble may potentially agree again. We show that our approach is the state-of-the-art on tasks of the ProcGen challenge that have thus far eluded effective generalization, yielding a success rate of $83\%$ on the Maze task and $74\%$ on Heist with $200$ training levels. ExpGen can also be combined with an invariance based approach to gain the best of both worlds, setting new state-of-the-art results on ProcGen.
Ev Zisselman, Itai Lavie, Daniel Soudry, Aviv Tamar
2023-06-05T17:49:43Z
http://arxiv.org/abs/2306.03072v3
# Explore to Generalize in Zero-Shot RL ###### Abstract We study zero-shot generalization in reinforcement learning--optimizing a policy on a set of training tasks such that it will perform well on a similar but unseen test task. To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail. Our insight is that learning a policy that _explores_ the domain effectively is harder to memorize than a policy that maximizes reward for a specific task, and therefore we expect such learned behavior to generalize well; we indeed demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our _Explore to Generalize_ algorithm (ExpGen) builds on this insight: We train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which are guaranteed to generalize and drive us to a novel part of the state space, where the ensemble may potentially agree again. We show that our approach is the state-of-the-art on several tasks in the ProcGen challenge that have so far eluded effective generalization. For example, we demonstrate a success rate of \(82\%\) on the Maze task and \(74\%\) on Heist with \(200\) training levels. ## 1 Introduction Recent developments in reinforcement learning (RL) led to algorithms that surpass human experts in a broad range of tasks (Mnih et al., 2015; Vinyals et al., 2019; Schrittwieser et al., 2020; Wurman et al., 2022). In most cases, the RL agent is tested on the same task it was trained on, and is not guaranteed to perform well on unseen tasks. In zero-shot generalization for RL (ZSG-RL), however, the goal is to train an agent on training domains to act optimally in a new, previously unseen test environment (Kirk et al., 2021). A standard evaluation suite for ZSG-RL is the ProcGen benchmark (Cobbe et al., 2020), containing 16 games, each with levels that are procedurally generated to vary in visual properties (e.g., color of agents in BigFish, Fig. 1(a), or background image in Jumper, Fig. 1(c)) and dynamics (e.g., wall positions in Maze, Fig. 1(d), and key positions in Heist, Fig. 1(e)). Previous studies on generalization focused on identifying various _invariance properties_ in the tasks, and designing corresponding _invariant policies_, through various regularization and augmentation techniques (Igl et al., 2019; Cobbe et al., 2019; Wang et al., 2020; Lee et al., 2019; Raileanu et al., 2021; Raileanu and Fergus, 2021; Cobbe et al., 2021; Sonar et al., 2021; Bertran et al., 2020; Li et al., 2021). For example, a policy that is invariant to the color of agents is likely to generalize well in BigFish. More intricate invariances include the order of observations in a trajectory (Raileanu and Fergus, 2021), and the length of a trajectory, as reflected in the value function (Raileanu et al., 2021). Can ZSG-RL be reduced to only finding invariant policies? As a counter-argument, consider the following thought experiment2. Imagine Maze, but with the walls and goal hidden in the observation (Fig. 2f). Arguably, this is the most task-invariant observation possible, such that a solution can still be obtained in reasonable time. 
An agent with memory can be trained to optimally solve all training tasks: figuring out wall positions by trying to move ahead and observing the resulting motion, and identifying, based on its movement history, which training maze it is currently in. Obviously, such a strategy will not generalize to test mazes. Indeed, as depicted in Figure 1, performance in tasks like Maze and Heist, where the strategy for solving any particular training task must be _indicative_ of that task, has largely not been improved by methods based on invariance (e.g. UCB-DrAC and IDAAC). Interestingly, decent zero-shot generalization can be obtained even without a policy that generalizes well. As described by Ghosh et al. (2021), an agent can overcome test-time errors in its policy by treating the perfect policy as an _unobserved_ variable. The resulting decision-making problem, termed the _epistemic POMDP_, may require some exploration at test time to resolve uncertainty. Ghosh et al. (2021) further proposed the LEEP algorithm based on this principle, which trains an ensemble of agents and essentially chooses randomly between the members when the ensemble does not agree, and was the first method to present substantial generalization improvement on Maze. In this work, we follow the epistemic POMDP idea, but ask: _how to improve exploration at test time?_ Our approach is based on a novel discovery: when we train an agent to _explore_ the training domains, using a maximum entropy objective (Hazan et al., 2019; Mutti et al., 2021), we observe that the learned exploration behavior generalizes surprisingly well--much better than the generalization observed when training the agent to maximize reward. Intuitively, this can be explained by the fact that reward is a strong signal that leads to a specific behavior that the agent can 'memorize' during training, while exploration is naturally more varied, making it harder to memorize and overfit. Exploration by itself, however, is not useful for solving new tasks. Our algorithm, _Explore to Generalize_ (ExpGen), additionally trains an ensemble of reward-seeking agents. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions _using the exploration policy_, which are guaranteed to generalize and drive us to a novel part of the state space, where the ensemble may potentially agree again. ExpGen is simple to implement, and can be used with any reward maximizing RL algorithm. With vanilla PPO, ExpGen significantly improves the state-of-the-art (SOTA) on several ProcGen games for which previous methods fail (see Fig. 1). ExpGen also significantly improves upon LEEP, due to its effective test-time exploration strategy. For example, on Maze with 200 training tasks, our method obtains \(82\%\) success on test tasks, whereas the previous state of the art achieved \(66\%\). Figure 1: Normalized test performance for ExpGen, LEEP, IDAAC, DAAC, and PPO, on five ProcGen games. ExpGen shows state-of-the-art performance on test levels of Maze, Heist and Jumper; games that are notoriously challenging for other leading approaches. The scores are normalized as proposed by Cobbe et al. (2020). Figure 2: (a), (b), (c), (d) and (e) display screenshots of ProcGen games. (f) Imaginary maze with goal and walls removed (see text for explanation). ## 2 Related Work **Generalization in RL.** The recent survey by Kirk et al. (2021) provides an extensive review of generalization in RL. 
Here, we provide a brief survey: One approach to generalization is by artificially increasing the number of training tasks, using procedural generation (Cobbe et al., 2019, 2020), augmentations (Kostrikov et al., 2020, Ye et al., 2020, Lee et al., 2019a), task interpolation (Yao et al., 2021), or various regularization techniques, such as dropout (Igl et al., 2020) and batch normalization (Farebrother et al., 2018, Igl et al., 2020). Raileanu and Fergus (2021) and Cobbe et al. (2021) investigate the advantages of decoupling policy and value functions for generalization, while Jiang et al. (2021) propose automatic curriculum learning of levels. A different approach is to add inductive bias to the neural network policy or learning algorithm. Approaches such as Tamar et al. (2016), Vlastelica et al. (2021), Boutilier et al. (2020) embed a differentiable planning or learning algorithm into the neural network. Kansky et al. (2017), Toyer et al. (2018), Rivlin et al. (2020) combine learning with classical graph planning to generalize across various planning domains. These approaches require some knowledge about the problem structure (e.g., a relevant planning algorithm), while our approach does not require any task-specific knowledge. Another line of work aims to learn policies or features that are invariant across the different training tasks (Sonar et al., 2021, Bertran et al., 2020, Li et al., 2021, Igl et al., 2019, Stooke et al., 2021, Mazoure et al., 2020). Most relevant to our work is the LEEP algorithm of Ghosh et al. (2021), which trains an ensemble of policies, each on different subsets of the training environments, with a loss function that encourages agreement between the ensemble members. Effectively, the KL loss in LEEP encourages random actions when the agents in the ensemble do not agree, which is related to our method. However, random actions can be significantly less effective in exploring a domain than a policy that is explicitly trained to explore, such as a maximum-entropy policy. Consequently, we observe that our approach leads to significantly better performance at test time. **State Space Maximum Entropy Exploration.** Maximum entropy exploration (maxEnt, Hazan et al. 2019, Mutti et al. 2021) is an unsupervised learning framework that trains policies that maximize the entropy of their state-visitation frequency, leading to a behavior that continuously explores the environment state space. Recently, maximum entropy policies have gained attention in RL (Liu and Abbeel, 2021, 2020; Yarats et al., 2021; Seo et al., 2021; Hazan et al., 2019; Mutti et al., 2021), mainly in the context of unsupervised pre-training. There, the agent is allowed to train for a long period without access to environment rewards, and only at test time is the agent exposed to the reward signal, performing a limited fine-tuning adaptation. Importantly, these works expose the agent to the same environments during pre-training and test phases, with the only distinction being the lack of extrinsic reward during pre-training. To the best of our knowledge, our observation that maxEnt policies generalize well in the zero-shot setting is novel. ## 3 Problem Setting and Background We describe our problem setting and background on maxEnt exploration. #### Reinforcement Learning (RL) In Reinforcement Learning an agent interacts with an unknown, stochastic environment and collects rewards. 
This is modeled by a partially observed Markov Decision Process (POMDP) (Bertsekas, 2012), which is the tuple \(M=(S,A,O,P_{init},P,\Sigma,r,\gamma)\), where \(S\in\mathbb{R}^{|S|}\) and \(A\in\mathbb{R}^{|A|}\) are the state and actions spaces, \(O\) is the observation space, \(P_{init}\) is an initial state distribution, \(P\) is the transition kernel, \(\Sigma\) is the observation function, \(r:S\times A\rightarrow\mathbb{R}\) is the reward function, and \(\gamma\in[0,1)\) is the discount factor. The agent starts from initial state \(s_{0}\sim P_{init}\) and at time \(t\) performs an action \(a_{t}\) on the environment that yields a reward \(r_{t}=r(s_{t},a_{t})\), and an observation \(o_{t}=\Sigma(s_{t},a_{t})\in O\). Consequently, the environment transitions into the next state according to \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\). Let the history at time \(t\) be \(h_{t}=\{o_{0},a_{0},r_{0},o_{1},a_{1},r_{1}\ldots,s_{t}\}\), the sequence of states, actions and rewards. The agent's next action is outlined by a policy \(\pi\), which is a stochastic mapping from the history to a probability over available actions \(\pi(a|h_{t})=P(a_{t}=a|h_{t})\). A history-dependent policy (and not a Markov policy) is required both due to partially observed states, epistemic uncertainty (Ghosh et al., 2021), and also for optimal maxEnt exploration (Mutti et al., 2022). ### Zero-Shot Generalization for RL We assume a prior distribution over POMDPs \(P(M)\), defined over some space of POMDPs. For a given POMDP, an optimal policy maximizes the expected discounted return \(\mathbb{E}_{\pi,M}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})]\), where the expectation is taken over the policy \(\pi(h_{t})\), and the states transition probability \(s_{t}\sim P\) of POMDP \(M\). Our generalization objective in this work is to maximize the discounted cumulative reward taken _in expectation over the POMDP prior_, also termed the _population risk_: \[\mathcal{L}_{pop}(\pi)=\mathbb{E}_{M\sim P(M)}\left[\mathbb{E}_{\pi,M}\left[ \sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right]\right] \tag{1}\] Seeking a policy that performs well in expectation over any POMDP from the prior corresponds to zero-shot generalization. We assume access to \(N\) training POMDPs \(M_{1},\ldots,M_{N}\) sampled from the prior, \(M_{i}\sim P(M)\). Our goal is to use \(M_{1},\ldots,M_{N}\) to learn a policy that performs well on objective 1. A common approach is to optimize the _empirical risk_ objective: \[\mathcal{L}_{emp}(\pi)=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{\pi,M_{i}}\left[ \sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right]=\mathbb{E}_{M\sim\hat{P}(M )}\left[\mathbb{E}_{\pi,M}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}) \right]\right], \tag{2}\] where the empirical POMDP distribution can be different from true distribution, i.e. \(\hat{P}(M)\neq P(M)\). In general, a policy that optimizes the empirical risk (2) may perform poorly on the population risk (1) - this is known as overfitting in statistical learning theory (Shalev-Shwartz and Ben-David, 2014), and has been analysed recently also for RL (Tamar et al., 2022). ### Maximum Entropy Exploration In the following we provide the definitions for the state distribution and the maximum entropy exploration objective. For simplicity, we discuss MDPs - the fully observed special case of POMDPs where \(O=S\), and \(\Sigma(s,a)=s\). A policy \(\pi\), through its interaction with an MDP, induces a \(t\)-step state distribution \(d_{t,\pi}(s)=p(s_{t}=s|\pi)\) over the state space \(S\). 
Let \(d_{t,\pi}(s,a)=p(s_{t}=s,a_{t}=a|\pi)\) be its \(t\)-step state-action counterpart. For the infinite horizon setting, the stationary state distribution is defined as \(d_{\pi}(s)=lim_{t\rightarrow\infty}d_{t,\pi}(s)\), and its \(\gamma\)-discounted version as \(d_{\gamma,\pi}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}d_{t,\pi}(s)\). We denote the state marginal distribution as \(d_{T,\pi}(s)=\frac{1}{T}\sum_{t=0}^{T}d_{t,\pi}(s)\), which is a marginalization of the t-step state distribution over a finite time \(T\). The objective of maximum entropy exploration is given by: \[\mathcal{H}(d(\cdot))=-\mathbb{E}_{s\sim d}[\log(d(s))], \tag{3}\] where \(d\) can be regarded as either the stationary state distribution \(d_{\pi}\)Mutti and Restelli (2020), the discounted state distribution \(d_{\gamma,\pi}\)Hazan et al. (2019) or the marginal state distribution \(d_{T,\pi}\)Lee et al. (2019), Mutti and Restelli (2020). In our work we focus on the finite horizon setting and adapt the marginal state distribution \(d_{T,\pi}\) in which \(T\) equals the episode horizon \(H\), i.e. we seek to maximize the objective: \[\mathcal{L}_{\mathcal{H}}(\pi)=\mathbb{E}_{M\sim\hat{P}(M)}\left[\mathcal{H}( d_{H,\pi})\right]=\mathbb{E}_{M\sim\hat{P}(M)}\left[\mathcal{H}\left( \frac{1}{H}\sum_{t=0}^{H}d_{t,\pi}(s)\right)\right], \tag{4}\] which yields a policy that equally visits all states during the episode. Existing works on maximum entropy exploration propose algorithms that rely on estimating the density of the states visitation distribution Hazan et al. (2019), Lee et al. (2019). More recently, a branch of algorithms that rely on non-parametric entropy estimation Liu and Abbeel (2021), Mutti et al. (2021), Seo et al. (2021) has emerged, circumventing the burden of density estimation. Here, we follow this common thread and adapt the non-parametric approach; we estimate the entropy using the particle-based \(k\)-nearest neighbor (\(k\)-NN estimator) Beirlant et al. (1997), Singh et al. (2003), as explained in the next section. ## 4 The Generalization Ability of Maximum Entropy Exploration In this section we present an empirical observation - policies trained for maximum entropy exploration (maxEnt policy) generalize well. First, we explain the training procedure of our maxEnt policy, then we show empirical results supporting this observation. ### Training State Space Maximum Entropy policy To tackle objective (4), we estimate the entropy using the particle-based \(k\)-nearest neighbor \(k\)-NN estimator Beirlant et al. (1997), Singh et al. (2003), as explained below: Let \(X\) be a random variable over the support \(\chi\subset\mathbb{R}^{m}\) with a probability mass function \(p\). Given the probability of this random variable, its entropy is obtained by \(\mathcal{H}_{X}(p)=-\mathbb{E}_{x\sim p}[\log(p)]\). Without access to its distribution \(p\), the entropy can be estimated using \(N\) samples \(\{x_{i}\}_{i=1}^{N}\) by the \(k\)-NN estimator Singh et al. (2003): \[\hat{\mathcal{H}}_{X}^{k,N}(p)\approx\frac{1}{N}\sum_{i=1}^{N}\log\left(\left\| x_{i}-x_{i}^{k-NN}\right\|_{2}\right), \tag{5}\] where \(x_{i}^{k-NN}\) is the \(k-NN\) sample of \(x_{i}\) from the set \(\{x_{i}\}_{i=1}^{N}\). To estimate the distribution \(d_{H,\pi}\) over the states \(S\), we consider each trajectory as \(H\) samples of states \(\{s_{t}\}_{t=1}^{H}\) and take \(s_{t}^{k-NN}\) to be the \(k-NN\) of the state \(s_{t}\) within the trajectory, as proposed by previous works (APT Liu and Abbeel (2021), RE3 Seo et al. 
(2021), APS Liu and Abbeel (2021)), \[\hat{\mathcal{H}}^{k,H}(d_{H,\pi})\approx\frac{1}{H}\sum_{t=1}^{H}\log\left( \left\|s_{t}-s_{t}^{k-NN}\right\|_{2}\right). \tag{6}\] Next, similar to previous works, since this sampled estimation of the entropy (Eq. 6) is a sum of functions that operate on each state separately, it can be considered as an expected reward objective Figure 3: **Generalization ability of Maximum Entropy vs. extrinsic reward: (a) Score of maximum entropy. (b) Score of extrinsic reward. Training for maximum entropy exhibits a small generalization gap in Maze, Jumper and Miner. Average and standard deviation are obtained using \(4\) seeds.** \(\hat{\mathcal{H}}^{k,H}(d_{H,\pi})\approx\frac{1}{H}\sum_{t=1}^{H}r_{I}(s_{t})\) with the intrinsic reward function: \[r_{I}(s_{t}):=\log(\left\lVert s_{t}-s_{t}^{k-NN}\right\rVert_{2}). \tag{7}\] This formulation enables us to deploy any RL algorithm to approximately optimize objective (4). Specifically, in our work we use the policy gradient algorithm PPO Schulman et al. (2017), where at every time step \(t\) the state \(s_{t}^{k-NN}\) is chosen from previous states \(\{s_{i}\}_{i=1}^{t-1}\) of the same episode. We found that using \(k=1\) provides a good approximation in our experiments. Next, calculating the \(L_{2}\) norm of the \(k-NN\) (Eq. 7) at every time step \(t\) can be computationally demanding. To improve computational efficiency, we introduce the following approximation: instead of taking the full observation as the state \(s_{i}\) (i.e. \(64\times 64\) RGB image), we sub-sample (denoted \(\downarrow\)) the observation by applying average pooling of \(3\times 3\) to obtain an image \(s_{i}^{\downarrow}\) of size \(21\times 21\). Additionally, we take the \(L_{0}\) norm from the sub-sampled states instead of the \(L_{2}\) norm in Eq. 7 as we hypothesized this to yield a more informative reward signal, corresponding to the movement of objects in the scene: \[r_{I}(s_{t}):=\left\lVert s_{t}^{\downarrow}-s_{t}^{k-NN,\downarrow}\right\rVert _{0}. \tag{8}\] We emphasize that we do not modify the termination condition of each game. However, a maxEnt policy will learn to _avoid_ termination, as this increases the sum of intrinsic rewards. In Figure 4 we display the states visited by a maxEnt policy on Maze. ### Generalization Performance of State Space Maximum Entropy Policy The generalization gap describes the difference between the reward accumulated during train and test stages for a policy, \(\mathcal{L}_{emp}(\pi)-\mathcal{L}_{pop}(\pi)\), where we approximate the population score by testing on a large population of tasks withheld during training. We can evaluate the generalization gap for either an extrinsic reward, or for an intrinsic reward, such as the reward that elicits maxEnt exploration 8. In the latter case, the generalization gap captures how well the agent's exploration strategy generalizes. We found that agents trained for maximum entropy exploration exhibit a smaller generalization gap compared with the standard approach of training solely with extrinsic reward. Intuitively, this can be attributed to the extrinsic reward serving as an 'easy' signal to learn from, and overfit to in the training environments. To assess the generalization quality of the maxEnt policy, we train agents on \(200,500,1000\) and \(5000\) instances of ProcGen's Maze, Jumper and Miner environments using the intrinsic reward (Eq. 8). 
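For concreteness, a minimal numpy sketch of the intrinsic reward of Eq. (8) is given below, assuming \(k=1\). The crop-to-63-pixels step, the retention of the colour channels, and the zero reward when no earlier state exists yet are our own simplifying assumptions for illustration, not details taken from the authors' implementation.

```python
import numpy as np

def subsample(obs):
    """Average-pool a 64x64x3 observation with a 3x3 window to roughly 21x21.
    Cropping to 63x63 first (so the window tiles evenly) is an assumption."""
    x = obs[:63, :63].astype(np.float32)
    return x.reshape(21, 3, 21, 3, -1).mean(axis=(1, 3))

def intrinsic_reward(state, previous_states):
    """Eq. (8) with k = 1: L0 distance between the sub-sampled current state and
    its nearest neighbour among the earlier states of the same episode."""
    if not previous_states:
        return 0.0  # assumption: no neighbour exists yet, so no intrinsic reward
    cur = subsample(state)
    # (in practice one would cache the sub-sampled states instead of recomputing)
    dists = [np.count_nonzero(cur != subsample(s)) for s in previous_states]
    return float(min(dists))
```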
The policies are equipped with a memory unit (GRU, Cho et al., 2014) to allow learning of deterministic policies that maximize the entropy (Mutti et al., 2022)3. Footnote 3: An extensive discussion on the importance of memory for the maxEnt objective is in Appendix A.2. The train and test return scores are shown in Fig. 3(a). In all three environments, we demonstrate a small generalization gap, as test performance on unseen levels closely follows the performance achieved during training. When considering Maze trained on \(200\) levels, we observe a small generalization gap of \(1.7\%\), meaning test performance closely follows train performance. For Jumper and Miner the maxEnt policy exhibits a small generalization gap of \(8.5\%\) and \(4.3\%\), respectively. In addition, we verify that the train results are close to optimal by comparing with a hand-designed, approximately optimal exploration policy. For example, on Maze we use the well-known maze exploration strategy _wall follower_, also known as the left/right-hand rule (Hendrawan, 2020); see Appendix A.1 for more details. Next, we evaluate the generalization gap of agents trained to maximize the extrinsic reward4. The results for this experiment, shown in Fig. 3(b), illustrate that the generalization gap for extrinsic reward is more prominent. For comparison, when trained on \(200\) levels, the figure shows a large generalization gap for Maze (\(38.8\%\)) and Jumper (\(27.5\%\)), while Miner exhibits a moderate generalization gap of \(13.1\%\). Figure 4: maxEnt on Maze. ## 5 Explore to Generalize (ExpGen) Our main insight is that, given the generalization property of the entropy maximization policy established above, an agent can apply this behavior in a test MDP and expect effective exploration _at test time_. In the following, we pair this insight with the epistemic POMDP idea, and propose to play the exploration policy when the agent faces epistemic uncertainty, hopefully driving the agent to a different state where the reward-seeking policy is more certain. This can be seen as an adaptation of the seminal _explicit explore or exploit_ idea (Kearns and Singh, 2002) to the setting of ZSG-RL. ### Algorithm Our framework comprises two parts: an entropy maximizing network and an ensemble of networks that maximize an extrinsic reward to evaluate epistemic uncertainty. The first step entails training a network equipped with a memory unit to obtain a policy \(\pi_{\mathcal{H}}\) that maximizes entropy, as described in Section 4.1. Next, we train an ensemble of memory-less policy networks \(\{\pi_{r}^{j}\}_{j=1}^{m}\) to maximize extrinsic reward. Following Ghosh et al. (2021), we shall use the ensemble to assess epistemic uncertainty. Different from Ghosh et al. (2021), however, we do not change the RL loss function, and use an off-the-shelf RL algorithm (PPO; Schulman et al., 2017). At test time, we couple these two components into a combined agent \(\boldsymbol{\pi}\) (detailed as pseudo-code in Algorithm 1). We consider domains with a finite action space, and say that the policy \(\pi_{r}^{i}\) is certain at state \(s\) if its action \(a_{i}\!\sim\!\pi_{r}^{i}(\cdot|s_{t})\) is in consensus with the ensemble: \(a_{i}=a_{j}\) for the majority of \(k\) out of \(m\), where \(k\) is a hyperparameter of our algorithm. 
When the networks \(\{\pi_{r}^{j}\}_{j=1}^{m}\) are not in consensus, the agent \(\boldsymbol{\pi}\) takes a sequence of \(n_{\pi_{\mathcal{H}}}\) actions from the entropy maximization policy \(\pi_{\mathcal{H}}\), which encourages exploratory behavior. **Agent meta-stability.** Switching between two policies may result in a case where the agent repeatedly toggles between two states - if, say, the maxEnt policy takes the agent from state \(s_{1}\) to a state \(s_{2}\), where the ensemble agrees on an action that again moves to state \(s_{1}\). To avoid such "meta-stable" behavior, we randomly choose the number of maxEnt steps \(n_{\pi_{\mathcal{H}}}\) from a Geometric distribution, \(n_{\pi_{\mathcal{H}}}\sim Geom(\alpha)\). ## 6 Experiments We evaluate our algorithm on the ProcGen benchmark, which employs a discrete 15-dimensional action space and generates RGB observations of size \(64\times 64\times 3\). Our experimental setup follows ProcGen's 'easy' configuration, wherein agents are trained on 200 levels and subsequently tested on \(25M\) steps from random levels (Cobbe et al., 2020). All agents are implemented using the IMPALA convolutional architecture (Espeholt et al., 2018), and trained using PPO (Schulman et al., 2017). For the maximum entropy agent \(\pi_{\mathcal{H}}\) we incorporate a single GRU (Cho et al., 2014) at the final embedding of the IMPALA convolutional architecture. For all games we use the same parameter \(\alpha=0.5\) of the Geometric distribution, and form an ensemble of 10 networks. For further information regarding our experimental setup and specific hyperparameters, please refer to Appendix C.2. The code will be made available online upon publication. ### Generalization Performance We compare our algorithm to six leading algorithms: vanilla PPO (Schulman et al., 2017), PLR (Jiang et al., 2021), an algorithm utilizing automatic curriculum-based learning, UCB-DrAC (Raileanu et al., 2021), which incorporates data augmentation to learn policies invariant to different input transformations, PPG (Cobbe et al., 2021), which decouples the optimization of policy and value function during learning, IDAAC (Raileanu and Fergus, 2021), the previous state-of-the-art algorithm on ProcGen that decouples policy learning from value function learning and employs an adversarial loss to enforce invariance to spurious features. Lastly, we evaluate our algorithm against LEEP (Ghosh et al., 2021), the only algorithm that, to our knowledge, managed to improve upon the performance of vanilla PPO on Maze and Heist. The evaluation matches the train and test setting detailed by the contending algorithms, and their performance is provided as reported by their authors. For evaluating LEEP, we use the original implementation provided by the authors. Tables 2 and 1 show the train and test scores, respectively, for ten ProcGen games which exhibit the most noticeable generalization gap (based on Figure 13 in Cobbe et al. (2020)). The tables show that ExpGen has a notable gain over the baselines on Maze, Heist and Jumper, while on other games, invariance-based approaches perform better (for example, IDAAC leads on BigFish, Plunder and Climber, whereas PPG leads on CaveFlyer, and UCB-DrAC leads on Dodgeball). These results correspond to our observation that for some domains, invariance cannot be used to completely resolve epistemic uncertainty. We emphasize that _ExpGen substantially outperforms LEEP on all games_, showing that our improved exploration at test time is significant. 
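To make the test-time rule of Algorithm 1 concrete, a schematic sketch is shown below. The policy callables, the bookkeeping of the remaining maxEnt steps, and the choice to count the first exploratory action towards the Geometric draw are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class ExpGenAgent:
    """Schematic test-time action selection (cf. Algorithm 1).

    `ensemble` is a list of m reward-policy callables obs -> discrete action,
    and `maxent_policy` a callable obs -> discrete action; both are placeholders."""

    def __init__(self, ensemble, maxent_policy, k, alpha=0.5, seed=0):
        self.ensemble, self.maxent = ensemble, maxent_policy
        self.k, self.alpha = k, alpha            # consensus threshold and Geom parameter
        self.rng = np.random.default_rng(seed)
        self.n_explore = 0                       # remaining maxEnt steps in the current burst

    def act(self, obs):
        if self.n_explore > 0:                   # still inside an exploration burst
            self.n_explore -= 1
            return self.maxent(obs)
        actions = [pi(obs) for pi in self.ensemble]
        votes = np.bincount(actions)
        if votes.max() >= self.k:                # at least k of m members agree: exploit
            return int(votes.argmax())
        # no consensus: explore for n ~ Geom(alpha) steps with the maxEnt policy
        self.n_explore = int(self.rng.geometric(self.alpha)) - 1
        return self.maxent(obs)
```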
In Appendix B.1 we compare ExpGen with LEEP trained for \(50M\) environment steps, showing a similar trend. **Combining ExpGen with invariance:** Note that the advantage of ExpGen is _complementary_ to the advantage of methods such as IDAAC. Indeed, while we used vanilla PPO for training the ensemble in ExpGen, we could have equally used IDAAC to potentially improve performance on games where ExpGen does not help. An even simpler approach is to train both an IDAAC agent and an ExpGen agent, and choose between them based on performance on a validation set. Taking the maximum test performance of IDAAC and ExpGen, we see that on most of the 10 ProcGen environments above, the generalization gap is relatively small compared to vanilla PPO (Cobbe et al., 2020). A notable exception is Dodgeball, where all current methods still fail. **Ablation Study:** One may wonder if the ensemble in ExpGen is necessary, or whether the observation that the maxEnt policy generalizes well can be exploited using a single policy. We investigate the effect of combining the intrinsic and extrinsic rewards into a single reward as a weighted sum: \[r_{\mathrm{total}}=\beta r_{I}+(1-\beta)r_{\mathrm{ext}}, \tag{9}\] and train for \(\beta=0.1,0.3,0.5,0.7,0.9\) on Maze. Figure 5 shows the train and test scores over \(50M\) steps for different values of discount factor \(\gamma\). We obtain the best test score for \(\gamma=0.5\) and \(\beta=0.1\), illustrating an improvement compared with the PPO baseline. When comparing with ExpGen, the combined reward exhibits inferior performance with slightly higher variance. In Appendix D, we also provide an ablation study of ensemble size and draw comparisons to variants of our algorithm. Figure 5: Test performance of PPO trained using a reward that combines intrinsic and extrinsic rewards weighted by \(\beta\) (\(r_{\mathrm{total}}\) in Eq. 9), for different values of discount factor \(\gamma\). All networks are randomly initialized and trained on \(200\) maze levels. The figure shows an improvement over the PPO baseline for \(\gamma=0.5\). In all cases, ExpGen outperforms the combined reward agent. ## 7 Discussion and Limitations We observed that policies trained to explore, using maximum entropy RL, exhibited generalization of their exploration behavior in the zero-shot RL setting. Based on this insight, we proposed ExpGen - a ZSG-RL algorithm that takes a maxEnt exploration step whenever an ensemble of policies trained for reward maximization does not agree on the current action. We demonstrated that this simple approach performs well on several challenging ZSG-RL domains from the ProcGen benchmark. One burning question is _why does maxEnt exploration generalize so well?_ An intuitive argument is that the maxEnt policy in an MDP is _invariant_ to the reward. Thus, if for every training MDP there are many different rewards, each prescribing a different behavior, the maxEnt policy has to be invariant to this variability. In other words, the maxEnt policy contains _no information_ about the rewards in the data, and generalization is well known to be bounded by the mutual information between the policy and the training data (Bassily et al., 2018). More interesting, however, is whether the maxEnt policy is also less sensitive to variations in the dynamics of the MDPs. We leave this as an open theoretical problem. Another consideration is safety. In some domains, a wrong action can lead to a disaster, and in such cases exploration at test time should be hedged. 
One possibility is to add to ExpGen's policy ensemble an ensemble of advantage functions, and use it to weigh the action agreement (Rotman et al., 2020). Intuitively, the ensemble should agree that unsafe actions have a low advantage, and not select them at test time. Finally, we point out that while our work made significant progress on generalization in several ProcGen games, the performance on Dodgeball remains low for all methods we are aware of. An interesting question is whether performance on Dodgeball can be improved by combining invariance-based techniques with exploration at test time, or whether Dodgeball represents a different class of problems that requires a completely different approach. \begin{table} \begin{tabular}{l|c c c c c|c|c} \hline \hline Game & PPO & PLR & UCB-DrAC & PPG & IDAAC & LEEP & ExpGen \\ \hline BigFish & \(2.9\pm 1.1\) & \(10.9\pm 2.8\) & \(9.2\pm 2.0\) & \(11.2\pm 1.4\) & \(\textbf{18.5\pm 1.2}\) & \(4.9\pm 0.9\) & \(6.0\pm 0.5\) \\ Ninja & \(6.1\pm 0.2\) & \(\textbf{7.2\pm 0.4}\) & \(6.6\pm 0.4\) & \(6.6\pm 0.1\) & \(6.8\pm 0.4\) & \(4.4\pm 0.5\) & \(6.2\pm 0.2\) \\ Plunder & \(7.8\pm 1.6\) & \(8.7\pm 2.2\) & \(8.3\pm 1.1\) & \(14.3\pm 2.0\) & \(\textbf{23.3\pm 1.4}\) & \(4.4\pm 0.3\) & \(4.7\pm 0.3\) \\ CaveFlyer & \(5.5\pm 0.5\) & \(6.3\pm 0.5\) & \(5.0\pm 0.8\) & \(\textbf{7.0\pm 0.4}\) & \(5.0\pm 0.6\) & \(4.9\pm 0.2\) & \(5.9\pm 0.3\) \\ Jumper & \(5.8\pm 0.3\) & \(5.8\pm 0.5\) & \(6.2\pm 0.3\) & \(5.9\pm 0.1\) & \(6.3\pm 0.2\) & \(5.4\pm 1.2\) & \(\textbf{6.7\pm 0.2}\) \\ Climber & \(5.4\pm 0.5\) & \(6.3\pm 0.8\) & \(6.3\pm 0.6\) & \(2.8\pm 0.4\) & \(\textbf{8.3\pm 0.4}\) & \(2.6\pm 0.9\) & \(5.5\pm 0.4\) \\ Dodgeball & \(2.2\pm 0.4\) & \(1.8\pm 0.5\) & \(\textbf{4.2\pm 0.9}\) & \(2.3\pm 0.3\) & \(3.2\pm 0.3\) & \(1.9\pm 0.2\) & \(1.9\pm 0.3\) \\ Heist & \(2.4\pm 0.5\) & \(2.9\pm 0.5\) & \(3.5\pm 0.4\) & \(2.8\pm 0.4\) & \(3.5\pm 0.2\) & \(4.5\pm 0.3\) & \(\textbf{7.4\pm 0.2}\) \\ Maze & \(5.6\pm 0.1\) & \(5.5\pm 0.8\) & \(6.3\pm 0.1\) & \(5.1\pm 0.3\) & \(5.6\pm 0.3\) & \(6.6\pm 0.2\) & \(\textbf{8.2\pm 0.1}\) \\ Miner & \(7.8\pm 0.3\) & \(9.6\pm 0.6\) & \(9.2\pm 0.6\) & \(7.4\pm 0.2\) & \(\textbf{9.5\pm 0.4}\) & \(1.1\pm 0.1\) & \(8.0\pm 0.7\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Test score** of ProcGen games trained on 200 levels for 25M environment steps. We compare our algorithm to PPO, PLR, UCB-DrAC, PPG, IDAAC and LEEP. The mean and standard deviation are computed over 10 runs with different seeds. 
\begin{table} \begin{tabular}{l|c c c c c|c|c} \hline \hline Game & PPO & PLR & UCB-DrAC & PPG & IDAAC & LEEP & ExpGen \\ \hline BigFish & \(8.9\pm 2.0\) & \(7.8\pm 1.0\) & \(12.8\pm 1.8\) & \(19.9\pm 1.7\) & \(\textbf{21.8\pm 1.8}\) & \(8.9\pm 0.9\) & \(7.0\pm 0.4\) \\ Ninja & \(7.3\pm 0.2\) & \(5.4\pm 0.5\) & \(8.0\pm 0.4\) & \(8.9\pm 0.2\) & \(\textbf{8.9\pm 0.3}\) & \(4.6\pm 0.2\) & \(7.4\pm 0.3\) \\ Plunder & \(9.4\pm 1.7\) & \(4.1\pm 1.3\) & \(10.2\pm 1.76\) & \(16.4\pm 1.9\) & \(\textbf{24.6\pm 1.6}\) & \(4.9\pm 0.2\) & \(4.4\pm 0.3\) \\ CaveFlyer & \(7.3\pm 0.7\) & \(6.4\pm 0.1\) & \(5.8\pm 0.9\) & \(\textbf{9.5\pm 0.2}\) & \(6.2\pm 0.6\) & \(4.9\pm 0.3\) & \(6.8\pm 0.8\) \\ Jumper & \(8.6\pm 0.1\) & \(3.6\pm 0.5\) & \(8.2\pm 0.1\) & \(8.7\pm 0.1\) & \(\textbf{8.7\pm 0.2}\) & \(5.7\pm 0.1\) & \(7.9\pm 0.2\) \\ Climber & \(7.6\pm 0.6\) & \(6.2\pm 0.8\) & \(8.6\pm 0.6\) & \(10.2\pm 0.2\) & \(\textbf{10.2\pm 0.7}\) & \(3.5\pm 0.3\) & \(7.3\pm 0.3\) \\ Dodgeball & \(6.4\pm 0.6\) & \(2.0\pm 1.1\) & \(\textbf{7.3\pm 0.8}\) & \(5.5\pm 0.5\) & \(4.9\pm 0.3\) & \(3.3\pm 0.1\) & \(3.9\pm 0.4\) \\ Heist & \(6.1\pm 0.8\) & \(1.2\pm 0.4\) & \(6.2\pm 0.6\) & \(7.4\pm 0.4\) & \(4.5\pm 0.3\) & \(7.1\pm 0.2\) & \(\textbf{9.4\pm 0.1}\) \\ Maze & \(9.2\pm 0.1\) & \(4.1\pm 0.5\) & \(8.5\pm 0.3\) & \(9.0\pm 0.2\) & \(6.4\pm 0.5\) & \(9.4\pm 0.3\) & \(\textbf{9.6\pm 0.1}\) \\ Miner & \(11.3\pm 0.3\) & \(9.7\pm 0.4\) & \(\textbf{12.0\pm 0.3}\) & \(11.3\pm 1.0\) & \(11.5\pm 0.5\) & \(1.9\pm 0.6\) & \(9.0\pm 0.8\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Train score** of ProcGen games trained on 200 levels for 25M environment steps. We compare our algorithm to PPO, PLR, UCB-DrAC, PPG, IDAAC and LEEP. The mean and standard deviation are computed over 10 runs with different seeds. AcknowledgmentsThe research of DS was Funded by the European Union (ERC, A-B-C-Deep, 101039436). The research of EZ and AT was Funded by the European Union (ERC, Bayes-RL, 101041250). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency (ERCEA). Neither the European Union nor the granting authority can be held responsible for them. DS also acknowledges the support of Schmidt Career Advancement Chair in AI.
2301.08180
The throughput in multi-channel (slotted) ALOHA: large deviations and analysis of bad events
We consider ALOHA and slotted ALOHA protocols as medium access rules for a multi-channel message delivery system. Users decide randomly and independently with a minimal amount of knowledge about the system at random times to make a message emission attempt. We consider the two cases that the system has a fixed number of independent available channels, and that interference constraints make the delivery of too many messages at a time impossible. We derive probabilistic formulas for the most important quantities like the number of successfully delivered messages and the number of emission attempts, and we derive large-deviation principles for these quantities in the limit of many participants and many emission attempts. We analyse the rate functions and their minimizers and derive laws of large numbers for the throughput. We optimize it over the probability parameter. Furthermore, we are interested in questions like ``if the number of successfully delivered messages is significantly lower than the expectation, was the reason that too many or too few sending attempts were made?''. Our main tools are basic tools from probability and the theory of (the probabilities of) large deviations.
Wolfgang König, Charles Kwofie
2023-01-19T17:18:33Z
http://arxiv.org/abs/2301.08180v1
# The throughput in multi-channel (slotted) ALOHA: ###### Abstract. We consider ALOHA and slotted ALOHA protocols as medium access rules for a multi-channel message delivery system. Users decide randomly and independently with a minimal amount of knowledge about the system at random times to make a message emission attempt. We consider the two cases that the system has a fixed number of independent available channels, and that interference constraints make the delivery of too many messages at a time impossible. We derive probabilistic formulas for the most important quantities like the number of successfully delivered messages and the number of emission attempts, and we derive large-deviation principles for these quantities in the limit of many participants and many emission attempts. We analyse the rate functions and their minimizers and derive laws of large numbers for the throughput. We optimize it over the probability parameter. Furthermore, we are interested in questions like "if the number of successfully delivered messages is significantly lower than the expectation, was the reason that too many or too few sending attempts were made?". Our main tools are basic tools from probability and the theory of (the probabilities of) large deviations. _MSC 2020._ 60F10, 60G50; _Keywords and phrases._ Communication networks, medium access, ALOHA, slotted ALOHA, optimizing throughput, large deviations. ## 1. Introduction and main results ### Introduction Protocols for medium access control (MAC) are fundamental and ubiquitous in any telecommunication system. Here we are particularly interested in _multi-channel systems_, where a fixed number of channels is available. In order to keep the complexity of the algorithm of the channel choices by the transmitters low, we make a well-known probabilistic ansatz and assume that each transmitter chooses randomly and independently a channel for each transmission. This makes the system get along with a minimum of infrastructure, i.e, with a minimum knowledge about the occupancy of the channels. In other words, we consider an ALOHA-based multi-channel protocol, see [10]. More specifically, we concentrate in this paper on _slotted ALOHA_, where message transmissions are possible only in specific micro time slots. It is our purpose to study random events that comprise the transmission of many messages from many transmitters in a large number of (very short) time-slots, forming a fixed time interval, in the limit of many such slots. In each of the slots, each transmitter chooses with a certain probability, independently over all transmitters and over all slots, whether to make a transmission attempt in that slot or not. This probability must be very small, i.e., on the scale of the inverse of the number of transmitters. This leads to a huge number of random decisions that have to be drawn in each time slot, with a tiny probability each, which leads to a huge amount of data with high imprecision. In this paper, we give a probabilistic analysis of the main quantities, like numbers of attempts and of successes, per micro time slot in the limit of many such time slots, coupled with many message emission attempts. In particular, we comprehensively characterize the main quality parameter, the _throughput_. We are going to find neat descriptions of the entire (joint) distributions of these quantities and of their limits. In particular, we introduce techniques from the probabilistic theory of (the probabilities of) large deviations. 
Using this theory, we analyse events that have a very low probability in this limit, like the event that the number of successes is significantly lower than its expectation. Furthermore, we give an explicit assessment of the most likely reason for this. In this way, we go far beyond calculating (limiting) expectations and instead handle the numbers of message attempts and transmission successes per slot as stochastic processes with a rich structure. In our system, we have a fixed upper bound \(\kappa\) for the number of messages that can be successfully delivered in a given micro time slot. Our main system parameter is the probability parameter \(p\), the _medium access probability (MAP)_, with which each of the messages tries randomly to gain access to the system. If \(p\) is too large, then it is likely that the system exceeds the upper bound \(\kappa\), which results in failures of many message transmissions. On the other hand, if \(p\) is too small, then a part of the possible capacity is not exhausted, and the system underachieves. One of our goals is to quantify an optimal choice of \(p\). The main quantity for this criterion is the _throughput_, the number of successfully transmitted messages per time unit. But we also analyse other quantities like the number of message attempts. In the _multi-channel (MC) models_ that we consider in this paper, we assume a total interference isolation between the channels, i.e., we neglect possible interferences between them. Here each channel in a given micro time slot is able to successfully transmit one message, if no more than one emission attempt is made through this channel. The higher the number of emission attempts, the higher the number of successes (but also the number of unsuccessful messages, which we could also analyse with our ansatz, but abstained from); hence an optimization over the probability parameter is only of limited interest, unless there is a substantial price that is paid per unsuccessful transmission. Closely related to multi-channel systems are systems with entirely unlimited interference between all of them. Here the success of the transmission of the messages is regulated by means of the _signal-to-interference ratio (SIR)_. In a simplified setting, the transmission of message \(i\) in a given time slot is successful if and only if \[\frac{1}{\sum_{j\in I\setminus\{i\}}1}\geq\tau,\] where \(\tau\in(0,\infty)\) is a technical constant, and \(I\) is the index set of messages that attempt to transmit in this slot (which depends on various quantities, like the number of message emission attempts in that slot, which may be random). Since we are working in a spaceless model, there is no distance and therefore no path-loss function involved, and we give the same signal strength power \(1\) to each transmission attempt. Putting \(\kappa=1+\lfloor\frac{1}{\tau}\rfloor\in\mathbb{N}\), we see that any transmission attempt in the slot is successful if and only if no more than \(\kappa\) attempts are made in the slot; otherwise interference makes all these attempts unsuccessful. This is the second of the two model functionalities that we are going to study; we call it an _interference-based (IB)_ model. 
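To make the reduction from the SIR criterion to the threshold \(\kappa\) explicit, the following small sketch (with an arbitrary illustrative value of \(\tau\)) checks that the two success conditions coincide for every number of simultaneous attempts in a slot; exact rationals are used only to avoid floating-point edge cases.

```python
from fractions import Fraction

def slot_success_sir(n_attempts, tau):
    # SIR criterion: with unit powers and no path loss, a message succeeds iff
    # 1 / (n_attempts - 1) >= tau (trivially satisfied when it is alone in the slot).
    return n_attempts == 1 or Fraction(1, n_attempts - 1) >= tau

def slot_success_threshold(n_attempts, kappa):
    # Equivalent threshold form: all attempts in the slot succeed iff at most kappa are made.
    return n_attempts <= kappa

tau = Fraction(2, 5)            # illustrative value of the SIR threshold
kappa = 1 + int(1 / tau)        # kappa = 1 + floor(1/tau) = 3 here
assert all(slot_success_sir(n, tau) == slot_success_threshold(n, kappa)
           for n in range(1, 50))
print("kappa =", kappa)
```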
Mathematically, it shows great similarities to multi-channel models, but the most important difference is that a high number of emission attempts leads to many unsuccessful attempts and is therefore working against a high throughput; hence an optimization over the probability parameter is of high interest and not an easy task. While the derivation of the expected throughput in the multi-channel ALOHA model and its optimization over \(p\) is easy (with the well-known result that the maximal throughput is equal to \(\kappa/\mathrm{e}\) with \(\kappa\) the number of channels), for the interference-based model, we can offer an explicit formula for the expectation, but only approximate characterisations of the maximization over \(p\), which get sharp in the limit as \(\kappa\to\infty\). We would like to point out that, from a mathematical-practical point of view, it might have advantages to let each transmitter decide, for the entire time interval under consideration, whether or not an attempt is made during that interval, and then to randomly and uniformly distribute the attempts over the time slots of this interval. We call the first mode of attempt decisions _local_ and the latter _global_. We will be studying both in this paper, since we believe that both have their right and their advantages. On the level of expectations, there will be no difference noticeable between the main quantities of interest, but in the large-deviation behavior. Summarizing, the main new contributions of the present paper are the following. 1. describing the relevant quantities in terms of their entire joint distribution (rather than only expectations), 2. describing limiting events of large deviations asymptotically in terms of explicit rate functions, 3. comparing local and global random assignments of transmission slots, 4. optimizing the throughput over the MAP for the interference-based model, 5. analysis of large deviation probabilities of conditional events (e.g., of a low number of successes). The remainder of this paper is organized as follows. We introduce our models in Section 1.2 and the most important quantities and questions in Section 1.3. Our results are presented and commented in Section 1.4, and some comments on the literature are made in Section 1.5. Section 2 brings all the proofs of the large-deviation principles, and Section 3 the proofs of the other results. ### Description of the models Let us introduce the models that we are going to analyse. We consider a reference time interval, which we pick as \([0,1]\). We have a large parameter \(N\in\mathbb{N}\), which models a large number of network participants and a large number of time slots. The reference time interval is divided in to \(N\) slots \([\frac{i-1}{N},\frac{i}{N})\) for \(i\in[N]=\{1,\ldots,N\}\); every message delivery starts at the beginning of one of these slots and terminates before its elapsure. With a fixed parameter \(b\in(0,\infty)\), we assume that \(bN\) participants (we waive the integer-part brackets) are in the system, i.e., at any time \(bN\) devices would like to emit one message each. Access to the medium is under some random rule, for which we consider two variants, a rule that is _local in time_ and one that is _global in time_; both have a parameter \(p\in(0,\infty)\). **Access rules:** 1. Under the _local rule_ each of the \(bN\) participants chooses at any time slot randomly with probability \(\frac{p}{N}\) to emit a message during this slot, independently over all \(bN\) participants and all \(N\) time slots. 
2. Under the _global rule_ each of the \(bN\) participants chooses randomly with probability \(p\) whether to emit a message during some of the \(N\) time slots, and then all those who choose that option are randomly and uniformly distributed over the \(N\) time slots. Under Rule (G), any participant has at most one chance during \([0,1]\), while under Rule (L), every message has an unbounded number of trials and can be successful several times during \([0,1]\). Hence, under (G), \(p\) needs to be in \((0,1]\), while under (L), it can be any positive number, assuming that \(N\) is large (and we assume this). We assume that each participant has an unbounded number of packages to be sent, i.e., it successively makes an unbounded number of emission attempts. Rule (G) has a two-step random strategy, as first each message randomly decides whether to attempt a transmission, and then randomly picks a microscopic time slot. Here the number of random variables that need to be sampled is much smaller than under Rule (L), and the probability parameter is of finite order in \(N\), in contrast to Rule (L). We therefore see substantial practical advantages in Rule (G) over Rule (L). Now we describe the criteria for successful delivery of the messages that are chosen to be emitted under either Rule (L) or (G). We consider two scenarios, the _multi-channel scenario_ and the _interference-based scenario_; both come with a parameter \(\kappa\in\mathbb{N}\): **Success rules:** 1. In the _multi-channel scenario_, there are \(\kappa\) channels available, and in each slot each of the emission attempts chooses randomly and uniformly one of the \(\kappa\) channels, independently of all the other participants and time slots. A transmission attempt is successful in this slot if no other participant chooses the channel that it picked. All other attempts are unsuccessful. 2. In the _interference-based scenario_, in any given time slot, all transmission attempts are successful if their number does not exceed \(\kappa\); otherwise all attempts in that slot are unsuccessful. In the case of a successful transmission attempt of a message, we say that the participant has gained access to the medium. As we explained in Section 1.1, Scenario (MC) describes slotted ALOHA with \(\kappa\) channels and total absence of infrastructure, while Scenario (IB) describes the influence of interference constraints. Note that Model (B) in [12] is contained in Scenario (MC). We are going to couple each of the two scenarios (MC) and (IB) with each of the two Rules (L) and (G) and obtain four different protocols. Scenario (MC), coupled with Rule (L), is equal to Model (B) in [12]. ### Quantities and questions of interest There are three parameters in our simple models: * \(p\in(0,\infty)\) the emission attempt probability parameter, * \(b\in(0,\infty)\) the rate of messages that would like to be transmitted during \([0,1]\), * \(\kappa\in\mathbb{N}\) the threshold for the success criterion. We consider \(\kappa\) (given by technical conditions) and \(b\) (given by the appearance of participants) as given quantities that cannot be controlled. However, the parameter \(p\) can be picked by the system operator and can be adapted to \(b\) and \(\kappa\); it is decisive for the success of the system. Part of our investigations will be on an optimal choice of \(p\) given \(\kappa\) and \(b\). The quantities that we are interested in are the following. 
* \(A_{N}=\) the number of message sending attempts, * \(S_{N}=\) number of successfully sent messages, * (only for Scenario (IB)) \(R_{N}=\) number of successful slots, that is, slots in which all messages are successfully transmitted. These three quantities are defined on probability spaces whose probability measures are denoted by \(\mathbb{P}^{(N)}_{\mathrm{D},\mathrm{E}}\) with \({\rm D}\in\{{\rm L},{\rm G}\}\) and \({\rm E}\in\{{\rm MC},{\rm IB}\}\), respectively. The most important quantity is the _throughput_, the number of successfully sent messages per time unit, which is equal to \(S_{N}/N\) in our model. But we find it also important to consider the number of unsuccessful sending attempts, in order to be able to say something about the frustration of the participants of the system. In both scenarios, in order to maximize the number of successes, one would like to pick the probability parameter \(p\) in such a way that the expected number of transmission attempts per slot is close to \(\kappa\), i.e., \(p\approx\kappa/b\). However, if the number of attempts fluctuates upwards, then the success is damaged, in (IB) even maximally damaged; hence the optimal choice of \(p\) should be a bit lower. Part of our analysis is devoted to finding the optimal value of this parameter. ### Our results In this section we describe and comment on our results: Section 1.4.1 on large-deviations, Section 1.4.2 on laws of large numbers, Section 1.4.3 on the optimal choice of the probability parameter \(p\), and Section 1.4.4 on the question where the event of having few successes most likely comes from. We denote the Poisson distribution with parameter \(\alpha\in(0,\infty)\) on \(\mathbb{N}_{0}\) by \(\operatorname{Poi}_{\alpha}=(\mathrm{e}^{-\alpha}\frac{\alpha^{k}}{k!})_{k\in\mathbb{N}_{0}}\), and the binomial distribution on \(\{0,1,\ldots,N\}\) with parameters \(N\in\mathbb{N}\) and \(p\in(0,1)\) by \(\operatorname{Bin}_{N,p}(k)={{N}\choose{k}}p^{k}(1-p)^{N-k}\). Furthermore, we denote the entropy of a probability measure \(\mu\) on some discrete set \(\mathcal{X}\) with respect to another one, \(\nu\), by \(H(\mu|\nu)=\sum_{k\in\mathcal{X}}\mu_{k}\log\frac{\mu_{k}}{\nu_{k}}\). Recall that \(\mu\mapsto H(\mu|\nu)\) is non-negative, strictly convex and is zero only for \(\mu=\nu\). By \(\mathcal{M}_{1}(\mathcal{X})\) we denote the set of probability measures on \(\mathcal{X}\). #### 1.4.1. Large-deviation principles Our first main result is on the asymptotics as \(N\to\infty\) of the joint distribution of \((S_{N},A_{N},R_{N})\), in the sense of a large-deviation principle. First we turn to (IB). **Theorem 1.1** (LDP for \(\frac{1}{N}(A_{N},S_{N},R_{N})\) for Scenario (IB)).: _Fix the model parameters \(b,p>0\) and \(\kappa\in\mathbb{N}\), where we assume \(p\leq 1\) for \({\rm D}={\rm G}\). Then for both \({\rm D}\in\{{\rm L},{\rm G}\}\), the tuple \(\frac{1}{N}(A_{N},S_{N},R_{N})\) satisfies a large-deviation principle (LDP) under \(\mathbb{P}^{(N)}_{\mathrm{D,IB}}\) with rate function given by_ \[I_{\mathrm{L,IB}}(a,s,r)=\inf\Big{\{}H(\mu|\mathrm{Poi}_{bp})\colon\mu\in\mathcal{M}_{1}(\mathbb{N}_{0}),\sum_{k\in\mathbb{N}_{0}}f(k)\mu_{k}=(a,s,r)\Big{\}} \tag{1.1}\] _where \(f(k)=(k,k\mathbb{1}\{k\leq\kappa\},\mathbb{1}\{k\leq\kappa\})\), while_ \[I_{\mathrm{G,IB}}(a,s,r)=I_{\mathrm{L,IB}}(a,s,r)+(b-a)\log\frac{1-\frac{a}{b}}{1-p}+a-bp.
\tag{1.2}\] The proof is in Section 2.1 for Rule (L) and in Section 2.2 for Rule (G). An alternate proof under Rule (L) is described in Section 2.3; this leads to a very different formula for the rate function. The stated LDP says that for any open, respectively closed, set \(G,F\subset[0,b]\times[0,b]\times[0,1]\), \[\limsup_{N\to\infty}\frac{1}{N}\log\mathbb{P}^{{}^{(N)}}_{ \mathrm{D,IB}}\Big{(}\frac{1}{N}\big{(}A_{N},S_{N},R_{N}\big{)}\in F\Big{)} \leq -\inf_{F}I_{\mathrm{D,IB}},\] \[\liminf_{N\to\infty}\frac{1}{N}\log\mathbb{P}^{{}^{(N)}}_{ \mathrm{D,IB}}\Big{(}\frac{1}{N}\big{(}A_{N},S_{N},R_{N}\big{)}\in G\Big{)} \geq -\inf_{G}I_{\mathrm{D,IB}}.\] This can be symbolically summarized by saying that for any \((a,s,r)\) \[\mathbb{P}^{{}^{(N)}}_{\mathrm{D,IB}}\big{(}\frac{1}{N}(A_{N},S_{N},R_{N}) \approx(a,s,r)\big{)}\approx\mathrm{e}^{-NI_{\mathrm{D,IB}}(a,s,r)},\qquad N \to\infty.\] See [2] for an account on the theory of (the probabilities of) large deviations. **Remark 1.2** (LDP for \(S_{n}\)).: _A standard corollary of Theorem 1.1 is an LDP for the number \(S_{N}\) of successes, which follows directly from the contraction principle (which says that \((\varphi(X_{N}))_{N\in\mathbb{N}}\) satisfies an LDP if \((X_{N})_{N\in\mathbb{N}}\) does and \(\varphi\) is continuous, and it gives a formula for the rate function). Indeed \(\frac{1}{N}S_{N}\) satisfies an LDP under \(\mathbb{P}^{{}^{(N)}}_{\mathrm{D,IB}}\) with rate function for D=L_ \[s\mapsto\inf_{a,r}I_{\mathrm{L,IB}}(a,s,r)=\inf\Big{\{}H(\mu|\mathrm{Poi}_{bp })\colon\mu\in\mathcal{M}_{1}(\mathbb{N}_{0}),\sum_{k\in[\kappa]}k\mu(k)=s\Big{\}}.\] _This formula is further analysed as a by-product in the proof of Theorem 1.15. A conclusion is that the probability to have less than \(N(s_{\mathrm{IB}}(p,\kappa)-\varepsilon)\) successes decays exponentially fast with rate \(\inf\{H(\mu|\mathrm{Poi}_{bp})\colon\mu\in\mathcal{M}_{1}(\mathbb{N}_{0}),\sum _{k\in[\kappa]}k\mu(k)\leq s_{\mathrm{IB}}(p,\kappa)-\varepsilon\}\), which is a positive number. Certainly, the analogous statement holds also for Rule (G). Furthermore, we can also apply the contraction principle to obtain an LDP for \(R_{N}\) or for the pair \((A_{N},S_{N})\). \(\Diamond\)_ **Remark 1.3** (Higher precision).: _With more of technical work, we could also prove the following, stronger assertion. Fix \(a,s\in[0,b]\) satisfying \(s\leq a\) and fix \(r\in[0,1]\). Pick sequences \(a_{N},s_{N},r_{N}\in\frac{1}{N}\mathbb{N}_{0}\) such that \(a_{N}\to a\), \(s_{N}\to s\) and \(r_{N}\to r\) as \(N\to\infty\). Then for \(\mathrm{D}\in\{\mathrm{G,L}\}\),_ \[I_{\mathrm{D,IB}}(a,s,r)=-\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}^{{}^{(N)}}_ {\mathrm{D,IB}}\big{(}A_{N}=Na_{N},S_{N}=Ns_{N},R_{N}=Nr_{N}\big{)}. \tag{1.3}\] \(\Diamond\) **Remark 1.4** (Difference of the rate functions).: _In the proof in Section 2.2 it will turn out that, under Rule (L), \(A_{N}\) has the distribution of \(N\) independent \(\mathrm{Bin}_{bN,p/N}\)-distributed random variables, while unter Rule (G), \(A_{N}\) is \(\mathrm{Bin}_{bN,p}\)-distributed. Given \(A_{N}\), the distribution of \((S_{N},R_{N})\) is the same under both rules. The last term on the right-hand side of (1.2) (i.e., the difference of the two rate functions) is equal to the difference of the two rate functions for \(\frac{1}{N}A_{N}\). 
These two rate functions are_ \[J_{\mathrm{L}}(a) = pb-a+a\log\frac{a}{pb}, \tag{1.4}\] \[J_{\mathrm{G}}(a) = a\log\frac{a}{p}+(b-a)\log\frac{b-a}{1-p}-b\log b, \tag{1.5}\] _and the last term in (1.2) is equal to \(J_{\mathrm{G}}(a)-J_{\mathrm{L}}(a)\). Note that_ \[J_{\mathrm{G}}^{\prime}(a)=\log\frac{a}{b-a}+\log\frac{1-p}{p},\qquad J_{ \mathrm{G}}^{\prime\prime}(a)=\frac{b}{a(b-a)},\] _and \(J_{\mathrm{L}}^{\prime}(a)=\log\frac{a}{bp}\) and \(J_{\mathrm{L}}^{\prime\prime}(a)=\frac{1}{bp}\). Hence, \(J_{\mathrm{L}}^{\prime\prime}(bp)<J_{\mathrm{G}}^{\prime\prime}(bp)\) and therefore, for \(a\) in a neighbourhood of the minimal site bp outside \(bp\), we see that \(J_{\mathrm{L}}(a)<J_{\mathrm{G}}(a)\). This shows that under Rule (G) the number of attempts has a smaller variance (even on the exponential scale) than under Rule (L), which we consider as a structural advantage of (G) over (L). \(\Diamond\)_ **Remark 1.5** (Analysis of rate function).: _On the first view, the formula in (1.1) seems to be rather involved, but in the proof of Theorem 1.15 we will find the minimizing \(\mu\) for \(\inf_{r}I_{\mathrm{L,IB}}(a,s,r)\) and will characterize it using standard variational analysis. \(\Diamond\)_ **Remark 1.6** (Alternative rate function).: _Our proof of Theorem 1.1 in Sections 2.1 and 2.2 is based on Sanov's theorem and the contraction principle and leads to an entropy description of the rate function. In Section 2.3 we give an alternate proof of Theorem 1.1 using Cramer's theorem, leading to a representation of the rate function involving Legendre transforms of logarithms of moment-generating functions. This representation appears in (2.8). \(\Diamond\)_ Now we turn to our LDP for the multi-channel case. Recall that Model (B) in [12] is contained in what we called Scenario (MC). **Theorem 1.7** (LDP for \(\frac{1}{N}(A_{N},S_{N})\) for Scenario (MC)).: _Fix the model parameters \(b,p>0\) and \(\kappa\in\mathbb{N}\) channels, where we assume \(p\leq 1\) for Rule \(\mathrm{D}=\mathrm{G}\). Then the tuple \(\frac{1}{N}(A_{N},S_{N})\) satisfies an LDP under \(\mathbb{P}_{\mathrm{D,MC}}^{(N)}\) for \(\mathrm{D}\in\{\mathrm{L},\mathrm{G}\}\) with rate function (for \(\mathrm{D}=\mathrm{L}\))_ \[I_{\mathrm{L,MC}}(a,s)=\inf\Big{\{}H(\nu|M)\colon\nu\in\mathcal{M}_{1}(\Xi), \sum_{(i,j)\in\Xi}\nu(i,j)i=a,\sum_{(i,j)\in\Xi}\nu(i,j)j=s\Big{\}}, \tag{1.6}\] _where \(\Xi=\{(i,j)\in\mathbb{N}_{0}^{2}\colon j\leq i\text{ and }j\leq\kappa\}\) and the reference probability measure \(M\) on \(\Xi\) is given as_ \[M(i,j)=\mathrm{Poi}_{bp/\kappa}^{\otimes\kappa}\Big{(}\sum_{k\in[\kappa]}X_{k} =i,\sum_{k\in[\kappa]}1\!\!1_{\{X_{k}=1\}}=j\Big{)}, \tag{1.7}\] _and, for \(\mathrm{D}=\mathrm{G}\), with rate function given as_ \[\begin{split} I_{\mathrm{G,MC}}(a,s)&=\kappa\inf \Big{\{}H(\mu|\mathrm{Poi}_{bp/\kappa})\colon\mu\in\mathcal{M}_{1}(\mathbb{N}_ {0}),\sum_{g\in\mathbb{N}_{0}}\mu(g)g=\frac{a}{\kappa},\mu(\{1\})=\frac{s}{ \kappa}\Big{\}}\\ &\quad+a-bp+(b-a)\log\frac{1-\frac{a}{b}}{1-p}.\end{split} \tag{1.8}\] The proof is in Section 2.4 for Rule (L) and in Section 2.5 for Rule (G). **Remark 1.8** (Interpretation).: _The reference measure \(M\) has the interpretation of a channel-choice distribution. Indeed, the Poisson-distributed variables \(X_{k}\), \(k\in[\kappa]\), with parameter \(bp\) stand for the number of participants that choose the \(k\)-th channel for the transmission attempt; then \(M(i,j)\) is the probability that in total \(i\) attempts are made and \(j\) successes are earned. 
\(\Diamond\)_ **Remark 1.9** (Contraction principle).: _The analogous assertions of Remark 1.2 about an LDP for \(S_{N}\), e.g., hold certainly also for Scenario (MC). \(\Diamond\)_ **Remark 1.10** (Difference of the two rate functions).: _The difference of the two rate functions in (1.8) is the same as in (1.2), but the reason is different from the reason in Scenario (IB) (see Remark 1.4). It comes out by some explicit manipulation of the distribution of \((A_{N},S_{N})\), for which cannot offer an easy interpretation. \(\Diamond\)_ **Remark 1.11**.: _Like for Scenario (IB), we could prove, with more technical work, the following also in Scenario (MC). Fix \(a,s\in[0,b]\) satisfying \(s\leq a\) and pick sequences \(a_{N},s_{N}\in\frac{1}{N}\mathbb{N}_{0}\) such that \(a_{N}\to a\) and \(s_{N}\to s\) as \(N\to\infty\). Then for \(D\in\{L,G\}\),_ \[I_{\rm D,MC}(a,s)=-\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{\rm D,MC}^{(N)} \big{(}A_{N}=Na_{N},S_{N}=Ns_{N}\big{)} \tag{1.9}\] #### 1.4.2. Laws of large numbers It is a standard conclusion from the LDP that, if the rate function has a unique minimizer at \((a_{p},s_{p},r_{p})\), a law of large numbers (LLN) follows, i.e., \(\frac{1}{N}(A_{N},S_{N},R_{N})\to(a_{p},s_{p},r_{p})\) in probability with exponential decay of the probability of being outside a neighbourhood of \((a_{p},s_{p},r_{p})\). Hence, the following statement implies two LLNs. **Corollary 1.12** (LLN for the throughput in Scenario (IB)).: _The two rate functions \(I_{\rm G,IB}\) and \(I_{\rm L,IB}\) are both strictly convex and possess the same unique minimizer \((a_{\rm IB}(p,\kappa),s_{\rm IB}(p,\kappa),r_{\rm IB}(p,\kappa))\) given by_ \[a_{\rm IB}(p,\kappa) = pb=\mathbb{E}_{{\rm Poi}_{bp}}(X), \tag{1.10}\] \[s_{\rm IB}(p,\kappa) = {\rm e}^{-bp}\sum_{i=0}^{\kappa}i\frac{(bp)^{i}}{i!}=\mathbb{E}_ {{\rm Poi}_{bp}}[X\mbox{\rm 1}\{X\leq\kappa\}]=bp\,{\rm e}^{-bp}\sum_{i=0}^{ \kappa-1}\frac{(bp)^{i}}{i!},\] (1.11) \[r_{\rm IB}(p,\kappa) = {\rm e}^{-bp}\sum_{i=0}^{\kappa}\frac{(bp)^{i}}{i!}={\rm Poi}_{bp }([0,\kappa]). \tag{1.12}\] Proof.: Just recall that the map \(\mu\mapsto H(\mu|{\rm Poi}_{bp})\) is strictly convex and has the unique minimizer \(\mu={\rm Poi}_{bp}\); hence the unique minimizing \((a,s,r)\) must be compatible with that, i.e., equal to \(\sum_{k\in\mathbb{N}_{0}}f(k){\rm Poi}_{bp}(k)\). In particular, the throughput in Scenario (IB) is equal to the \({\rm Poi}_{bp}\)-expectation of \(X\mbox{\rm 1}\{X\leq\kappa\}\), and the typical rate of successful micro time slots is \({\rm Poi}_{bp}([0,\kappa])\). In the same way, we see the analogous statement for (MC): **Corollary 1.13** (LLN for the throughput in Scenario (MC)).: _The two rate functions \(I_{\rm G,MC}\) and \(I_{\rm L,MC}\) are both strictly convex and possess the same unique minimizer_ \[\big{(}a_{\rm MC}(p,\kappa),s_{\rm MC}(p,\kappa)\big{)}=\big{(}pb,pbe^{-bp/ \kappa}\big{)}.\] In particular, the throughput in Scenario (MC) is equal to \(bp{\rm e}^{-bp/\kappa}\). #### 1.4.3. Optimal \(p\) A natural and important question is about that value of \(p\) that maximizes the expected throughput per micro slot, \(s_{\text{IB}}(p,\kappa)\), respectively \(s_{\text{MC}}(p,\kappa)\). Since \(p\) is restricted to \([0,1]\) under Rule (G), we will consider only Rule (L), where we can optimize over all \(p\in(0,\infty)\). For Scenario (MC), the answer is easily derived by differentiating: the optimal \(p\) is equal to \(\kappa/b\), and the optimal throughput is equal to \(\kappa/\text{e}\). 
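To make these formulas concrete, the following short Python sketch (ours, purely for illustration; the function names are not from the text) evaluates the typical throughputs \(s_{\mathrm{IB}}(p,\kappa)=bp\,\mathrm{e}^{-bp}\sum_{i=0}^{\kappa-1}(bp)^{i}/i!\) and \(s_{\mathrm{MC}}(p,\kappa)=bp\,\mathrm{e}^{-bp/\kappa}\) from Corollaries 1.12 and 1.13 on a grid of \(p\)-values. It confirms numerically that the maximum in Scenario (MC) is attained at \(p\approx\kappa/b\) with value \(\approx\kappa/\mathrm{e}\), while in Scenario (IB) the maximizing value of \(bp\) stays below \(\kappa\).

```python
import math

def s_IB(p, b, kappa):
    """Typical IB throughput per slot: E_{Poi(bp)}[X 1{X <= kappa}], cf. (1.11)."""
    a = b * p
    return a * math.exp(-a) * sum(a**i / math.factorial(i) for i in range(kappa))

def s_MC(p, b, kappa):
    """Typical MC throughput per slot: bp * exp(-bp/kappa), cf. Corollary 1.13."""
    return b * p * math.exp(-b * p / kappa)

if __name__ == "__main__":
    b, kappa = 2.0, 4                       # illustrative parameter choice
    grid = [i / 1000 for i in range(1, 5000)]
    p_mc = max(grid, key=lambda p: s_MC(p, b, kappa))
    print("MC: maximizing p ~", p_mc, " vs kappa/b =", kappa / b)
    print("MC: maximal throughput ~", s_MC(p_mc, b, kappa), " vs kappa/e =", kappa / math.e)
    p_ib = max(grid, key=lambda p: s_IB(p, b, kappa))
    print("IB: maximizing p ~", p_ib, " (b*p ~", b * p_ib, ", to be compared with kappa =", kappa, ")")
```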
Scenario (IB) is more interesting. It is clear that the optimal value of \(p\) should be such that \(bp\) is smaller than \(\kappa\), since otherwise the number of attempts per time slot is larger than the success threshold. But the question is how much below one should go in order not to underachieve more than necessary. **Lemma 1.14** (Optimal \(p\)).: _For any \(\kappa\in\mathbb{N}\), there is precisely one \(p_{*}\in(0,\infty)\) that maximizes the map \((0,\infty)\ni p\mapsto s_{\text{\rm IB}}(p,\kappa)\). It is characterised by_ \[\frac{(a_{*})^{\kappa}}{(\kappa-1)!}=\sum_{i\in\mathbb{N}_{0}\,:\,i\leq\kappa-1}\frac{(a_{*})^{i}}{i!},\qquad a_{*}=bp_{*}, \tag{1.13}\] _and it satisfies \(bp_{*}<\kappa-1\) and \(bp_{*}\sim\kappa\) as \(\kappa\to\infty\). More precisely, we even have \(bp_{*}\geq(\kappa-\sqrt{\kappa})^{1-\kappa^{-1/2}}\) for any \(\kappa\). Furthermore, \(p\mapsto s_{\text{\rm IB}}(p,\kappa)\) strictly increases in \([0,p_{*}]\) and strictly decreases in \([p_{*},\infty)\)._ The proof of Lemma 1.14 is in Section 3.1. #### 1.4.4. Conditioning the number of attempts on the number of successes In this section we discuss an interesting question in the interference-based scenario, where too many messages lead to a serious decrease of throughput: what is the most likely reason for a deviation event of the form that the throughput is below the theoretically optimal one? Have there been too many message emission attempts, such that the interference canceled many, or did the system underachieve, i.e., have fewer attempts than could be handled successfully? This question can be answered with the help of large-deviation theory, combined with an analysis of the rate functions. We handle this only for Rule (L), where we can work with any value of \(p\in(0,\infty)\). In order to formalize this question, we write \(\mathbb{P}^{(N,p)}_{\text{L,IB}}=\mathbb{P}^{(N,p)}\) for the probability measure in Scenario (IB) with parameter \(p\) and \(\mathbb{E}^{(N,p)}\) for the corresponding expectation. Picking some \(0<s\leq a\), it follows from Remark 1.3 that \[\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}^{(N,p)}\big{(}A_{N}=\lfloor aN\rfloor\,\big{|}\,S_{N}=\lfloor Ns\rfloor\big{)}=-\inf_{r}I^{(p)}_{\text{L,IB}}(a,s,r)+\inf_{\widetilde{a},r}I^{(p)}_{\text{L,IB}}(\widetilde{a},s,r),\] where we wrote \(I^{(p)}_{\text{L,IB}}\) for the rate function \(I_{\text{L,IB}}\) defined in (1.1). From this, we see that \[\lim_{N\to\infty}\mathbb{E}^{(N,p)}\Big{(}\frac{A_{N}}{N}\Big{|}S_{N}=\lfloor Ns\rfloor\Big{)}=\underset{a}{\text{argmin}}\Big{(}\inf_{r}I^{(p)}_{\text{L,IB}}(a,s,r)\Big{)}.\] (The latter can also be derived from Theorem 1.1 instead of from the unproved Remark 1.3.) Given \(s\), we now define \(a_{p}(s)\) as a minimizer of the map \(a\mapsto\inf_{r}I^{(p)}_{\text{L,IB}}(a,s,r)\), i.e., the typical rate of sending attempts, conditional on having \(\approx sN\) successes. It will turn out that \(a_{p}(s)\) is well-defined at least in a neighbourhood of \(a_{p}(s_{p})\) if \(p\) is close enough to \(p_{*}=p_{*}(\text{L,IB})\), where we now abbreviate \(s_{p}=s_{\mathrm{IB}}(p,\kappa)\) for the throughput coordinate of the minimizer that we established in Corollary 1.12, and \(p^{*}\) is the maximizing \(p\) for \((0,\infty)\ni p\mapsto s_{p}\) characterized by (1.13). In terms of these quantities, the question now reads: Given \(s<s_{p}\), is it true that \(a_{p}(s)<a_{p}(s_{p})\)? **Theorem 1.15**.: _Fix \(\kappa\).
Then, for any \(p\in(0,\infty)\) and for any \(s\) in some neighbourhood of \(s_{p}\), we have_ \[p<p^{*} \implies \Big{[}s<s_{p}\Rightarrow a_{p}(s)<a_{p}(s_{p})\Big{]}\quad\text{ and}\quad\Big{[}s>s_{p}\Rightarrow a_{p}(s)>a_{p}(s_{p})\Big{]}, \tag{1.14}\] \[p>p^{*} \implies \Big{[}s<s_{p}\Rightarrow a_{p}(s)>a_{p}(s_{p})\Big{]}\quad\text{ and}\quad\Big{[}s>s_{p}\Rightarrow a_{p}(s)<a_{p}(s_{p})\Big{]}. \tag{1.15}\] _Furthermore, for \(p=p_{*}\), for any \(s\in[0,p_{*}b]\setminus\{s_{p_{*}}\}\), we have \(a_{p^{*}}(s)>a_{p^{*}}(s_{p^{*}})\)._ The proof is in Section 3.2. Theorem 1.15 says that, for non-optimal \(p\), if \(s\) is sufficiently close to the optimal \(s_{p}\), then the attempt number \(a_{p}(s)\) deviates to the same side of \(a_{p}(s_{p})\) as \(s\) does with respect to \(s_{p}\), while at the optimal \(p_{*}\), the typical attempt number for a non-optimal success number is always larger than the optimal one. The latter means that, for the optimal choice \(p=p_{*}\), the event of a non-optimal throughput comes with overwhelming probability from too many attempts. Apparently, here the conditional probability for having too many attempts is much larger than the one for having too few. ### Literature remarks A wide range of multiple access protocols have been extensively discussed in the literature; see for example [14, 1, 15, 16]. See [1, 10, 11] for an explanation of the advantages and disadvantages of multi-channel ALOHA protocols from an operational point of view and a description of transmit-reference modulation (TR Modulation) for handling the problem of synchronizing simultaneous message transmissions in such systems. [16] gives some probabilistic analysis of a few concrete ALOHA variants, but fails to give tractable formulas; Model (B) there is identical to our Scenario (MC) under Rule (L). In [17, 18], additional functionalities are investigated as a possible improvement of the throughput by means of an additional exploration phase. A systematic probabilistic analysis of the performance of ALOHA protocols has been started for the single-channel pure ALOHA in the 1970s; see [1, 10] and some of the above mentioned references. The throughput is identified there as \(\lambda\mathrm{e}^{-2\lambda}\), which also coincides with our result for \(s_{\mathrm{ALOHA}}(\lambda,1)\) in the special case \(\kappa=1\). In [10], [15], [14] and [15] one can also read about the more popular and better known single-channel version of ALOHA, namely the _slotted ALOHA_, which offers the higher throughput \(\lambda\mathrm{e}^{-\lambda}\). The multi-channel case of this model has also been studied, e.g., in [13], where the throughput \(\lambda\mathrm{e}^{-\lambda/\kappa}\) has been calculated. In the present paper, we re-derive this value and combine it with a large-deviation analysis with explicit rate functions. To the best of our knowledge, in _continuous_ time there are no results for the multi-channel model in the literature yet that are similar to those of the present paper, with the recent exception [14], where the ALOHA and the _Carrier Sense Multiple Access (CSMA)_ protocols are analysed and results similar to those derived in the present paper for slotted ALOHA in discrete time are obtained. The difference is that there the interference constraint is valid in any fixed time interval, not only in the predetermined micro time slots. Hence, [14] does not find a description in terms of independent random variables, but in terms of a Markov renewal process. ## 2. Proofs of the LDPs
### Proof of Theorem 1.1 for Rule (L) In this section, we prove the LDP for Scenario (IB) under Rule (L). Recall that we write \([k]=\{1,\ldots,k\}\) for \(k\in\mathbb{N}\). For \(i\in[bN]\) and \(j\in[N]\), we let \(X_{i}^{(j)}\in\{0,1\}\) be the indicator on the event that the \(i\)-th participant chooses to attempt to send a message in the \(j\)-th time slot. All these random variables are independent Bernoulli random variables with parameter \(p/N\). Let \[A_{N}^{(j)}=\sum_{i\in[bN]}\mathbb{1}\{X_{i}^{(j)}=1\}\] denote the number of emission attempts in the \(j\)-th time slot. Then \(A_{N}^{(1)},\ldots,A_{N}^{(N)}\) are independent and \(\mathrm{Bin}_{bN,p/N}\)-distributed, and, with \(f(k)=(k,k\mathbb{1}\{k\leq\kappa\},\mathbb{1}\{k\leq\kappa\})\) and the empirical measure \(\mu_{N}=\frac{1}{N}\sum_{j=1}^{N}\delta_{A_{N}^{(j)}}\), we have \(\frac{1}{N}(A_{N},S_{N},R_{N})=\langle f,\mu_{N}\rangle\). Furthermore, let \(A^{(1)},\ldots,A^{(N)}\) be independent \(\mathrm{Poi}_{bp}\)-distributed random variables, coupled with \(A_{N}^{(1)},\ldots,A_{N}^{(N)}\) in such a way that \(\mathbb{P}(A_{N}^{(1)}\neq A^{(1)})\to 0\) as \(N\to\infty\) (such a coupling exists by the Poisson limit theorem), and let \(\widetilde{\mu}_{N}=\frac{1}{N}\sum_{j=1}^{N}\delta_{A^{(j)}}\) denote their empirical measure. The main step is the exponential equivalence \[\limsup_{N\to\infty}\frac{1}{N}\log\mathbb{P}\Big{(}\big{|}\langle f,\mu_{N}\rangle-\langle f,\widetilde{\mu}_{N}\rangle\big{|}>\delta\Big{)}=-\infty,\qquad\delta>0. \tag{2.1}\] Since all components of \(f(k)-f(k^{\prime})\) are bounded in absolute value by \((\kappa+1)|k-k^{\prime}|\) for \(k,k^{\prime}\in\mathbb{N}_{0}\), it suffices to show that \(\frac{1}{N}\sum_{j=1}^{N}|A_{N}^{(j)}-A^{(j)}|\) is negligible on the exponential scale, and by the exponential Chebyshev inequality and independence this follows as soon as \(\mathbb{E}[\mathrm{e}^{C|A_{N}^{(1)}-A^{(1)}|}]\to 1\) as \(N\to\infty\) for every \(C>0\). For this, fix \(K>0\), use once more the exponential Chebyshev inequality and then Hölder's inequality to obtain, for any \(L>0\), \[\mathbb{E}[\mathrm{e}^{C|A_{N}^{(1)}-A^{(1)}|}]\leq 1+\mathrm{e}^{2CK}\mathbb{P}(A_{N}^{(1)}\neq A^{(1)})+\mathrm{e}^{-LK}\Big{(}\sqrt{\mathbb{E}[\mathrm{e}^{2(C+L)A_{N}^{(1)}}]\mathbb{E}[\mathrm{e}^{2CA^{(1)}}]}+\sqrt{\mathbb{E}[\mathrm{e}^{2(C+L)A^{(1)}}]\mathbb{E}[\mathrm{e}^{2CA_{N}^{(1)}}]}\Big{)}\to 1+2\mathrm{e}^{-LK}\mathrm{e}^{bp(\mathrm{e}^{2(C+L)}-1)}\mathrm{e}^{bp(\mathrm{e}^{2C}-1)},\qquad N\to\infty,\] as an explicit calculation for the exponential moments of \(A_{N}^{(1)}\) and \(A^{(1)}\) shows. We now pick \(L=1\) and let \(K\to\infty\) to see that the right-hand side converges to one, which concludes the proof of (2.1). It remains to show that \(\langle f,\widetilde{\mu}_{N}\rangle\) satisfies an LDP with rate function given in (1.1). Sanov's theorem implies that \((\widetilde{\mu}_{N})_{N\in\mathbb{N}}\) satisfies an LDP on \(\mathcal{M}_{1}(\mathbb{N}_{0})\) with rate function \(\mu\mapsto H(\mu|\mathrm{Poi}_{bp})\). If the map \(\mu\mapsto\langle f,\mu\rangle\) were continuous in the weak topology on \(\mathcal{M}_{1}(\mathbb{N}_{0})\), the contraction principle would immediately give the assertion. However, \(f\) is clearly not bounded, hence the map \(\mu\mapsto\langle f,\mu\rangle\) is not continuous in the weak topology on \(\mathcal{M}_{1}(\mathbb{N}_{0})\), and we cannot apply the contraction principle directly. The second and third components of \(f\) are bounded; a sufficient cutting argument for the first component is given by proving that \[\lim_{C\to\infty}\limsup_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{N}\Big{(}\sum_{j=1}^{N}A^{(j)}>CN\Big{)}=-\infty.
\tag{2.2}\] A proof of (2.2) is easily derived using the exponential Chebyshev inequality as above and that \(A^{(1)},\ldots,A_{N}^{(N)}\) are independent \(\mathrm{Poi}_{bp}\)-distributed random variables and that \(\mathbb{E}[\mathrm{e}^{CA^{(1)}}]=\mathrm{e}^{pb(\mathrm{e}^{C}-1)}\) for any \(C\). Hence, modulo elementary technical details, the proof of Theorem 1.1 for Rule (L) follows from this. ### Proof of Theorem 1.1 under Rule (G) In this section, we prove the LDP for Scenario (IB) under Rule (G). We want to identify the large deviation behaviour of the probability distribution of the triple \((A_{N},S_{N},R_{N})\) under \(\mathbb{P}_{\mathrm{G},\mathrm{IB}}^{(N)}\). We have it already under \(\mathbb{P}_{\mathrm{L},\mathrm{IB}}^{(N)}\). We are going to identify the former distribution now explicitly in terms of the latter. For any \(a,s,r\in\mathbb{N}_{0}\) we have the following; \[\begin{split} d_{N}&=\mathbb{P}_{\mathrm{G},\mathrm{ IB}}^{(N)}\big{(}A_{N}=a,S_{N}=s,R_{N}=r\big{)}\\ &=\mathbb{P}_{\mathrm{G},\mathrm{IB}}^{(N)}(A_{N}=a)\mathbb{P}_{ \mathrm{G},\mathrm{IB}}^{(N)}(S_{N}=s,R=r|A_{N}=a)\\ &=\mathbb{P}_{\mathrm{L},\mathrm{IB}}^{(N)}(A_{N}=a)\mathbb{P}_{ \mathrm{L},\mathrm{IB}}^{(N)}(S_{N}=s,R_{N}=r|A_{N}=a)\frac{\mathbb{P}_{ \mathrm{G},\mathrm{IB}}^{(N)}(A_{N}=a)}{\mathbb{P}_{\mathrm{L},\mathrm{IB}}^ {(N)}(A_{N}=a)},\end{split} \tag{2.3}\] where we used that \(\mathbb{P}_{\mathrm{G},\mathrm{IB}}^{(N)}(S_{N}=s,R_{N}=r|A_{N}=a)=\mathbb{P}_{ \mathrm{L},\mathrm{IB}}^{(N)}(S_{N}=s,R_{N}=r|A_{N}=a)\), since the success rules are the same for the local and the global access rules. Hence \[d_{N}=\mathbb{P}_{\mathrm{L},\mathrm{IB}}^{(N)}(A_{N}=a,S_{N}=s,R_{N}=r)\frac{ \mathbb{P}_{\mathrm{G},\mathrm{IB}}^{(N)}(A_{N}=a)}{\mathbb{P}_{\mathrm{L}, \mathrm{IB}}^{(N)}(A_{N}=a)}. \tag{2.4}\] Hence, the two rate functions \(I_{\mathrm{L},\mathrm{IB}}\) and \(I_{\mathrm{G},\mathrm{IB}}\) differ only by the exponential rate of the quotient. The latter is easily identified. Indeed, observe that \(A_{N}\) is \(\mathrm{Bin}_{bN,p}\) distributed under \(\mathbb{P}_{\mathrm{G},\mathrm{IB}}^{(N)}\), hence, if \(a_{N}\in\frac{1}{N}\mathbb{N}_{0}\) satisfies \(a_{N}\to a\), then Stirling's formula \((N!=(N/\mathrm{e})^{N}\mathrm{e}^{o(N)}\) for \(N\to\infty)\) shows that \[J_{\mathrm{G}}(a):=-\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{\mathrm{G}, \mathrm{IB}}^{{}^{(N)}}\big{(}A_{N}=Na_{N}\big{)}=a\log\frac{a}{p}+(b-a)\log \frac{b-a}{1-p}-b\log b. \tag{2.5}\] Furthermore, under \(\mathbb{P}_{\mathrm{L},\mathrm{IB}}^{{}^{(N)}}\), \(A_{N}\) is distributed as the sum of \(N\) independent \(\mathrm{Bin}_{bN,p/N}\)-distributed random variables. We showed in Section 2.1 (see (2.1)) that \(A_{N}\) is exponentially equivalent with a sum of \(N\) independent \(\mathrm{Poi}_{bp}\)-distributed random variables, hence \(A_{N}\) satisfies an LDP with the same rate function, more precisely, \[J_{\mathrm{L}}(a):=-\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}_{\mathrm{L}, \mathrm{IB}}^{{}^{(N)}}\big{(}A_{N}=Na_{N}\big{)}=pb-a+a\log\frac{a}{pb}. \tag{2.6}\] Hence, \(\frac{1}{N}(A_{N},S_{N},R_{N})\) under \(\mathbb{P}_{\mathrm{G},\mathrm{IB}}^{{}^{(N)}}\) satisfies an LDP with rate function \[I_{\mathrm{G},\mathrm{IB}}(a,s,r)=I_{\mathrm{L},\mathrm{IB}}(a,s,r)-J_{ \mathrm{G}}(a)+J_{\mathrm{L}}(a),\] and this is equal to right hand side of (1.2). 
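As a numerical sanity check of this comparison (a small Python sketch of ours, not part of the original argument), one can evaluate the two rate functions \(J_{\mathrm{L}}\) and \(J_{\mathrm{G}}\) of \(\frac{1}{N}A_{N}\) from (2.6) and (2.5) (equivalently (1.4) and (1.5)) on a grid of values of \(a\): both vanish at \(a=bp\), and in a neighbourhood of this minimizer one observes \(J_{\mathrm{G}}\geq J_{\mathrm{L}}\), in line with Remark 1.4.

```python
import math

def J_L(a, b, p):
    """Rate function of A_N/N under the local rule, cf. (1.4)/(2.6)."""
    return p * b - a + a * math.log(a / (p * b))

def J_G(a, b, p):
    """Rate function of A_N/N under the global rule, cf. (1.5)/(2.5)."""
    return a * math.log(a / p) + (b - a) * math.log((b - a) / (1 - p)) - b * math.log(b)

if __name__ == "__main__":
    b, p = 2.0, 0.4          # illustrative choice; both rate functions vanish at a = b*p = 0.8
    for a in (0.6, 0.7, 0.8, 0.9, 1.0):
        print(f"a={a:.1f}  J_L={J_L(a, b, p):.5f}  J_G={J_G(a, b, p):.5f}")
```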
### Alternate proof of Theorem 1.1 under Rule (L) In this section, we indicate an alternative proof of the LDP of Theorem 1.1 in Scenario (IB) under Rule (L) with an alternate representation of the rate function that is very different from (1.1); see (2.8). Indeed, it does not involve any entropy, but is instead based on formulas that appear in connection with Cramer's theorem, i.e., Legendre transforms of the logarithm of moment generating functions. We use the notation of Section 2.1. Recall that \(A_{N}^{{}^{(j)}}\) is the number of emission attempts in the \(j\)-th micro time slot, \((\frac{j-1}{N},\frac{j}{N}]\). Then \(A_{N}^{{}^{(1)}},\ldots,A_{N}^{{}^{(N)}}\) are i.i.d., and each of them is \(\mathrm{Bin}_{bN,p/N}\)-distributed. Fix \(a,s,r\in\mathbb{N}_{0}\) and consider the event \(\{A_{N}=a,S_{N}=s,R_{N}=r\}\). This is the event that in precisely \(r\) time slots the corresponding \(A_{N}^{{}^{(j)}}\) is \(\leq\kappa\) (these time slots are successful) and in all the other \(N-r\) time slots it is \(>\kappa\) (these slots are unsuccessful), and that the total sum of all the \(A_{N}^{{}^{(j)}}\) with \(A_{N}^{{}^{(j)}}\) is equal to \(s\). By permutation symmetry of the time slots, we may assume that all the first \(r\) time slots are successful and the remainning ones are not. The total number of distinctions of the \(N\) slots into \(r\) successful and \(N-r\) unsuccessful ones is \(\binom{N}{r}\). Hence, by independence of the \(A_{N}^{{}^{(j)}}\)'s and after relabeling, we have \[\begin{split}\mathbb{P}_{\mathrm{L},\mathrm{IB}}^{{}^{(N)}}& (A_{N}=a,S_{N}=s,R_{N}=r)\\ &=\binom{N}{r}\mathbb{P}\Big{(}A_{N}^{{}^{(j)}}\leq\kappa\; \forall j\in[r],\sum_{j\in[r]}A_{N}^{{}^{(j)}}=s\Big{)}\\ &\qquad\times\mathbb{P}\Big{(}A_{N}^{{}^{(j)}}>\kappa\;\forall j \in[N-r],\sum_{j\in[N-r]}A_{N}^{{}^{(j)}}=a-s\Big{)}\\ &=\binom{N}{r}\mathrm{Bin}_{bN,p/N}([0,\kappa])^{r}\,\mathsf{P}_{ \leq\kappa}^{{}^{(N)}}\Big{(}\frac{1}{r}\sum_{j\in[r]}A_{N}^{{}^{(j)}}=\frac{ s}{r}\Big{)}\\ &\qquad\times\mathrm{Bin}_{bN,p/N}((\kappa,\infty))^{N-r}\, \mathsf{P}_{>\kappa}^{{}^{(N)}}\Big{(}\frac{1}{N-r}\sum_{j\in[N-r]}A_{N}^{{}^{ (j)}}=\frac{a-s}{N-r}\Big{)},\end{split} \tag{2.7}\] where \(\mathsf{P}_{\leq\kappa}^{{}^{(N)}}\) is the expectation with respect to independent \(\mathrm{Bin}_{bN,p/N}\)-distributed variables, conditioned on being \(\leq\kappa\), and \(\mathsf{P}_{>\kappa}^{{}^{(N)}}\) is defined analogously. Now the remainder of the proof is clear. We replace \(a,s,r\in\mathbb{N}\) by \(a_{N}N,s_{N}N,r_{N}N\in\mathbb{N}\) with \(a_{N}\to a\), \(s_{N}\to s\) and \(r_{N}\to r\) for some \(a,s,r\in(0,\infty)\) and we find easily the large-\(N\) exponential asymptotics of the binomial term and the two probability powers, and for the two probabilities involving the sums of \(A_{N}^{{(j)}}\)'s, we can use Cramer's theorem. Here are some details: We again use the Poisson limit theorem to see that \(\operatorname{Bin}_{bN,p/N}([0,\kappa])^{r_{N}N}=\operatorname{Poi}_{pb}([0, \kappa])^{rN}\mathrm{e}^{o(N)}\) as \(N\to\infty\) and the analogous statement for the other probability term. Furthermore, we leave to the reader to check that the average of the \(A_{N}^{{(j)}}\) under \(\operatorname{\mathsf{P}}_{\leq\kappa}^{{(N)}}\) satisfy the same LDP as the average of independent \(\operatorname{Poi}_{bp}\)-distributed random variables, conditioned on being \(\leq\kappa\) and analogously with \(>\kappa\) instead of \(\leq\kappa\). 
(This is implied by a variant the exponential equivalence that we proved in Section 2.1: see (2.1).) The latter do satisfy an LDP, according to Cramer's theorem, with rate function equal to the Legendre transform of \(y\mapsto\log\mathbb{E}_{\leq\kappa}[\mathrm{e}^{yX_{1}}]\), where \(\mathbb{E}_{\leq\kappa}\) is the expectation with respect to \(\operatorname{\mathsf{P}}_{\leq\kappa}\), and \(X_{1}\) is a corresponding random variable. Hence we have that \(\frac{1}{r_{N}N}\sum_{j\in[r_{N}N]}A_{N}^{{(j)}}\) satisfies an LDP under \(\operatorname{\mathsf{P}}_{\leq\kappa}^{{(N)}}\) on the scale \(N\) with rate function \[x\mapsto=rJ_{\leq\kappa}(x),\qquad\text{where}\qquad J_{\leq\kappa}(x)=\sup_{ y\in\mathbb{R}}\Big{(}xy-\log\mathbb{E}_{\leq\kappa}[\mathrm{e}^{yX_{1}}]\Big{)},\] and an analogous assertion for the other probability term (last line of (2.7)). Note that Stirling's formula gives that \(-\lim_{N\to\infty}\frac{1}{N}\log\binom{N}{r_{N}N}=r\log r+(1-r)\log(1-r)\). Substitution all this in the last two lines of (2.7), we obtain that \(\frac{1}{N}(A_{N},S_{N},R_{N})\) satisfies under Rule (L) in Scenario (IB) an LDP on the scale \(N\) with rate function equal to \[\widetilde{I}_{\mathrm{L},\mathrm{IB}}(a,s,r) =r\log r+(1-r)\log(1-r)+rJ_{\leq\kappa}(\tfrac{s}{r})-r\log \operatorname{Poi}_{bp}([0,\kappa])\] \[\qquad+(1-r)J_{>\kappa}(\tfrac{a-s}{1-r})-(1-r)\log \operatorname{Poi}_{bp}((\kappa,\infty)).\] This can be rewritten as follows. Introducing \(I_{\leq\kappa}(x)=\sup_{z\in\mathbb{R}}(xz-\log\sum_{i=0}^{\kappa}\mathrm{e}^ {zi}/i!)\), we see, after making the substitution \(\mathrm{e}^{z}=bp\mathrm{e}^{y}\), i.e., \(y=z-\log(pb)\), that \[rJ_{\leq\kappa}(\tfrac{s}{r})-r\log\operatorname{Poi}_{bp}([0,\kappa])=rbp-s \log(bp)+rI_{\leq\kappa}(\tfrac{s}{r}),\] and an analogous formula for the last term, resulting in \[\widetilde{I}_{\mathrm{L},\mathrm{IB}}(a,s,r)=rI_{\leq\kappa}(\tfrac{s}{r})+( 1-r)I_{>\kappa}(\tfrac{a-s}{1-r})+bp-a\log(bp)+r\log r+(1-r)\log(1-r). \tag{2.8}\] Certainly, this function must coincide with \(I_{\mathrm{L},\mathrm{IB}}\) defined in (1.1), but this is admittedly hard to see. ### Proof of Theorem 1.7 under Rule (L) We are now proving the LDP of Theorem 1.7 in Scenario (MC) under the Rule (L). We recall some of the notation from Section 2.1: for \(i\in[bN]\) and \(j\in[N]\), we let \(X_{i}^{{(j)}}\in\{0,1\}\) be the indicator on the event that the \(i\)-th participant chooses to attempt to send a message in the \(j\)-th time slot. All these random variables are independent Bernoulli random variables with parameter \(p/N\). Let \(A_{N}^{{(j)}}=\sum_{i\in[bN]}1\!\!1\{X_{i}^{{(j)}}=1\}\), then \(A_{N}^{{(j)}}\) is the number of transmission attempts. Clearly, \(A_{N}^{{(j)}}\) is binomially distributed with parameters \(bN\) and \(p/N\), and the collection of them over \(j\in[N]\) is independent. Furthermore, \(A_{N}=\sum_{j=1}^{N}A_{N}^{{(j)}}\). Let us identify the distribution of the number \(S_{N}^{{(j)}}\) of successes in the \(j\)-th slot given that there are \(a=A_{N}^{{(j)}}\) attempts. We observe that the vector of numbers \((Z_{1},\dots,Z_{\kappa})\) of message transmission attempts \(Z_{k}\) in the \(k\)-th channel is multinomially distributed with parameter \(a=\sum_{k\in[\kappa]}Z_{k}\) and \(\kappa\). 
This means, for any \(\alpha\in(0,\infty)\), that \[\begin{split}\mathbb{P}_{\mathrm{L,MC}}^{(N)}\big{(}S_{N}^{(j)}=s|A_{N}^{(j)}=a\big{)}&=\sum_{z_{1},\ldots,z_{\kappa}\in\mathbb{N}_{0}:\,\sum_{k}z_{k}=a,\,\sum_{k}\mathbb{1}\{z_{k}=1\}=s}\kappa^{-a}\binom{a}{(z_{k})_{k}}\\ &=\frac{a!}{\kappa^{a}}\alpha^{-a}\mathrm{e}^{\alpha\kappa}\sum_{z_{1},\ldots,z_{\kappa}\in\mathbb{N}_{0}:\,\sum_{k}z_{k}=a,\,\sum_{k}\mathbb{1}\{z_{k}=1\}=s}\prod_{k\in[\kappa]}\bigg{(}\frac{\alpha^{z_{k}}}{z_{k}!}\mathrm{e}^{-\alpha}\bigg{)}\\ &=\frac{1}{\mathrm{Poi}_{\alpha\kappa}(a)}\mathrm{Poi}_{\alpha}^{\otimes\kappa}\Big{(}\sum_{k\in[\kappa]}X_{k}=a,\sum_{k\in[\kappa]}\mathbb{1}_{\{X_{k}=1\}}=s\Big{)},\end{split} \tag{2.9}\] where \(X_{1},\ldots,X_{\kappa}\) are independent \(\mathrm{Poi}_{\alpha}\)-distributed variables. We obtain for the joint distribution of \(A_{N}^{(j)}\) and \(S_{N}^{(j)}\) that \[\mathbb{P}_{\mathrm{L,MC}}^{(N)}\big{(}A_{N}^{(j)}=a,S_{N}^{(j)}=s\big{)}=\frac{\mathrm{Bin}_{bN,p/N}(a)}{\mathrm{Poi}_{\alpha\kappa}(a)}\mathrm{Poi}_{\alpha}^{\otimes\kappa}\Big{(}\sum_{k\in[\kappa]}X_{k}=a,\sum_{k\in[\kappa]}\mathbb{1}_{\{X_{k}=1\}}=s\Big{)},\qquad(a,s)\in\Xi. \tag{2.10}\] We now pick \(\alpha=bp/\kappa\) and observe that the quotient on the right-hand side then converges towards one as \(N\to\infty\), according to the Poisson limit theorem. Furthermore, the last term was introduced in (1.7) under the name \(M(a,s)\). Hence, the pair \((A_{N},S_{N})\) is equal to the sum of \(N\) independent copies of a pair with distribution \(M_{N}\) that converges pointwise towards \(M\) as \(N\to\infty\). Analogously to the corresponding part in Section 2.1 (see around (2.1)), one shows that \(\frac{1}{N}(\widetilde{A}_{N},\widetilde{S}_{N})\) and \(\frac{1}{N}(A_{N},S_{N})\) are exponentially equivalent, where the former is \(\frac{1}{N}\) times a sum of \(N\) independent random vectors \((A^{(1)},S^{(1)}),\ldots,(A^{(N)},S^{(N)})\) with distribution \(M\) each. Hence both satisfy the same LDP, if either of them satisfies one. Indeed, \(\frac{1}{N}(\widetilde{A}_{N},\widetilde{S}_{N})\) does satisfy the LDP of Theorem 1.7 under Rule (L), as is seen in the same way as in Section 2.1. One uses that the empirical measure \(\widetilde{\mu}_{N}=\frac{1}{N}\sum_{j=1}^{N}\delta_{(A^{(j)},S^{(j)})}\) satisfies an LDP with rate function \(\mu\mapsto H(\mu|M)\) and that \(\frac{1}{N}(\widetilde{A}_{N},\widetilde{S}_{N})=\sum_{(i,j)\in\Xi}\widetilde{\mu}_{N}(i,j)(i,j)\) is a function of \(\widetilde{\mu}_{N}\) that is, after applying some cutting procedure, continuous. Then the contraction principle implies that \(\frac{1}{N}(\widetilde{A}_{N},\widetilde{S}_{N})\) satisfies the LDP of Theorem 1.7 under Rule (L). ### Proof of Theorem 1.7 under Rule (G) In this section, we prove the LDP for \(\frac{1}{N}(A_{N},S_{N})\) in Scenario (MC) under Rule (G). We are able to use the identification of their distribution from Section 2.4 here, with a different choice of parameters. Indeed, recall that \(A_{N}\) is \(\mathrm{Bin}_{bN,p}\)-distributed. Given that \(A_{N}=a\) attempts are made during the entire time interval \([0,1]\), each of the \(a\) attempts makes a random and uniform choice among \(N\) time slots and \(\kappa\) channels altogether. Furthermore, in each channel in each slot, the success criterion is that no more than one choice is made there. This means that the distribution of \(S_{N}\) given \(\{A_{N}=a\}\) is the same as in (2.9) with \(\kappa N\) instead of \(\kappa\). Again, we choose \(\alpha=bp/\kappa\).
Hence, for any \((a,s)\in\Xi\), \[\mathbb{P}_{\mathrm{G,MC}}^{(N)}\big{(}A_{N}=a,S_{N}=s\big{)}=\frac{\mathrm{Bin}_{bN,p}(a)}{\mathrm{Poi}_{bpN}(a)}\mathrm{Poi}_{bp/\kappa}^{\otimes\kappa N}\Big{(}\sum_{i=1}^{\kappa N}X_{i}=a,\sum_{i=1}^{\kappa N}\mathbb{1}_{\{X_{i}=1\}}=s\Big{)}. \tag{2.11}\] We use this now for \((a,s)\) replaced by \((a_{N}N,s_{N}N)\in\mathbb{N}^{2}\) with \(a_{N}\to a\) and \(s_{N}\to s\) for some \((a,s)\in\Xi\) and see that the quotient on the right-hand side behaves like \[\lim_{N\to\infty}\frac{1}{N}\log\frac{\mathrm{Bin}_{bN,p}(a_{N}N)}{\mathrm{Poi}_{bpN}(a_{N}N)} =\lim_{N\to\infty}\frac{1}{N}\log\frac{(bN/\mathrm{e})^{bN}p^{aN}(1-p)^{(b-a)N}(aN)!\mathrm{e}^{bpN}}{(aN)!((b-a)N/\mathrm{e})^{(b-a)N}(bpN)^{aN}}\] \[=-\Big{[}a-bp+(b-a)\log\frac{1-\frac{a}{b}}{1-p}\Big{]},\] using also Stirling's formula. The second term on the right-hand side of (2.11) is the distribution of the sum of the pairs \((X_{i},\mathbb{1}_{\{X_{i}=1\}})\) over \(\kappa N\) independent, \(\mathrm{Poi}_{bp/\kappa}\)-distributed random variables \(X_{1},\ldots,X_{\kappa N}\), evaluated at \((a,s)\). This is a two-dimensional functional of their empirical measure \(\mu_{\kappa N}\), and the latter satisfies an LDP with speed \(\kappa N\) with rate function equal to \(\mu\mapsto H(\mu|\mathrm{Poi}_{bp/\kappa})\). This functional is not a continuous one, since the identity map is not bounded, but in Section 2.1 (see (2.2)) we saw how to perform a suitable cutting argument. Hence, we know that the pair \(\frac{1}{\kappa N}\sum_{i=1}^{\kappa N}(X_{i},\mathbb{1}_{\{X_{i}=1\}})=(\langle\mu_{\kappa N},\mathrm{id}\rangle,\mu_{\kappa N}(\{1\}))\) satisfies, according to the contraction principle, an LDP with speed \(N\) with rate function \[(a,s)\mapsto\kappa\inf\Big{\{}H(\mu|\mathrm{Poi}_{bp/\kappa})\colon\mu\in\mathcal{M}_{1}(\mathbb{N}_{0}),\sum_{g\in\mathbb{N}_{0}}\mu(g)g=\frac{a}{\kappa},\mu(\{1\})=\frac{s}{\kappa}\Big{\}}.\] (The prefactor \(\kappa\) comes from the change of scales from \(\kappa N\) to \(N\) in the LDP, and the \(\kappa\) in the two denominators comes from the normalization of \(\sum_{i=1}^{\kappa N}\) by \(\kappa N\) instead of \(N\).) Summarizing, this ends the proof of Theorem 1.7 under Rule (G). ## 3. Optimizing and conditioning In this section we prove Lemma 1.14 and Theorem 1.15. ### Optimizing \(p\mapsto s_{p}\) In this section, we prove Lemma 1.14, that is, we analyse the maximizer of the map \((0,\infty)\ni p\mapsto s_{\mathrm{IB}}(p,\kappa)\), the typical throughput for Scenario (IB) under Rule (L). We abbreviate \(s_{p}=s_{\mathrm{IB}}(p,\kappa)\). The analytic function \(g(a)=s_{a/b}=a\mathrm{e}^{-a}\sum_{i=0}^{\kappa-1}\frac{a^{i}}{i!}\) is positive in \((0,\infty)\) with limits \(0\) at \(a\downarrow 0\) and \(a\to\infty\), hence it has at least one maximizer \(a_{*}\), which is characterised by \(g^{\prime}(a_{*})=0\). We see that (with \(f_{\leq}(a)=\sum_{i=0}^{\kappa}\frac{a^{i}}{i!}\) and \(a_{p}=bp\)) \[\frac{\mathrm{d}}{\mathrm{d}p}s_{p}=b\,\mathrm{e}^{-a_{p}}\big{(}f^{\prime}_{\leq}(a_{p})+a_{p}f^{\prime\prime}_{\leq}(a_{p})-a_{p}f^{\prime}_{\leq}(a_{p})\big{)}=b\mathrm{e}^{-bp}\Big{[}\sum_{i\leq\kappa-1}\frac{(bp)^{i}}{i!}-\frac{(bp)^{\kappa}}{(\kappa-1)!}\Big{]},\qquad p>0. \tag{3.1}\] Hence, (1.13) characterizes the maximizer(s) \(p_{*}\), but at this stage we do not yet know how many maximizers exist.
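As a quick numerical illustration of the characterisation (1.13) (a small Python sketch of ours, not part of the proof), one can solve \(a_{*}^{\kappa}/(\kappa-1)!=\sum_{i\leq\kappa-1}a_{*}^{i}/i!\) by bisection for several values of \(\kappa\); the output is consistent with \(a_{*}=bp_{*}\sim\kappa\) as \(\kappa\to\infty\). The existence and uniqueness of the solution are established analytically in the remainder of this proof.

```python
import math

def f(a, kappa):
    """f(a) = a^kappa/(kappa-1)! - sum_{i <= kappa-1} a^i/i!; equation (1.13) reads f(a_*) = 0."""
    return a**kappa / math.factorial(kappa - 1) - sum(a**i / math.factorial(i) for i in range(kappa))

def a_star(kappa):
    """Solve f(a) = 0 by bisection; f(0) < 0 and f(2*kappa) > 0 bracket the root."""
    lo, hi = 0.0, 2.0 * kappa
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid, kappa) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for kappa in (1, 2, 5, 10, 50):
        a = a_star(kappa)
        print(f"kappa={kappa:3d}  a_* = b p_* = {a:8.4f}   a_*/kappa = {a / kappa:.4f}")
```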
Using elementary calculus, we see that a solution \(a_{*}\) to (1.13) exists since the polynomial \(f(a)=-(\kappa-1)!b\mathrm{e}^{-a}\frac{\mathrm{d}}{\mathrm{d}p}s_{p}=a^{\kappa }-\sum_{i\leq\kappa-1}a^{i}\frac{(\kappa-1)!}{i!}\) starts with \(f(0)<0\) and satisfies \(f(a)\to\infty\) as \(a\to\infty\). Note that, for any \(a>0\), we have \[f(a) \geq a^{\kappa}-\sum_{i\leq\kappa-1}a^{i}(\kappa-1)^{\kappa-1-i}=a ^{\kappa}-(\kappa-1)^{\kappa-1}\sum_{i\leq\kappa-1}\Big{(}\frac{a}{\kappa-1} \Big{)}^{i}=a^{\kappa}+\frac{(\kappa-1)^{\kappa}-a^{\kappa}}{a-(\kappa-1)}\] \[=\frac{a^{\kappa}(a-\kappa)+(\kappa-1)^{\kappa}}{a-(\kappa-1)},\] and the latter is positive for any \(a>\kappa-1\). Hence, we even have that \(a_{*}\leq\kappa-1\). Furthermore, there is only one solution, since \(f^{\prime}(a)=\kappa a^{\kappa-1}-\sum_{i\leq\kappa-1}a^{i}\frac{(\kappa-1)!}{i! }+a^{\kappa-1}\) for any \(a\), and for any solution \(a_{*}\) we see that \(f^{\prime}(a_{*})=(\kappa+1)a_{*}^{\kappa-1}-a_{*}^{\kappa}=a_{*}^{\kappa-1}[ \kappa+1-a_{*}]\), which is positive. Hence, \(f\) has precisely one zero in \([0,\infty)\). It is negative left of \(a_{*}\) and positive right of it. Accordingly, \(p\mapsto s_{p}\) is increasing in \([0,p_{*}]\) and decreasing in \([p_{*},\infty)\). We obtain a lower bound for \(a_{*}\) by \[f(a)\leq a^{\kappa}-a^{i}\frac{(\kappa-1)!}{i!}<a^{i}\Big{(}a^{\kappa-i}-(i+1) ^{\kappa-i-1}\Big{)},\qquad a>0,i\in\{0,\ldots,\kappa-1\}.\] This upper bound is zero for \(a=(i+1)^{1-1/(\kappa-i)}\), hence \(a_{*}\geq\max_{i=0}^{\kappa-1}(i+1)^{1-1/(\kappa-i)}\). Taking \(i=\kappa-\sqrt{\kappa}\) gives \(a_{*}\geq(\kappa-\sqrt{\kappa})^{1-\kappa^{-1/2}}=\kappa(1+o(1))\) as \(\kappa\to\infty\). This finishes the proof of Lemma 1.14. ### Conditioning on successes In this section, we prove Theorem 1.15. Recall that we conceive the maximal throughput per micro slot, \(s=s_{p}\), as a function of \(p\). Recall from Lemma 1.14 that the maximal \(p^{*}\) for \(p\mapsto s_{p}\) is characterized by \[\frac{a_{p}^{\kappa}}{(\kappa-1)!}=\sum_{i=0}^{\kappa-1}\frac{a_{p}^{i}}{i!}, \qquad a_{p}=bp. \tag{3.2}\] Furthermore recall that \(a_{p}(s)\) denotes the minimising \(a\) for the map \(a\mapsto\inf_{r}I_{\mathrm{L},\mathrm{IB}}^{(p)}(a,s,r)\), and note that \(a_{p}=a_{p}(s_{p})=bp\). Here we answer the question of the reason for few number of successes. The following lemma implies Theorem 1.15. **Lemma 3.1**.: _For any \(p\in(0,\infty)\), we have \(a_{p}^{\prime}(s_{p})<0\) for \(p<p_{*}\) and \(a_{p}^{\prime}(s_{p})>0\) for \(p>p_{*}\). In particular, for \(s\) in a neighbourhood of \(s_{p}\), (1.14) and (1.15) hold._ _Furthermore, for \(p=p^{*}\), we have \(a_{p^{*}}(s)>a_{p^{*}}(s_{p^{*}})=bp_{*}\) for any \(s\in[0,b]\setminus\{bp_{*}\}\)._ Proof.: Let us first analyse \(\inf_{r}I_{\mathrm{L},\mathrm{IB}}^{(p)}(a,s,r)\) for fixed \(a,s\in(0,\infty)\) satisfying \(a>s\). 
We benefit from the representation in (1.1): We have that \[\inf_{r}I_{\mathrm{L},\mathrm{IB}}^{(p)}(a,s,r) =\inf_{r}\inf\{H(\mu|\mathrm{Poi}_{pb})\colon\langle f,\mu\rangle =(a,s,r)\}\] \[=\inf\Big{\{}H(\mu|\mathrm{Poi}_{pb})\colon\sum_{k=0}^{\infty}k\mu_{k}=a,\sum_{k=0}^{\kappa}k\mu_{k}=s\Big{\}}\] \[=\inf\Big{\{}H(\mu|\mathrm{Poi}_{pb})\colon\langle\mu,\mathrm{id}\rangle=a,\langle\mu,\mathrm{id}|_{\leq\kappa}\rangle=s\Big{\}},\] where \(\mathrm{id}\) is the identity function on \(\mathbb{N}_{0}\) and \(\mathrm{id}|_{\leq\kappa}(k)=k\mathbb{1}_{[0,\kappa]}(k)\); and we used the notation \(\langle\mu,g\rangle\) for the integral of a function \(g\) with respect to a measure \(\mu\). Now we apply standard variational calculus. Consider a minimizer \(\mu\) of the last formula. A standard argument shows that \(\mu_{k}>0\) for any \(k\). Fix some compactly supported \(\gamma\colon\mathbb{N}_{0}\to\mathbb{R}\) satisfying \(\gamma\bot\mathbb{1},\gamma\bot\mathrm{id}\) and \(\gamma\bot\mathrm{id}|_{\leq\kappa}\). Then, for any \(\varepsilon\in\mathbb{R}\) with sufficiently small \(|\varepsilon|\), the measure \(\mu+\varepsilon\gamma\) is admissible. From minimality, we deduce that \[0=\partial_{\varepsilon}|_{\varepsilon=0}H(\mu+\varepsilon\gamma|\mathrm{Poi}_{pb})=\sum_{k}\Big{(}\gamma_{k}\log\frac{\mu_{k}}{q_{k}}+\mu_{k}\frac{\gamma_{k}}{\mu_{k}}\Big{)}=\Big{\langle}\gamma,\log\frac{\mu}{q}\Big{\rangle},\] where we put \(q_{k}=\mathrm{Poi}_{pb}(k)\). Hence, \(\log\frac{\mu}{q}\) is a linear combination of \(\mathbb{1}\), \(\mathrm{id}\) and \(\mathrm{id}|_{\leq\kappa}\). That is, there are \(A,B,C\in\mathbb{R}\) such that \[\mu_{k}=q_{k}\mathrm{e}^{A}\mathrm{e}^{Bk}\times\begin{cases}\mathrm{e}^{Ck}&\text{for $k\leq\kappa$},\\ 1&\text{for $k>\kappa$},\end{cases}\qquad k\in\mathbb{N}_{0}. \tag{3.3}\] We note that \(A,B\) and \(C\) are well-defined functions of \(a\) and \(s\), since \(\mathbb{1}\), \(\mathrm{id}\) and \(\mathrm{id}|_{\leq\kappa}\) are linearly independent. Now using that \(\langle\mu,\mathbb{1}\rangle=1\) and \(\langle\mu,\mathrm{id}\rangle=a\) and \(\langle\mu,\mathrm{id}|_{\leq\kappa}\rangle=s\), and introducing the notation \[\varphi(B,C):=\log\Big{(}\sum_{k=0}^{\kappa}q_{k}\mathrm{e}^{(B+C)k}+\sum_{k>\kappa}q_{k}\mathrm{e}^{Bk}\Big{)},\qquad B,C\in\mathbb{R}, \tag{3.4}\] we see that \(A=-\varphi(B,C)\) and that \(B=B(a,s)\) and \(C=C(a,s)\) are characterised by \[a=\partial_{B}\varphi(B,C)=\frac{\sum_{k\leq\kappa}kq_{k}\mathrm{e}^{(B+C)k}+\sum_{k>\kappa}kq_{k}\mathrm{e}^{Bk}}{\sum_{k\leq\kappa}q_{k}\mathrm{e}^{(B+C)k}+\sum_{k>\kappa}q_{k}\mathrm{e}^{Bk}},\qquad s=\partial_{C}\varphi(B,C)=\frac{\sum_{k\leq\kappa}kq_{k}\mathrm{e}^{(B+C)k}}{\sum_{k\leq\kappa}q_{k}\mathrm{e}^{(B+C)k}+\sum_{k>\kappa}q_{k}\mathrm{e}^{Bk}}.\] Moreover, minimising additionally over \(a\) (which is what defines \(a_{p}(s)\)) forces \(B(a_{p}(s),s)=0\), since the partial derivative of the constrained minimum with respect to \(a\) equals \(B\). Differentiating the above characterisation along the curve \(s\mapsto(a_{p}(s),s)\), on which \(B\equiv 0\), yields \[a_{p}^{\prime}(s)=\frac{\partial_{B}\partial_{C}\varphi\big{(}0,C(a_{p}(s),s)\big{)}}{\partial_{C}^{2}\varphi\big{(}0,C(a_{p}(s),s)\big{)}}.\]
First we show that the denominator is positive: \[\partial_{C}^{2}\varphi(B,C) =\frac{\sum\limits_{k\leq\kappa}k^{2}q_{k}\mathrm{e}^{(B+C)k}\big{(} \underset{k\leq\kappa}{\sum}q_{k}\mathrm{e}^{(B+C)k}+\underset{k>\kappa}{\sum}q _{k}\mathrm{e}^{Bk}\big{)}-\big{(}\underset{k\leq\kappa}{\sum}kq_{k}\mathrm{e}^ {(B+C)k}\big{)}^{2}}{\big{(}\underset{k\leq\kappa}{\sum}q_{k}\mathrm{e}^{(B+C)k }+\underset{k>\kappa}{\sum}q_{k}\mathrm{e}^{Bk}\big{)}^{2}}\] \[\geq\frac{\big{(}\underset{k\leq\kappa}{\sum}k^{2}q_{k}\mathrm{e}^ {(B+C)k}\big{)}\big{(}\underset{k\leq\kappa}{\sum}q_{k}\mathrm{e}^{(B+C)k} \big{)}-\big{(}\underset{k\leq\kappa}{\sum}kq_{k}\mathrm{e}^{(B+C)k}\big{)}^{ 2}}{\big{(}\underset{k\leq\kappa}{\sum}q_{k}\mathrm{e}^{(B+C)k}+\underset{k> \kappa}{\sum}q_{k}\mathrm{e}^{Bk}\big{)}^{2}}>0,\qquad B,C\in\mathbb{R},\] as a standard symmetrisation shows. Next we consider the numerator in (3.10): \[\partial_{B}\partial_{C}\varphi(0,C)=\Big{(} \sum_{k\leq\kappa}q_{k}\mathrm{e}^{Ck}+\sum_{k>\kappa}q_{k}\Big{)} ^{-2} \tag{3.11}\] \[\Big{[} \sum_{k\leq\kappa}k^{2}q_{k}\mathrm{e}^{Ck}\Big{(}\sum_{k\leq \kappa}q_{k}\mathrm{e}^{Ck}+\sum_{k>\kappa}q_{k}\Big{)}-\Big{(}\sum_{k\leq \kappa}kq_{k}\mathrm{e}^{Ck}+\sum_{k>\kappa}kq_{k}\Big{)}\sum_{k\leq\kappa}kq_ {k}\mathrm{e}^{Ck}\Big{]}.\] No we use the facts that \(\sum_{k\leq\kappa}q_{k}+\sum_{k>\kappa}q_{k}=1\) (since \((q_{k})_{k\in\mathbb{N}_{0}}\) is a probability distribution) and \(\sum_{k\in\mathbb{N}_{0}}kq_{k}=bp=a_{p}=a_{p}(s_{p})\) (see Corollary 1.12; \((q_{k})_{k\in\mathbb{N}_{0}}=\mathrm{Poi}_{pb}\) has expectation \(pb\)). Furthermore, note that \(C(a_{p}(s_{p}),s_{p})=0\) by optimality (which can be seen in the same way as the fact that \(B(a_{p}(s),s)=0\) above). Then we get \[a_{p}^{\prime}(s_{p}) =\partial_{B}\partial_{C}\varphi(0,0)=\sum_{k\leq\kappa}k^{2}q_{k }-bp\sum_{k\leq\kappa}kq_{k}\] \[=bp\mathrm{e}^{-bp}\Big{[}\sum_{k\leq\kappa-1}(k+1)\frac{(bp)^{k} }{k!}-bp\sum_{k\leq\kappa-1}\frac{(bp)^{k}}{k!}\Big{]}=p\frac{\mathrm{d}}{ \mathrm{d}p}s_{p},\] as we see from (3.1). Recall that \(p_{*}\) is the unique maximizer for \(p\mapsto s_{p}\). According to Lemma 1.14, this (and therefore \(a_{p}^{\prime}(s_{p})\)) is positive if \(p<p^{*}\) and negative if \(p>p^{*}\). This implies all assertions of Lemma 3.1 for \(p\neq p^{*}\). Now we consider the case \(p=p^{*}\) characterised in (3.2). Here it will not be successful to rely on the characterisation of \(a_{p}(s)\) by \(0=B(a_{p}(s),s)\) and to consider the derivative with respect to \(s\) in \(s=s_{p_{*}}\) only, since \(\partial_{B}\partial_{C}\varphi(0,0)=0\) for \(p=p^{*}\). Instead, we use (3.5) and explicitly look at the difference \[a_{p^{*}}(s)-a_{p^{*}}(s_{p^{*}}) =\partial_{B}\varphi(0,C)-bp^{*}=\frac{\sum_{k\leq\kappa}q_{k} \mathrm{e}^{Ck}[k-a_{p^{*}}]+\sum_{k>\kappa}q_{k}[k-a_{p^{*}}]}{\sum_{k\leq \kappa}q_{k}\mathrm{e}^{Ck}+\sum_{k>\kappa}q_{k}} \tag{3.12}\] \[=\frac{\sum_{k\leq\kappa}q_{k}[\mathrm{e}^{Ck}-1][k-a_{p^{*}}]}{ \sum_{k\leq\kappa}q_{k}\mathrm{e}^{Ck}+\sum_{k>\kappa}q_{k}},\] with \(C=C(a_{p^{*}}(s),s)\). We used in the last step that \(\sum_{k>\kappa}kq_{k}=a_{p^{*}}-\sum_{k\leq\kappa}kq_{k}\) and \(\sum_{k\in\mathbb{N}_{0}}q_{k}=1\). Note that \(C<0\) for \(s<s_{p^{*}}\) and \(C>0\) for \(s>s_{p^{*}}\). 
Indeed, a similar calculation as in (3.8) shows that \[\frac{\mathrm{d}}{\mathrm{d}s}\inf_{r,a}I_{\mathrm{L,IB}}^{(p^{*})}(a,s,r)=\frac {\mathrm{d}}{\mathrm{d}s}\Big{[}sC(a_{p^{*}}(s),s)-\varphi\big{(}0,C(a_{p^{*}}(s ),s)\big{)}\Big{]}=C(a_{p^{*}}(s),s),\qquad s\in(0,\infty).\] Now note that \(s_{p^{*}}\) is defined as the minimizer of the function \(s\mapsto\inf_{r,a}I_{\mathrm{L,IB}}^{(p^{*})}(a,s,r)\); hence it is decreasing left of the minimal point and increasing right of it. Write \(g(C)=\sum_{k=0}^{\kappa}q_{k}[\mathrm{e}^{Ck}-1][k-a_{p^{*}}]\) for the numerator of the right-hand side of (3.12). Clearly \(g(0)=0\). Recall that \(\partial_{B}\partial_{C}\varphi(0,0)=0\) hence the derivative of 3.12 with respect to \(C\) is \(0\). Clearly the derivative of (3.12) is \(0\) only if \(g^{\prime}(0)=0\). Hence observe that, for any \(C<0\), \[g^{\prime}(C)=\sum_{k=0}^{\kappa}kq_{k}\mathrm{e}^{Ck}(k-a_{p_{*}})<\mathrm{e} ^{Ca_{p_{*}}}\sum_{k\leq a_{p_{*}}}kq_{k}(k-a_{p_{*}})+\mathrm{e}^{Ca_{p_{*}}} \sum_{k\colon a_{p_{*}}<k\leq\kappa}kq_{k}(k-a_{p_{*}})=0.\] Hence, \(g\) is strictly decreasing in \((-\infty,0]\) and hence positive in \((-\infty,0)\). An analogous argument shows that \(g^{\prime}(C)>0\) for \(C>0\): \[g^{\prime}(C)=\sum_{k=0}^{\kappa}kq_{k}\mathrm{e}^{Ck}(k-a_{p_{*}})>\sum_{k \leq a_{p_{*}}}kq_{k}(k-a_{p_{*}})+\sum_{k\colon a_{p_{*}}<k\leq\kappa}kq_{k} (k-a_{p_{*}})=0.\] Hence \(g\) is strictly increasing and positive in \((0,\infty)\). This implies that \(a_{p_{*}}(s)>a_{p_{*}}(s_{p_{*}})\) for any \(s\neq s_{p_{*}}\) and finishes the proof of the lemma. **Acknowledgment.** The support of the Deutsche Akademische Auslandsdienst (DAAD) via the Project _Berlin-AIMS Network in Stochastic Analysis_ (Project-ID 57417853) is gratefully acknowledged.
2308.02818
Shape-dependent friction scaling laws in twisted layered material interfaces
Static friction induced by moir\'e superstructure in twisted incommensurate finite layered material interfaces reveals unique double periodicity and lack of scaling with contact size. The underlying mechanism involves compensation of incomplete moir\'e tiles at the rim of rigid polygonal graphene flakes sliding atop fixed graphene or h-BN substrates. The scaling of friction (or lack thereof) with contact size is found to strongly depend on the shape of the slider and the relative orientation between its edges and the emerging superstructure, partially rationalizing scattered experimental data. With careful consideration of the flake edge orientation, twist angle, and sliding direction along the substrate, one should therefore be able to achieve large-scale superlubricity via shape tailoring.
Weidong Yan, Xiang Gao, Wengen Ouyang, Ze Liu, Oded Hod, Michael Urbakh
2023-08-05T08:23:13Z
http://arxiv.org/abs/2308.02818v1
# Shape-dependent friction scaling laws in twisted layered material interfaces ###### Abstract Static friction induced by moire superstructure in twisted incommensurate finite layered material interfaces reveals unique double periodicity and lack of scaling with contact size. The underlying mechanism involves compensation of incomplete moire tiles at the rim of rigid polygonal graphene flakes sliding atop fixed graphene or \(h\)-BN substrates. The scaling of friction (or lack thereof) with contact size is found to strongly depend on the shape of the slider and the relative orientation between its edges and the emerging superstructure, partially rationalizing scattered experimental data. With careful consideration of the flake edge orientation, twist angle, and sliding direction along the substrate, one should therefore be able to achieve large-scale superlubricity via shape tailoring. **Keywords: friction scaling law, layered materials, twist angle, moire superlattice, interlayer potential, superlubricity.** The scaling up of structural superlubricity, a phenomenon of ultra-low friction and wear emerging in incommensurate layered material junctions, requires the study of the contact size dependence of static and kinetic friction in van der Waals (vdW) interfaces [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Previous experimental studies of two-dimensional (2D) contacts suggested various scaling laws of friction respect to the contact area (\(F\propto A^{\gamma}\)) with broad scattering of the measured scaling exponent, ranging from 0 (no scaling) to 0.5 [2, 7, 11, 12, 13, 14, 15, 16]. Complementary theoretical and computational studies attributed the observed scattered scaling behavior to the dependence of friction on the shape and relative orientation of the sliding contact [17, 18, 19], which dictate the specific arrangement of incomplete moire tiles along the rim of the slider [1, 2, 18, 20]. Notably, a friction scaling exponent of 0.5 was also found for amorphous 2D contacts [17, 21, 22, 23], and no scaling was found for triangular gold clusters in contact with hexagonal lattice surfaces [19]. Furthermore, different scaling exponents for the sliding energy barrier with contact length have also been predicted for quasi-one-dimensional double-walled nanotubes (DWNTs) depending on the inter-wall lattice commensurability [24, 25, 26]. In this Letter, we investigate the size dependence of the friction in twisted incommensurate interfaces formed between rigid nanoscale graphene flakes of various shapes and either graphene or \(h\)-BN rigid substrates. We discover unique double periodicity of the static friction, induced by moire superstructures, with contact size and lack of size scaling for twisted incommensurate polygonal flakes. Notably, we demonstrate that the frictional scaling strongly depends on the relative orientation between the slider edges and the emerging superstructures. Our model systems consist of rigid nanoscale graphene flakes of various shapes [circular, square, triangular, and hexagonal, see Fig. 1 (a)] deposited on a fixed graphene or \(h\)-BN substrate. The polygonal flakes are cut out of an infinite hexagonal lattice with either armchair or zigzag edges (See Supplemental Material (SM) Coordinates file). Interlayer interactions are described by the dedicated anisotropic interlayer potential (ILP) [27, 28, 29] with refined parameters [30]. 
To avoid substrate edge effects and spurious interactions between image flakes, periodic boundary conditions are applied in the lateral directions with a sufficiently large supercell, providing a distance larger than 40 A (more than twice the force-field cutoff of 16 A) between the flake and its periodic images. The flakes are twisted by an angle \(\theta\) with respect to the underlying substrate lattice and are rigidly shifted along the armchair direction of the substrate. We note that for twisted flakes, the sliding direction has no observable effect on the scaling exponent of the static friction with contact size (see SM Sec. 1[31]). The interlayer potential energy profile, and the corresponding total resistive force experienced by the flake are recorded along the sliding path. The static friction force for the rigid sliding process is defined as the maximal resistive force experienced by the flake along the sliding path. More simulation details can be found in SM Sec. 2[31]. By neglecting in-plane elastic deformation effects, we are able to isolate the effects of moire tile incompleteness arising in incommensurate finite contacts of different shapes on the frictional scaling laws. Our simulations show that for the systems considered, the calculated static friction forces obtained for rigid model systems are in good agreement with those obtained for flexible interfaces (see SM Sec. 3[31]). This is in agreement with previous results demonstrating that the rigid flake assumption reproduces well experimental friction results in supported nanoscale graphitic interfaces, where elasticity effects are suppressed [39]. Notably, for the system considered, elasticity effects on the static friction are expected to be significant only at the 10-100 \(\upmu\)m length scale [40, 41] (see SM Sec. 3). Hence, our rigid simulation protocol, which is computationally more efficient, allows us to consider large contact area interfaces without compromising the accuracy [42]. Figure 1: (a) Model systems of circular, square, triangular, and hexagonal graphene flakes deposited on a fixed graphene substrate with a \(5^{\mathrm{o}}\) twist angle. The flakes are rigidly shifted along the armchair direction of the substrate. The color scheme for the flakes (see color bar in panel c), designating the local registry index [39, 43, 44], highlights the moiré superlattices emerging in the twisted interfaces. The cyan colored spheres represent carbon atoms. (b) Illustration of moiré tile compensation at the opposite sides of a rectangular flake (blue semi-circles), occurring when the edge length incorporates approximately an integer number of short moiré periods, \(a_{s}\). (c) Illustration of moiré tile compensation at the same side of a rectangular flake (blue semi-circles), occurring when the edge length incorporates approximately an integer number of long moiré periods, \(a_{L}\). The color scheme on the surface of the flakes in Fig. 1 designates interlayer lattice registry patterns, obtained via the local registry index (LRI) approach [39, 43, 44, 45], which highlight the moire superlattices appearing in the twisted interfaces. 
The period of the moire superstructures, \(a_{m}\), is given by [46, 47]: \[a_{m}=\frac{(1+\delta)a_{\mathrm{gr}}}{\sqrt{2(1+\delta)(1-\cos\theta)+\delta^{ 2}}}, \tag{1}\] and its angle with respect to the zigzag direction of the substrate lattice is given by: \[\psi=\tan^{-1}\left[\frac{(1+\delta)\sin\theta}{(1+\delta)\cos\theta-1}\right] \tag{2}\] where, \(a_{\mathrm{gr}}=2.4602\) A is the period of the hexagonal graphene lattice, \(\theta\) is the twist angle, and \(\delta=a_{\mathrm{sub}}/a_{\mathrm{gr}}-1\) is the mismatch between the lattice constants of the interfacing layers (\(a_{\mathrm{sub}}\) is the lattice constant of the substrate). For the case of a twisted rigid graphitic flake residing on a fixed graphene surface, we have \(\delta=0\), yielding \(a_{m}=\frac{a_{\mathrm{gr}}}{2\sin(\theta/2)}\) and \(\psi=\frac{\pi}{2}+\frac{\theta}{2}\). Naturally, for a given twist angle, all flake types present the same bulk moire superstructure, which is expected to be manifested by similar moire induced frictional characteristics. However, different flake shapes exhibit different incomplete rim moire superlattices along their circumference, which may induce shape and edge orientation dependent frictional scaling behavior with increasing contact size. Figure 2 presents the calculated static friction force as a function of flake size (radius or side length) normalized by the moire period of incommensurate, \(5^{\circ}\) twisted homogeneous graphitic contacts of different shapes. Notably, regardless of the flake shape, the static friction exhibits undulations with the period of the order of moire supercell dimension, consistent with previous predictions [1, 2, 48, 49, 50]. However, the larger scale behavior, dictated by the incomplete rim moire tiles, shows different scaling with contact size for circular shaped flakes compared to that for the polygonal ones. The friction force scaling behavior obtained for the former [Fig. 2(a)] matches well previous results [1, 51], showing an increase with the fourth root of the contact area (\(A^{1/4}\)). For the polygonal shaped flakes [Fig. 2(b)-(d)], on top of the moire-level friction undulations, the static friction force exhibits an additional periodic behavior on an order of magnitude larger length-scale. Surprisingly, in contrast with the case of the circular shaped flakes, no overall increase of friction with contact area is observed for the polygonal shaped ones. Notably, this finding seems to contradict previous predictions of linear scaling of the friction with side length in hexagonal shaped flakes [1]. This linear scaling, however, stems from the fact that in Ref. [1], a twist angle dependent cut was imposed, where the flakes edges were chosen parallel to the moire superlattice axes (see SM Sec. 4[31]). To understand the origin of the double-periodic modulation behavior, we compare the size dependence of the sliding potential energy barrier along the sliding path and the corresponding variations of the global registry index (GRI) [52], a simple geometric measure of interlayer lattice registry (see SM Sec. 5 [31]). The excellent agreement between the two measures indicates that the conditions for vanishing sliding energy barriers and static friction forces have a geometric origin. This can be further quantified by considering local registry index [39, 43, 44] maps [see Fig. 1(b)-(c)] that reveal the central role played by the moire superstructures along the flake sides. 
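To make the moire geometry easy to evaluate, the following short script implements Eqs. (1) and (2). It is an illustrative sketch in plain Python/NumPy, not part of the original analysis; the graphene lattice period is the value quoted above, while the graphene/h-BN mismatch \(\delta\approx 1.8\%\) and the moire periods used for comparison are the values quoted later in this Letter.

```python
import numpy as np

A_GR = 2.4602  # graphene lattice period in angstrom (value quoted in the text)

def moire_period(theta_rad, delta=0.0, a_gr=A_GR):
    """Moire superlattice period a_m of Eq. (1) for twist angle theta and lattice mismatch delta."""
    return (1.0 + delta) * a_gr / np.sqrt(2.0 * (1.0 + delta) * (1.0 - np.cos(theta_rad)) + delta**2)

def moire_angle(theta_rad, delta=0.0):
    """Moire angle psi of Eq. (2); arctan2 resolves the quadrant so psi = 90 deg + theta/2 for delta = 0."""
    return np.arctan2((1.0 + delta) * np.sin(theta_rad), (1.0 + delta) * np.cos(theta_rad) - 1.0)

# Homogeneous graphene/graphene contact, 5 degree twist (delta = 0):
theta = np.deg2rad(5.0)
print(moire_period(theta) / 10.0)           # ~2.8 nm, i.e. a_gr / (2 sin(theta/2))
print(np.rad2deg(moire_angle(theta)))       # ~92.5 degrees, i.e. 90 + theta/2

# Heterogeneous graphene/h-BN contact (delta ~ 1.8%), aligned and 1 degree twisted:
delta = 0.018
print(moire_period(0.0, delta) / 10.0)                    # ~13.9 nm for the aligned case
print(moire_period(np.deg2rad(1.0), delta) / 10.0)        # ~9.9 nm for the 1 degree twist
print(np.rad2deg(moire_angle(np.deg2rad(1.0), delta)))    # ~44.9 degrees
```

The printed values reproduce the moire periods and rotation angles quoted in the text for the homogeneous and heterogeneous interfaces.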
Specifically, two different conditions can be fulfilled in order for the static friction to vanish, which are easiest to demonstrate for the case of square flakes. The first condition is the compensation of incomplete moire tiles on the front and back sides of the sliding flake, which occurs when the distance between these sides is approximately an integer multiple of moire periods in the direction perpendicular to those sides [see Fig. 1(b)], \(a_{s}=\frac{\sqrt{3}a_{m}}{2\cos(\theta/2)}\). This leads to the static friction force short periodicity observed in Fig. 2(c). The second condition corresponds to the incorporation of approximately an integer number of moire superstructures on either the front or back sides of the square flake, leading to "self-compensation" with a longer period [see Fig. 1(c)], written as: \[a_{L}=\frac{\sqrt{3}a_{m}}{2\sin(\theta/2)} \tag{3}\] While similar conditions apply also for other regular polygonal structures, especially those with parallel sides, circular flakes lack straight sides and thus do not exhibit the larger friction oscillation period corresponding to the self-compensation effect. A more quantitative analysis of the discovered double-periodic behavior and size scaling (or lack of) can be obtained via an analytical model that assumes that the interaction between the flake and the substrate is described by a moire induced periodic potential. Treating the flake as a continuum surface, the potential experienced by an infinitesimal surface area of the flake can be approximated as [53, 54, 55, 56, 57] (see SM Sec. 4 [31]): \[\mathrm{d}U=\pm\frac{2}{9}U_{0}\left[2\cos\frac{2\pi x}{\sqrt{3}a_{m}}\cos \frac{2\pi y}{a_{m}}+\cos\frac{4\pi x}{\sqrt{3}a_{m}}\right]\mathrm{d}x \mathrm{d}y, \tag{4}\] where \(U_{0}\) is the amplitude of the potential energy landscape corrugation per unit area, and the plus or minus signs apply for graphene or \(h\)-BN substrates, respectively. Integrating Eq. (4) over the entire flake area \(S_{\mathrm{flake}}(x_{0},y_{0})\), where \((x_{0},y_{0})\) is the geometric center of the flake, yields the shape and position dependent interaction energy between the flake and the substrate, \(E(x_{0},y_{0})=\int_{S_{\mathrm{flake}}(x_{0},y_{0})}\mathrm{d}U\). Considering that the contribution of complete bulk moire superstructures to the total potential variations during sliding vanishes [51], the changes in the corresponding integrated interlayer energy originate entirely from the incomplete rim moire tiles. The derivative of the total energy with respect to \(x_{0}\) (or \(y_{0}\)) gives the resistive force in the armchair (or zigzag) directions, for a given flake displacement, the maximum of which along the sliding path is defined as the static friction force \(F_{\mathrm{g}}\). For circular flakes, this model yields a static friction force of the following form [41, 51, 58] (see SM Sec. 4[31]): \[F_{\mathrm{s}}^{\mathrm{Circ}}(R)=\frac{\alpha a_{m}\pi RU_{0}}{a_{\mathrm{sub}} }\left|J_{1}\left(\frac{4\pi R}{\sqrt{3}a_{m}}\right)\right|, \tag{5}\] where, \(J_{1}(\cdot)\) is the Bessel function of the first kind, \(\alpha\) is a coefficient that depends on the sliding direction [\(\alpha\approx 0.7823\) for the scan line chosen in this study, see Eq. (S5.11)], and \(R\) is the radius of the flake. The dashed line in Fig. 2(a) presents \(F_{\mathrm{s}}^{\mathrm{Circ}}(R)\) calculated according to Eq. (5), showing excellent agreement with the simulation results (open red circles). 
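As an illustration of Eq. (5), the short sketch below evaluates \(F_{\mathrm{s}}^{\mathrm{Circ}}(R)\) using SciPy's Bessel function \(J_{1}\) and the parameter values quoted for Fig. 2 (\(U_{0}=5.85\) meV/Å\({}^{2}\), \(\alpha\approx 0.7823\), \(\theta=5^{\circ}\)); it is not the simulation code used in this work. Since \(|J_{1}(x)|\) is bounded by \(\sqrt{2/(\pi x)}\) for large arguments, the envelope of Eq. (5) grows as \(\sqrt{R}\), i.e. as \(A^{1/4}\).

```python
import numpy as np
from scipy.special import j1

a_gr = 2.4602                              # graphene lattice period (angstrom)
a_sub = a_gr                               # homogeneous graphene/graphene contact
theta = np.deg2rad(5.0)
a_m = a_gr / (2.0 * np.sin(theta / 2.0))   # moire period for delta = 0
U0 = 5.85                                  # meV / angstrom^2 (value used for Fig. 2)
alpha = 0.7823                             # sliding-direction coefficient quoted after Eq. (5)

def F_circ(R):
    """Static friction of a circular flake of radius R (angstrom), Eq. (5); result in meV/angstrom."""
    return alpha * a_m * np.pi * R * U0 / a_sub * np.abs(j1(4.0 * np.pi * R / (np.sqrt(3.0) * a_m)))

R = np.linspace(1.0, 60.0, 2000) * a_m     # radii up to ~60 moire periods
# Large-R envelope, obtained from |J1(x)| <= sqrt(2/(pi x)); it scales as sqrt(R), i.e. A^(1/4):
envelope = alpha * a_m * np.pi * R * U0 / a_sub * np.sqrt(np.sqrt(3.0) * a_m / (2.0 * np.pi**2 * R))

print(F_circ(R).max(), envelope.max())
```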
In this case, only the short periodicity (of the order of the moire superstructure dimensions) prevails, with an envelope that scales asymptotically as \(A^{1/4}\), as expected [see Sec. 4 of the SM [31] for a detailed derivation]. For the polygonal shaped flakes, somewhat more involved static friction expressions are obtained (see SM Sec. 4[31]). For example, for square shaped flakes at small twist angles one gets: \[F_{\mathrm{s}}^{\mathrm{Sq}}(L)\approx\left|\frac{2\sqrt{3}a_{m}^{2}U_{0}}{9\pi a_{\mathrm{sub}}\sin\frac{\theta}{2}\cos\frac{\theta}{2}}\sin\left(\frac{2\pi L\cos\left(\frac{\theta}{2}\right)}{\sqrt{3}a_{m}}\right)\sin\left(\frac{2\pi L\sin\left(\frac{\theta}{2}\right)}{\sqrt{3}a_{m}}\right)\right|. \tag{6}\] Corresponding expressions for triangular and hexagonal flakes are presented in SM Sec. 4[31]. For all polygonal shaped flakes considered, excellent agreement is found between the theoretical model results (black dashed lines in Fig. 2) and the simulation results (open red circles). A qualitatively different behavior of the short period oscillations is found for the triangular flake [Fig. 2(b)], where the absence of parallel sides leads to less efficient cancellation of incomplete moire superlattices. As a result, the static friction force is not fully eliminated in the lower envelope of the short period oscillations. For the square [Fig. 2(c)] and hexagonal [Fig. 2(d)] flakes, whose sides are parallel, the compensation is efficient and the static friction force vanishes in the lower envelope of the oscillations. Notably, the model prediction for the asymptotic \((L/a_{m}\gg 1)\) behavior of the envelopes of \(F_{\mathrm{s}}(L)\) for the polygonal flakes reads as follows: \[F_{\mathrm{s}}^{\mathrm{env}}(L)\propto\left|\sin\left(\frac{2\pi L\sin\left(\frac{\theta}{2}\right)}{\sqrt{3}a_{m}}\right)\right|, \tag{7}\] where \(L\) is the side length of the flake. This expression clearly demonstrates that for the polygonal shaped flakes considered, the static friction force does not grow overall with the flake side length. Moreover, they present the same long period of \(\frac{\sqrt{3}a_{m}}{2\sin\left(\theta/2\right)}\) [see Fig. 2(b)-(d)], reflecting the universal self-compensation of incomplete moire tiles at sides oriented at an angle of \(\theta/2\) to the moire lattice directions. We note that when considering friction as a function of contact area, the long modulation periods become shape-dependent. Naturally, this arises from pure geometric considerations relating the side length to the regular polygon area, \(L=2\sqrt{A\cdot\tan(\pi/n)/n}\). Furthermore, to verify that our findings are not limited to regular polygonal shaped flakes, we performed additional simulations, accompanied by theoretical model predictions, for irregular shaped flakes (see SM Sec. 6[31]). The results show that the predicted static friction force long-period modulations and the lack of frictional scaling with system size are robust and expected also for irregular polygonal shaped flakes. Nonetheless, the introduction of curved edges, whose curvature varies with flake size, results in frictional scaling with an exponent of \(1/4\), reminiscent of the case of circular flakes (see SM Sec. 7[31]). The qualitative nature of the double-periodic behavior remains unchanged with increasing twist angle, as long as the moire superstructure dimensions are substantially larger than the lattice constant and smaller than the side length of the flake.
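The double periodicity of Eq. (6) and the bounded envelope of Eq. (7) can be checked numerically in the same way; the sketch below (again illustrative only, using the Fig. 2 parameters) prints the short and long periods \(a_{s}\) and \(a_{L}\) and confirms that the maxima of \(F_{\mathrm{s}}^{\mathrm{Sq}}(L)\) do not grow with the side length \(L\).

```python
import numpy as np

a_gr = 2.4602
a_sub = a_gr
theta = np.deg2rad(5.0)
a_m = a_gr / (2.0 * np.sin(theta / 2.0))
U0 = 5.85  # meV / angstrom^2

def F_square(L):
    """Static friction of a square flake of side L (angstrom) at small twist angles, Eq. (6)."""
    pref = 2.0 * np.sqrt(3.0) * a_m**2 * U0 / (
        9.0 * np.pi * a_sub * np.sin(theta / 2.0) * np.cos(theta / 2.0))
    return np.abs(pref
                  * np.sin(2.0 * np.pi * L * np.cos(theta / 2.0) / (np.sqrt(3.0) * a_m))
                  * np.sin(2.0 * np.pi * L * np.sin(theta / 2.0) / (np.sqrt(3.0) * a_m)))

a_s = np.sqrt(3.0) * a_m / (2.0 * np.cos(theta / 2.0))   # short period (front/back compensation)
a_L = np.sqrt(3.0) * a_m / (2.0 * np.sin(theta / 2.0))   # long period, Eq. (3) (self-compensation)

L = np.linspace(1.0, 40.0, 4000) * a_m
F = F_square(L)
# The maxima over small-L and large-L windows are comparable, i.e. no overall growth (gamma = 0):
print(a_s, a_L, F[L < 10 * a_m].max(), F[L > 30 * a_m].max())
```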
Due to the moire superlattice size reduction, both periodicities and the friction amplitude decrease with increasing twist angle (see Fig. 3 for square-flake results). As may be expected, the short periodicity, \(a_{s}\), which is directly related to the moire superstructure dimensions, scales as \(a_{m}\propto\sin^{-1}\left(\frac{\theta}{2}\right)\). As per Eqs. (3) and (6), the scaling of the long periodicity, \(a_{L}\), is \(a_{m}^{2}\) and that of the friction amplitude is \(a_{m}^{3}\) (see SM Sec. 4[31]). Figure 2: Size dependence of the static friction of (a) circular, (b) triangular, (c) square, and (d) hexagonal \(5^{\circ}\) twisted rigid graphene flakes sliding along the armchair direction of a fixed graphene surface. Red circles represent simulation results and black dashed lines correspond to the theoretical predictions [Eq. (5) in panel (a), Eq. (S5.15) in panel (b), Eq. (6) in panel (c), and Eq. (S5.19) in panel (d)] obtained using \(U_{0}=5.85\) meV/Å\({}^{2}\). The blue solid lines represent the envelopes of the friction curves obtained from the theoretical expressions [Eq. (S5.14) in panel (a) and Eq. (7) in panels (b)-(d)]. \(R,~{}L\) and \(a_{m}\) are the radius and side length of the flake and the period of the moiré superlattices, respectively. The revealed double-periodic frictional behavior and the lack of frictional size-scaling for polygonal structures are not limited to homogeneous graphitic interfaces. To demonstrate this, we repeated our calculations for the heterogeneous interface of graphene and hexagonal boron nitride (_h_-BN). The intrinsic lattice mismatch (\(\delta\approx\)1.8%) of the two materials gives interfacial incommensurability also in the aligned configuration, with a moire superstructure period of \(a_{m}\approx\) 13.9 nm, leading to ultralow friction at any twist angle [29, 59, 60, 61]. This allows us to study also aligned contacts while avoiding high-friction commensurate states. Figure 4 compares friction results for aligned and 1\({}^{\circ}\) twisted circular and square shaped graphitic flakes sliding along the armchair direction of the underlying rigid _h_-BN substrate. Similar to the case of homogeneous circular interfaces, the circular shaped heterogeneous junctions exhibit periodic oscillations with an envelope scaling of \(F_{\mathrm{S}}\propto A^{1/4}\) [matching Eq. (5)] for both the aligned (a) and twisted (b) configurations. The heterogeneous square interfaces exhibit qualitatively different frictional size scaling for the aligned (c) and twisted (d) configurations. The \(\theta=1^{\circ}\) twisted system presents double-periodic behavior, similar to that of the homogeneous square interface, with quantitative differences that originate from the large rotation (\(\psi=44.9^{\circ}\)) of the moire superstructure (see SM Sec. 4[31]). The aligned square interface, whose sides are parallel to the moire superstructure and therefore lack self-compensation of incomplete moire tiles, exhibits only the short-period oscillations, with an envelope that scales as \(F_{\mathrm{s}}\propto A^{1/2}\), reminiscent of previous results [50]. Figure 3: Size dependence of the static friction of square rigid graphene flakes sliding along the armchair direction of a fixed graphene surface at twist angles of 5\({}^{\circ}\) (blue circles), 10\({}^{\circ}\) (red squares), 15\({}^{\circ}\) (green triangles), and 20\({}^{\circ}\) (black diamonds), respectively. Open circles, dashed lines, and solid lines represent results of MD simulations, theoretical predictions [Eq. (6)], and the envelope curves [Eq. (7)] obtained using the same parameters as in Fig. 2. \(L\) is the side length of the flake. We note that the analytical model predicts only double periodicity for the three equilateral polygonal shapes investigated. Nonetheless, additional periodicities may occur for asymmetric polygonal flakes due to the combined effect of different periods associated with the various flake sides. Interestingly, the physical origin of these periodic behaviors is different from that predicted for the interwall sliding barrier of DWNTs, where the long period appears when the translational vectors of the inner and outer tube walls have a common divisor (or nearly so) [24-26]. To put our results into context, we note that existing experimental measurements suggested a power law scaling of the friction (mainly kinetic) with contact area, \(F_{k}\propto A^{\gamma}\), but with relatively wide scatter of the reported data [1-3,7,11,14-16,18,50,59,62]. Theoretical studies further attributed different values of \(\gamma\) to the shape of the sliding nanoflakes and their relative orientation with respect to the underlying layered material substrate [15, 18, 19]. Figure 4: Size dependence of the static friction of (a, b) circular and (c, d) square graphene flakes sliding in the (a, c) aligned or (b, d) \(1^{\circ}\) twisted configurations along the armchair direction of an \(h\)-BN substrate. Red circles represent simulation results and black dashed lines correspond to the theoretical model predictions [Eq. (5) in panels (a)-(b), Eq. (S5.22) in panel (c), and Eq. (S5.8) in panel (d)] obtained using \(U_{0}=4.5\) meV/Å\({}^{2}\). The blue solid lines represent the envelopes of the friction curves obtained from the theoretical expressions [Eq. (S5.14) in panels (a) and (b), and Eq. (S5.22) in panel (c)]. \(R,L\) and \(a_{m}\) are the circle radius, side length of the square flake, and the period of the moiré superlattices, respectively. The latter is \(a_{m}=13.9\) nm and \(9.9\) nm for \(\theta=0^{\circ}\) and \(1^{\circ}\), respectively. Our results show that the scaling of static friction with contact area in layered interfaces strongly depends on the shape of the slider and the specific orientation in which it is cut with respect to the emerging interfacial moire superstructures. This may lead to various scaling behaviors, with \(\gamma=0\) for twisted polygonal flakes with edges that do not coincide with the moire superlattice, \(0.25\) for circular shaped flakes, and \(0.5\) when the edges of polygonal flakes are parallel to the moire superstructure. Since the static friction forces obtained in our rigid flake calculations are in good agreement with those obtained for flexible interfaces [42] (see SM Sec. 3[31]), and since the latter serve as an upper limit for the corresponding kinetic friction of these systems, we expect a strong dependence of the kinetic friction scaling on these factors as well. This, in turn, may partially rationalize the wide scattering of results observed in experiments measuring the size dependence of friction. Other factors, including edge chemical contamination, poor control over the twist angle [3, 63, 64] (see also SM Sec. 4.1[31]), as well as elastic effects [40], may further contribute to the experimentally observed data scattering.
Therefore, when setting out to explore the size dependence of friction in layered interfaces, one should carefully consider the shape of the studied contacts, their orientation, twist angle, and sliding direction. This should allow for unveiling the predicted novel tribological phenomena, including multiple periodicities and lack of size scaling, thus opening the way for obtaining large-scale superlubricity via shape tailoring. ## Acknowledgements W. O. and Z. L. would like to acknowledge support from the National Natural Science Foundation of China (Nos. 12102307, 12172260 and 11890673), the Key Research and Development Program of Hubei Province (2021BAA192), the Natural Science Foundation of Hubei Province (2021CFB138), the Fundamental Research Funds for the Central Universities (2042023kf0233) and the starting-up fund of Wuhan University. X. G. acknowledges the postdoctoral fellowships of the Sackler Center for Computational Molecular and Materials Science and the Ratner Center for Single Molecule Science at Tel Aviv University. M. U. acknowledges the financial support of the Israel Science Foundation, Grant No. 1141/18 and the ISF-NSFC joint Grant No. 3191/19. O. H. is grateful for the generous financial support of the Israel Science Foundation under Grant No. 1586/17, the Heineman Chair in Physical Chemistry, Tel Aviv University Center for Nanoscience and Nanotechnology, and the Naomi Foundation for generous financial support via the 2017 Kadar Award.
2305.15340
Bayesian calibration of differentiable agent-based models
Agent-based modelling (ABMing) is a powerful and intuitive approach to modelling complex systems; however, the intractability of ABMs' likelihood functions and the non-differentiability of the mathematical operations comprising these models present a challenge to their use in the real world. These difficulties have in turn generated research on approximate Bayesian inference methods for ABMs and on constructing differentiable approximations to arbitrary ABMs, but little work has been directed towards designing approximate Bayesian inference techniques for the specific case of differentiable ABMs. In this work, we aim to address this gap and discuss how generalised variational inference procedures may be employed to provide misspecification-robust Bayesian parameter inferences for differentiable ABMs. We demonstrate with experiments on a differentiable ABM of the COVID-19 pandemic that our approach can result in accurate inferences, and discuss avenues for future work.
Arnau Quera-Bofarull, Ayush Chopra, Anisoara Calinescu, Michael Wooldridge, Joel Dyer
2023-05-24T16:52:32Z
http://arxiv.org/abs/2305.15340v1
# Bayesian calibration of differentiable agent-based models ###### Abstract Agent-based modelling (abms) is a powerful and intuitive approach to modelling complex systems; however, the intractability of abms' likelihood functions and the non-differentiability of the mathematical operations comprising these models present a challenge to their use in the real world. These difficulties have in turn generated research on approximate Bayesian inference methods for abms and on constructing differentiable approximations to arbitrary abms, but little work has been directed towards designing approximate Bayesian inference techniques for the specific case of differentiable abms. In this work, we aim to address this gap and discuss how generalised variational inference procedures may be employed to provide misspecification-robust Bayesian parameter inferences for differentiable abms. We demonstrate with experiments on a differentiable abm of the COVID-19 pandemic that our approach can result in accurate inferences, and discuss avenues for future work. ## 1 Introduction Agent-based models (abms) are growing in popularity as a modelling paradigm for complex systems in various fields, such as economics (Baptista et al., 2016; Paulin et al., 2019) and epidemiology (Aylett-Bullock et al., 2021). Such models simulate the interactions and decisions of a set of autonomous entities, where the rules governing those decisions and interactions are often nonlinear and stochastic. While this modelling approach provides considerable flexibility to the modeller, the complex structure and stochastic nature of many abms raise two key difficulties in deploying them in practice: * the likelihood function of the abm - denoted with \(p(\mathbf{x}\mid\boldsymbol{\theta})\), where \(\mathbf{x}\) is the model output and \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\subseteq\mathbb{R}^{d}\) are the \(d\)-dimensional free parameters of the model - is typically intractable, which complicates the problem of calibrating the model's free parameters; * the mathematical expressions comprising the abm's specification are typically non-differentiable, which presents a barrier to the use of gradient-based methods in problems such as model calibration. Over recent years, two active lines of research have emerged that seek to address each of these problems: a growing literature on approximate parameter inference techniques for abms, which has become increasingly focused on Bayesian methods (see e.g. Grazzini et al., 2017; Platt, 2021; Dyer et al., 2022a); and the development of techniques for building differentiable approximations to initially non-differentiable abms (Chopra et al., 2022b; Monti et al., 2022). However, the combination of Bayesian parameter calibration methods and differentiable abms has not yet been considered. In this paper, we examine the problem of performing approximate Bayesian parameter inference for differentiable abms, and consider an approach that exploits the differentiability of the agent-based simulator. We discuss how the approach we consider - which is derived from the literature on generalised Bayesian inference (see e.g. Bissiri et al., 2016; Knoblauch et al., 2022) - may enjoy favourable robustness properties in comparison to alternative techniques, enabling the differentiable abm to be applied more successfully in misspecified settings. ## 2 Background ### Simulation-based Bayesian inference for agent-based models Simulation-based inference (sbi) algorithms are a set of procedures for performing parameter inference for simulation models, such as abms.
Bayesian approaches to sbi for abms have gained popularity over recent years with the use of approximate Bayesian computation (abc) (Tavare et al., 1997; Pritchard et al., 1999; Beaumont et al., 2002), see e.g. Grazzini et al. (2017); van der Vaart et al. (2015). Broadly speaking, abc targets an approximate posterior \(q_{\textsc{abc}}\) of the form \[q_{\textsc{abc}}(\mathbf{\theta}\mid\mathbf{s}_{\mathbf{y}})\propto\hat{p}_{\textsc{abc}}(\mathbf{s}_{\mathbf{y}}\mid\mathbf{\theta})\,\pi(\mathbf{\theta}), \tag{1}\] where \(\mathbf{s}_{\mathbf{x}}\) is some summary statistic of \(\mathbf{x}\), \(\hat{p}_{\textsc{abc}}(\mathbf{s}_{\mathbf{y}}\mid\mathbf{\theta})\) is some approximation to the model's likelihood function, and \(\pi\) is a prior density over \(\mathbf{\Theta}\). A variety of choices for \(\hat{p}_{\textsc{abc}}(\mathbf{s}_{\mathbf{y}}\mid\mathbf{\theta})\) have been explored in the literature; see Grazzini et al. (2017) and Platt (2021) for examples and Dyer et al. (2022a) for a broader review. Beyond abc, Dyer et al. (2022a) introduces and motivates the use of a class of more recently developed sbi procedures. These approaches offer considerable benefits, such as a vastly reduced simulation burden compared to alternative techniques, and the ability to flexibly accommodate data of different kinds (e.g. dynamic graph data, see Dyer et al. (2022b)) - but their performance can deteriorate in misspecified settings (Cannon et al., 2022), motivating research into methods for improving their robustness (Ward et al., 2022; Kelly et al., 2023). ### Differentiable agent-based models Differentiable agent-based models (Gradabms) are a recent class of tensorized abms that offer a number of benefits, such as compatibility with gradient-based learning via automatic differentiation, and the ability to simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks, and ingest heterogeneous data sources. Recent prior work has demonstrated the utility of Gradabms for scalable and fast forward simulations (Chopra et al., 2021), end-to-end calibration by coupling with neural networks (Chopra et al., 2022a), as well as efficient validation using gradient-based sensitivity analyses (Quera-Bofarull et al., 2023). Direct access to simulator gradients in Gradabms allows the modeller to leverage a statistical model for calibration without the need to build approximate surrogates. While this has been shown to help integrate diverse data sources to enable more accurate calibration, prior work has only leveraged such gradients to generate point estimates of parameters. However, maximising the real-world utility of Gradabms will also require uncertainty quantification during model calibration tasks; this motivates the use of Bayesian approaches for model calibration. ## 3 Method To perform approximate Bayesian inference for a differentiable agent-based simulator, we consider a simulation-based Bayesian inference procedure with appealing robustness properties. In particular, we find a posterior density \(q\) as the solution to a generalised variational inference (gvi) problem, which has been shown to possess favourable robustness properties in comparison to methods that target the classical Bayesian posterior (Knoblauch et al., 2022).
This enables us to obtain practically useful updates to the initial belief distribution, captured by the prior density \(\pi\), when the agent-based model is not a perfect representation of the true data-generating mechanism (as is the case in reality). Gvi proceeds by first establishing a triple consisting of: a lower-bounded loss function/scoring rule \(\ell:\mathcal{X}\times\mathbf{\Theta}\to\mathbb{R}_{\geq 0}\) capturing some notion of discrepancy between the observed data \(\mathbf{x}\) and the behaviour of the simulator at parameter value \(\mathbf{\theta}\); a divergence \(D\) between the posterior \(q\) and prior \(\pi\); and a search space \(\mathcal{Q}\) of permitted solutions for \(q\). Bayesian inference is then performed by solving the following optimisation problem: \[q^{*}=\arg\min_{q\in\mathcal{Q}}\mathcal{L}(q),\qquad\mathcal{L}(q):=\mathbb{E}_{q(\mathbf{\theta})}\left[\ell(\mathbf{x},\mathbf{\theta})\right]+D\left(q\|\pi\right). \tag{2}\] The solution \(q^{*}\) is the generalised Bayesian posterior, while the _classical_ Bayesian posterior is obtained with \(\ell(\mathbf{x},\mathbf{\theta})=-\log p(\mathbf{x}\mid\mathbf{\theta})\) and \(D\) as the Kullback-Leibler divergence \(D_{\mathrm{KL}}\). To construct a flexible class of feasible solutions, we take \(\mathcal{Q}=\{q_{\phi}:\phi\in\Phi\}\), where \(q_{\phi}\) is a normalising flow with trainable parameters \(\phi\) taking values in some set \(\Phi\). The flow parameters are then found as \[\phi^{*}=\arg\min_{\phi\in\Phi}\Bigg{\{}\mathbb{E}_{q_{\phi}(\mathbf{\theta})}\left[\ell(\mathbf{x},\mathbf{\theta})\right]+D\left(q_{\phi}\|\pi\right)\Bigg{\}}. \tag{3}\] In this way - owing to the ability to backpropagate pathwise gradients through the Gradabm - we can use the misspecification-robust inference framework of gvi to calibrate the model parameters without recourse to potentially high-variance score-based gradient estimators (Mohamed et al., 2020). A schematic of our method is shown in Figure 1. Figure 1: A schematic of the method we employ. The posterior density estimator \(q_{\phi}\) is encouraged to remain similar to the prior \(\pi\) through some divergence \(D\) (dashed arrows), while also being encouraged to generate simulations from the abm that closely match the data \(\mathbf{x}\) to which the model is being calibrated (solid arrows). The overall loss is a linear combination of these contributing terms and is used to inform the shape of \(q_{\phi}\) (dotted arrow). ## 4 Experiments We evaluate the proposed inference procedure by calibrating Gradabm-June (Quera-Bofarull, 2023), the differentiable version of the June model (Aylett-Bullock et al., 2021). We use Gradabm-June to model the spread of SARS-CoV-2 in the London Borough of Camden, comprising approximately 250,000 people. For this experiment, we restrict ourselves to modelling the spread of infection in households, companies, and schools, each of which is regulated by the \(\beta\)-parameters that parameterize the spread of infection at different locations (see Appendix A.1 for details). We create a synthetic ground-truth time-series of SARS-CoV-2 infections for 30 days using the values \(\beta_{\mathrm{household}}=0.9\), \(\beta_{\mathrm{school}}=0.6\), and \(\beta_{\mathrm{company}}=0.3\), and obtain a posterior density by training a Neural Spline Flow (NSF) (Durkan et al., 2019).
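To make the procedure concrete, below is a minimal, self-contained sketch of the optimisation in Equation (3). It is not the Gradabm-June code: the toy differentiable simulator, the Gaussian prior, and the mean-field Gaussian variational family standing in for the neural spline flow are all illustrative assumptions. The structure, however, follows the text: reparameterised (pathwise) samples of \(\mathbf{\theta}\) are pushed through a differentiable simulator, a simulation-matching loss of the form of Equation (4) is combined with a KL penalty towards the prior, and the variational parameters are updated by gradient descent.

```python
import torch

torch.manual_seed(0)

# Toy differentiable "simulator": a noisy log-infection curve controlled by two
# positive parameters (illustrative stand-ins for the beta parameters of the abm).
T = 30
def simulate(theta):                       # theta: (..., 2)
    t = torch.arange(T, dtype=torch.float32)
    growth = theta[..., :1] * 0.2 + theta[..., 1:] * 0.1
    return growth * t + 0.05 * torch.randn(*theta.shape[:-1], T)

# Pseudo-true data generated at known parameters (analogue of the synthetic ground truth).
theta_true = torch.tensor([0.9, 0.3])
x_obs = simulate(theta_true)

# Mean-field Gaussian q_phi(theta); a normalising flow would be used in the paper.
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
prior = torch.distributions.Normal(torch.zeros(2), torch.ones(2))

opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)
w = 10.0                                   # weight balancing the loss and the divergence, cf. Eq. (4)

for step in range(500):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    theta = q.rsample((64,))               # pathwise / reparameterised samples of theta
    loss_sim = ((simulate(theta) - x_obs) ** 2).sum(-1).mean() / w   # Monte Carlo estimate of E_q[ell]
    kl = torch.distributions.kl_divergence(q, prior).sum()           # D(q || pi) with D = KL
    loss = loss_sim + kl                   # objective of Eq. (2)/(3)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu.detach(), log_sigma.exp().detach())
```

In this sketch the gradients of the simulation-matching term flow through the simulator itself, which is the property that a differentiable abm provides and a generic black-box simulator does not.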
We take \[\ell\left(\mathbf{x},\mathbf{\theta}\right)=\mathbb{E}_{\tilde{\mathbf{x}}\sim p\left(\cdot|\mathbf{\theta}\right)}\left[\sum_{t=1}^{T}\frac{\|\mathbf{x}_{t}-\tilde{\mathbf{x}}_{t}\|^{2}}{w}\right] \tag{4}\] in the loss function (Equation 2), where \(\mathbf{x}_{t}\) is the logarithm of the number of infections per time-step and the hyperparameter \(w>0\) balances the influence of the scoring rule relative to the divergence term. We further choose \(D=D_{\mathrm{KL}}\) for simplicity. Details of the neural network architecture and additional training hyperparameters are provided in Appendix A.2. In this way, we obtain a posterior density over \(\mathbf{\theta}\), which we show in Figure 2. We see that the flow assigns high posterior density to the generating parameters, suggesting that the flow has assigned posterior mass to appropriate regions of \(\mathbf{\Theta}\). Of particular interest is the 2-dimensional projection of the density in the household-company and school-company plane, where we observe a trade-off between the contact intensity at households and companies. Indeed, it may be hard to distinguish where exactly infections are taking place when only the _overall_ number of cases is observed. We also show in Figure 3 a comparison between the pseudo-true dataset used to calibrate the model and simulations from the abm generated by parameters drawn from the prior, the untrained flow, and the trained flow. From this, we see that the trained flow generates simulations that match the pseudo-true data much better than simulations generated by the prior or untrained flows, suggesting that our inference scheme has been successful in this model calibration task. Overall, the number of simulations required to train the flow was 2,500, which is small in comparison to e.g. abc. Figure 2: The inferred posterior distribution over the three calibrated \(\beta\) parameters (\(\beta_{\mathrm{household}},~{}\beta_{\mathrm{school}}~{}\) and \(\beta_{\mathrm{company}}\)). The marginal densities are shown on the diagonal, and the off-diagonals show the bivariate joint densities for all pairs of parameters. The parameters that generated the pseudo-true synthetic dataset are shown with blue lines and points. Figure 3: A comparison of the pseudo-true dataset (black curve) to simulations (blue curves) generated by parameters drawn from the prior density (**left**), the untrained normalising flow (**middle**), and the trained normalising flow (**right**). ## 5 Discussion & conclusion In this paper, we consider how the task of Bayesian parameter calibration may be performed for differentiable abms. We discuss how the ability to backpropagate gradients through the agent-based simulator in a pathwise manner provides us with immediate access to a class of Bayesian inference methods known as generalised variational inference, and propose an approach drawn from this class of methods due to the fact that they may remedy misspecification-related problems more readily than existing approximate Bayesian inference methods for abms. Through experiments with Gradabm-June, a differentiable abm of the COVID-19 pandemic in England, we demonstrate that our approach can provide accurate Bayesian inferences. We aim to develop this work into a full paper, at which point we will release the code for reproducing these results. In future, we will test this method on real-world data and compare its performance against alternative sbi techniques.
We will also extend this work to the case of multiple observed _iid_ datasets \(\mathbf{x}\) from some real-world density \(p(\mathbf{x})\) by considering the case of a conditional density estimator (e.g. a conditional normalising flow) and by training instead on the following loss function: \[q^{*}=\arg\min_{q\in\mathcal{Q}}\Bigg{\{}\mathbb{E}_{q(\mathbf{\theta}|\mathbf{x}) p(\mathbf{x})}\left[\ell(\mathbf{x},\mathbf{\theta})\right]+\mathbb{E}_{p(\mathbf{x})} \left[D\left(q(\cdot\mid\mathbf{x})\|\pi(\cdot)\right)\right]\Bigg{\}}. \tag{5}\] Once again, the choice \(\ell(\mathbf{x},\mathbf{\theta})=-\log p(\mathbf{x}\mid\mathbf{\theta})\) and \(D=D_{\mathrm{KL}}\) will yield classical Bayesian posteriors, while other choices generate generalised posteriors. This may enable us to deploy the same conditional density estimator and abm over a variety of scenarios covered by the density \(p(\mathbf{x})\), without retraining.
2310.11390
Irregular proton injection to high energies at interplanetary shocks
How thermal particles are accelerated to suprathermal energies is an unsolved issue, crucial for many astrophysical systems. We report novel observations of irregular, dispersive enhancements of the suprathermal particle population upstream of a high-Mach number interplanetary shock. We interpret the observed behavior as irregular "injections" of suprathermal particles resulting from shock front irregularities. Our findings, directly compared to self-consistent simulation results, provide important insights for the study of remote astrophysical systems where shock structuring is often neglected.
Domenico Trotta, Timothy S. Horbury, David Lario, Rami Vainio, Nina Dresing, Andrew Dimmock, Joe Giacalone, Heli Hietala, Robert F. Wimmer-Schweingruber, Lars Berger, Liu Yang
2023-10-17T16:43:41Z
http://arxiv.org/abs/2310.11390v1
# Irregular proton injection to high energies at interplanetary shocks ###### Abstract How thermal particles are accelerated to suprathermal energies is an unsolved issue, crucial for many astrophysical systems. We report novel observations of irregular, dispersive enhancements of the suprathermal particle population upstream of a high-Mach number interplanetary shock. We interpret the observed behavior as irregular "injections" of suprathermal particles resulting from shock front irregularities. Our findings, directly compared to self-consistent simulation results, provide important insights for the study of remote astrophysical systems where shock structuring is often neglected. Acceleration of particles -- plasmas -- shock waves -- Sun: heliosphere -- Sun: solar wind ## 1 Introduction Collisionless shock waves are fundamental sources of energetic particles, which are ubiquitously present in our universe and pivotal to explain many of its features, such as the non-thermal radiation emission common to many astrophysical sources, as revealed by decades of remote and direct observations (Reames, 1999; Amato & Blasi, 2018). Particle acceleration to suprathermal energies from thermal plasma, less understood than particle acceleration starting from an already energised population, remains a puzzle, and has been the object of extensive theoretical and numerical investigations (Drury, 1983; Caprioli & Spitkovsky, 2014; Trotta et al., 2021). Shocks in the heliosphere, unique in being directly accessible by spacecraft (Richter et al., 1985), provide the missing link to remote observations of astrophysical systems. Direct observations of the Earth's bow shock using single and multi-spacecraft approaches (e.g., Johlander et al., 2016) reveal a complex scenario of energy conversion and particle acceleration at the shock transition (Amano et al., 2020; Schwartz et al., 2022). The emerging picture, well supported by theory and modelling, is that small scale irregularities in the spatial and temporal evolution of the shock environment (Greenstadt et al., 1980; Matsumoto et al., 2015) are fundamental for efficient ion injection to high energies (Dimmock et al., 2019). This idea of irregular particle injection has been investigated in the past for the Earth's bow shock (Madanian et al., 2021) and in numerical simulations (Guo & Giacalone, 2013), indicating that particle behaviour at shocks is much more complex than expected when space-time irregularities are neglected, as was assumed in early theoretical and numerical works (Decker, 1990; Ao et al., 2008; Lu et al., 2009). Such a complex picture is not as well observed and understood for shocks beyond the Earth's bow shock. In particular, shock structuring at Interplanetary (IP) shocks, generated as a consequence of phenomena such as Coronal Mass Ejections (CMEs, Gosling et al., 1974), and its role in particle acceleration remain elusive (Blanco-Cano et al., 2016; Kajdic et al., 2019).
IP shocks are generally weaker and have larger radii of curvature compared to Earth's bow shock, allowing for direct observations of collisionless shocks in profoundly different regimes (e.g., Kilpua et al., 2015; Yang et al., 2020), and are more relevant to astrophysical environments such as galaxy cluster shocks, where shock irregularities are not resolved, but they are likely to play a crucial role in efficient particle acceleration (Brunetti & Jones, 2014). Therefore, the study of particle injection at IP shocks is fundamental to test our current understanding built on Earth's bow shock, as well as for addressing shocks at objects currently beyond reach. This paper demonstrates that, in order to address the suprathermal particle production upstream of supercritical collisionless shocks, the inherent variability of the injection process in both time and space must be taken into account. The Solar Orbiter mission (SolO, Muller et al., 2020) probes the inner heliosphere with unprecedented levels of time-energy resolution for energetic particles, thus opening a new observational window for particle acceleration. In this work, we study the acceleration of low-energy (\(\sim 1\) keV) particles to supra-thermal energies (\(\sim 50\) keV) at a strong IP shock observed by SolO at a heliocentric distance of about 0.8 AU on 2021 October 30\({}^{\rm th}\) at 22:02:07 UT. We use the SupraThermal Electrons and Protons sensor (STEP) of the Energetic Particle Detector (EPD) suite (Rodriguez-Pacheco et al., 2020), measuring particles in the 6 - 60 keV energy range (close to the injection range), at the very high time resolution of 1 s, close to suprathermal particle gyroscales. Our work exploits such novel, previously unavailable datasets for suprathermal particles upstream of IP shocks. We resolve upstream enhancements in the suprathermal particle population with dispersive velocity signatures, and link them to irregular proton injection along the shock front. Our findings are corroborated by kinetic simulations showing similar irregular proton energization upstream close to the shock, thus elucidating the mechanisms responsible for this behaviour. This letter is organised as follows: results are presented in Section 2. SolO observations are shown and discussed in Section 2.1, while modelling results are reported in Section 2.2. The conclusions are in Section 3. ## 2 Results ### Solar Orbiter Observations Fig. 1 shows a 30 minute overview across the shock transition. Panels (a)-(b) reveal the presence of shock accelerated particles at energies of up to 100 keV, while particle fluxes at higher energies do not respond to the shock passage. At these high energies the fluxes were enhanced following a large Solar Energetic Particle (SEP) event (see Klein et al., 2022). The most striking feature of the period prior to the shock arrival at SolO is the irregular energetic particle enhancements particularly evident at 10 - 30 keV energies (Fig. 1 (b), black box), found in the time interval \(\sim\) 15 minutes before the shock crossing, corresponding to \(2\times 10^{5}\) km or 2500 ion inertial lengths, \(d_{i}\). These particle enhancements have the novel feature of being dispersive in energy and are the focus of this work. The typical timescales at which the irregularities are observed are 10-20 seconds, corresponding to spatial scales of about 50 \(d_{i}\). Such signatures were previously inaccessible to observations, as shown in Fig.
1 (c), where the time profile of ion differential flux in the 0.012 - 0.015 MeV channel, rising exponentially up to the shock (Giacalone, 2012), is shown at full resolution (blue) and averaged using a \(\sim\) 1 minute window, typical of previous IP shock measurements. Fig. 1(d) shows pitch angle intensities for 0.011 - 0.019 MeV ions (i.e., energies at which the irregular enhancements are observed). Pitch angles are computed in the plasma rest frame assuming that all ions are protons, and performing a Compton-Getting correction (Compton and Getting, 1935), thereby combining magnetic field data from the magnetometer (MAG, Horbury et al., 2020), solar wind plasma data from the Proton and Alpha particle Sensor (PAS) on the Solar Wind Analyser (SWA) instrument suite (Owen et al., 2020), and particle data from EPD/STEP (Yang, L. et al., 2023). For the interval studied, low pitch angles are in the 30\({}^{\circ}\) field of view of STEP, relevant for shock reflected particles. The irregular enhancements of energetic particles are field aligned, as is evident for the strongest signal close to the shock transition. The flux enhancement visible in PAS (Fig. 1(e)) at lower energies starting immediately before the shock (22:00 UT) also reveals a field-aligned population. The study of the PAS low-energy population and the behaviour very close to the shock transition is the object of another investigation (Dimmock et al., 2023). Figure 1: Event overview. (a) EPD-Electron Proton Telescope (EPT) particle flux (sunward aperture). (b) EPD-STEP particle flux (magnet channel averaged over the entire field of view). (c) Pitch angle distributions for ions with an energy of 0.011 - 0.019 MeV in the spacecraft frame. (d) Time profile of the STEP energy flux in the 0.012 - 0.015 MeV energy channel at full resolution (blue), and time-averaged using a 1 minute window. (e) SWA-PAS ion energy flux (Owen et al., 2020). (f) SWA-PAS proton density. (g) MAG burst magnetic field data in RTN coordinates (Horbury et al., 2020). The magenta line marks the shock crossing, and the black rectangle highlights the dispersive energetic particle enhancements observed by STEP. Differential fluxes are in E\({}^{2}\cdot\) cm\({}^{-2}\)s\({}^{-1}\)sr\({}^{-1}\)MeV for the EPD instruments and cm\({}^{-2}\)s\({}^{-1}\)eV for PAS. The magnetic field reveals a wave foreshock \(\sim 2\) minutes upstream of the shock, in conjunction with a population of low-energy (\(\sim 4\) keV) reflected particles seen by SWA/PAS, visible as the light blue enhancement in Fig. 1(e) around 22:00 UT. Interestingly, the magnetic field is quieter where signals of irregular injection are found, indicating that efficient particle scattering may be reduced in this region (Lario et al., 2022). In this "quiet" shock upstream, we found two structures compatible with shocklets in the process of steepening (\(\sim\)21:57 UT), very rarely observed at IP shocks (Wilson et al., 2009; Trotta et al., 2023). The shock parameters were estimated using upstream/downstream averaging windows varied systematically between 1 and 8 minutes (Trotta et al., 2022). The shock was oblique, with a normal angle \(\theta_{Bn}=44\pm 1.5^{\circ}\) (obtained with the Mixed Mode 3 technique (MX3; Paschmann and Schwartz, 2000), compatible with MX1,2 and Magnetic Coplanarity). The shock speed in the spacecraft frame and along the shock normal is \(\rm V_{shock}=400\pm 5\,km/s\).
The shock Alfvenic and fast magnetosonic Mach numbers are \(\rm M_{A}\sim 7.6\) and \(\rm M_{fms}\sim 4.6\), respectively. Thus, the event provides us with the opportunity to study a shock with particularly high Mach number in comparison with other IP shocks, while the shock speed is moderate with respect to typical IP shocks (Kilpua et al., 2015). The shock is supercritical, and therefore expected to have a corrugated, rippled front (Trotta and Burgess, 2019; Kajdic et al., 2021). The presence of reflected particles, enhanced wave activity in close proximity (1 minute) to the shock transition and upstream shocklets in the process of steepening is consistent with the local shock parameters (Blanco-Cano et al., 2016). To further elucidate the dispersive nature of the suprathermal particles, we show the STEP energy spectrogram in \(1/v\) vs \(t\) space (Fig. 2). Here, particle speeds are referred to the center of the relative energy bin and computed in the spacecraft rest frame, assuming that all particles detected are protons (see Wimmer-Schweingruber et al., 2021, for further details). During the period of irregular particle enhancements, we also combined magnetic field and plasma data to compute the particle pitch angles in the solar wind frame (Compton and Getting, 1935), revealing that the particles detected by STEP are closely aligned with the field (not shown here). Interestingly, by visual inspection, it can be seen that these dispersive signals are shallower going far upstream, consistent with the fact that they are injected from more distant regions of the shock. The dispersive flux enhancements are associated with irregular acceleration of protons along the shock front. Indeed, due to their dispersive nature, the particles detected by STEP cannot be continuously produced at the shock and propagated upstream, but they must come from a source that is only temporarily magnetically connected to the spacecraft due to time and/or space irregularities. Then, the fastest particles produced at the irregular source are detected first by the spacecraft, followed by the slower ones, yielding the observed dispersive behaviour. Given the short timescales at which energetic particle enhancements are observed with respect to the shock and the quiet behaviour of upstream magnetic field in the 10 minutes upstream of the shock, we assume that particles do not undergo significant scattering from their (irregular) production to the detection at SolO. It is then natural to investigate the connection with the shock. The bottom-left panel of Fig. 2 shows the local \(\theta_{Bn}(t)\equiv\cos^{-1}\left(\mathbf{B}(t)\cdot\hat{\mathbf{n}}_{\rm shock }/|\mathbf{B}(t)|\right)\) changing significantly when the dispersive signals are ob served, indicating that the spacecraft was indeed connected to different portions of the (corrugated) shock front, which in turn is expected to respond rapidly to upstream changes, as recent simulation work elucidated (e.g., Trotta et al., 2023). Note that, given the single-spacecraft nature of the observations, the average shock normal computed with MX3 for both local and average \(\theta_{Bn}\)estimation was used. To further support this idea, similarly to Velocity Dispersion Analyses (VDA) used to determine the injection time of SEP events (e.g., Lintunen and Vainio, R., 2004; Dresing et al., 2023), we chose the clearest dispersive signal (\(\sim 100\) seconds upstream of the shock) and we superimpose the following relation (indicated by the magenta line in Fig. 
2): \[t_{\rm O}(v)=t_{i}+\frac{s}{v}, \tag{1}\] where \(t_{\rm O}\) represents the time at which the flux enhancement is observed for a certain speed \(v\), \(t_{i}\) is the time of injection at the source, and \(s\) is the distance travelled by the particles from the source to the spacecraft. Thus, the argument is that the dispersive signals are due to accelerated particles produced by different portions of the shock front temporarily connected with the spacecraft, as sketched in Fig. 2 (right). We note that, due to the very high energy-time resolution of STEP, it was possible to perform the VDA on such small (\(\sim\)seconds) time scales. Determining \(t_{i}\) based on the time when the highest energy particles are observed (\(t_{i}\sim-130s\)), the source distance that we obtain through Equation 1 is \(s\approx 4\times 10^{4}\) km (\(\sim 500d_{i}\)), compatible with their generation at the approaching shock, for which we would expect \(s\sim{\rm V_{shock}\Delta t/sin(\theta_{Bn})}\), where \({\rm V_{shock}}\) is the average shock speed, \(\Delta t\) is the time delay between the observation of the dispersive signal and the shock passage. This is also compatible with the fact that the other dispersive signals observed further upstream, such as the one before 21:54, about 500 seconds upstream of the shock (see Fig. 2), show a shallower inclination, though a more precise, quantitative analysis of this behaviour is complicated by the high noise levels of the observation, and will be the object of later statistical investigation employing more shock candidates (Yang, L. et al., 2023). ### Shock Modelling Figure 2: _Left_: Spectrogram of the irregular signal in seconds from shock vs \(1/v\) axes, with the velocity dispersion shown by the solid magenta line (top). Time series showing the local \(\theta_{Bn}(t)\) angle. The red and grey dashed lines represent the average \(\theta_{Bn}\) and a \(90^{\circ}\) angle, respectively (bottom). _Right_: Cartoon showing the corrugated shock front with local shock normal, trajectory of a reflected particle and the Solar Orbiter trajectory (SolO model: esa.com). Further insights about shock front irregularities are limited by the single-spacecraft nature of these observations. Therefore, we employ 2.5-dimensional kinetic simulations, with parameters compatible with the observed ones, to model the details of the shock transition, where proton injection to suprathermal energies takes place, relevant to our interpretation of the dispersive signals and enabling us to see how the shock surface and normal behave at small scales (see Fig. 2). In the simulations, protons are modelled as macroparticles and advanced with the Particle-In-Cell (PIC) method, while the electrons are modelled as a massless, charge-neutralizing fluid (Trotta et al., 2020). In the model, distances are normalised to the ion inertial length \(d_{i}\), times to the upstream inverse cyclotron frequency \(\Omega_{ci}{}^{-1}\), velocity to the Alfven speed \(v_{A}\), and the magnetic field and density to their upstream values \(B_{0}\) and \(n_{0}\). The shock is launched with the injection method (Quest, 1985), where an upstream flow speed \(V_{\rm in}=4.5v_{A}\) was chosen, corresponding to \(M_{A}\sim 6\). The shock nominal \(\theta_{Bn}\) is 45\({}^{\circ}\). The simulation domain is 512 \(d_{i}\)\(\times\) 512 \(d_{i}\), with resolution \(\Delta x=\Delta y=0.5\)\(d_{i}\) and a particle time-step \(\Delta t_{pa}=0.01\)\(\Omega_{ci}^{-1}\). 
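For orientation, the normalisations used in the model can be converted into physical numbers with a few lines of code; the upstream values below are illustrative placeholders for typical solar wind conditions near 0.8-1 AU, not the measured parameters of this event.

```python
import numpy as np

# Illustrative upstream conditions (placeholders, not the measured event values).
B0 = 10e-9            # magnetic field [T]
n0 = 10e6             # proton number density [m^-3]

mu0 = 4e-7 * np.pi    # vacuum permeability [H/m]
m_p = 1.67262192e-27  # proton mass [kg]
q_p = 1.60217663e-19  # proton charge [C]

v_A = B0 / np.sqrt(mu0 * n0 * m_p)   # Alfven speed [m/s]
omega_ci = q_p * B0 / m_p            # proton cyclotron frequency [rad/s]
d_i = v_A / omega_ci                 # ion inertial length [m]

# With these placeholder numbers, 2500 d_i is of the same order as the ~2e5 km
# upstream interval quoted earlier, and the 512 d_i x 512 d_i domain can be
# expressed in kilometres as well:
print(v_A / 1e3, "km/s", d_i / 1e3, "km", 2500 * d_i / 1e3, "km", 512 * d_i / 1e3, "km")
```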
The number of particles per cell used is always greater than 300. This choice of parameters is compatible with the local properties of the IP shock as estimated from the SolO measurements. However, the inherent variability routinely found in the simulations at small scales and in the observations at larger scales must be considered when comparing numerical and observational results. We note that these simulations are initialised with a laminar upstream, and therefore the fluctuations that impact the shock are self-generated (due to particle reflection and subsequent upstream propagation). An exhaustive characterization of these self-induced fluctuations is discussed in Kajdic et al. (2021). Simulation results are shown in Fig. 3. In the top panel, we present the proton density for a simulation snapshot where the shock transition is well-developed, showing the strongly perturbed character of the shock front. In such an irregular shock transition, particle dynamics become extremely complex (e.g., Lembege and Savoini, 1992). To further elucidate the irregularities of the shock front, we computed the shock position in the simulation domain (with the criterion \(B>3B_{0}\), as in Trotta et al. (2023b)) and evaluated the local \(\theta_{Bn}\) along it (Fig. 3(a), inset), showing high variability (see the sketch in Fig. 2). Figure 3: _Top_: Simulation snapshot of proton density (colormap). The inset shows a zoom around the shock transition (grey), and the local shock position is superimposed, with a colormap corresponding to the local \(\theta_{Bn}\). _Bottom_: Density map of upstream suprathermal protons (colormap) and magnetic field lines (magenta) computed at the same simulation time as panel (a). The inset shows the upstream particle energy spectrum, with the dashed blue lines indicating the suprathermal energy range considered. In the bottom panel of Fig. 3, we study the self-consistently shock-accelerated protons. The upstream energy spectrum is shown in the inset, with a peak at the inflow population energies and a suprathermal tail due to the accelerated protons. To address particle injection, we analyse the upstream spatial distribution of such suprathermal protons (Fig. 3(b)) at the energies highlighted in the inset, which are a factor of 10 larger than the typical energies of particles in the upstream inflow population, in a similar fashion to the energy separation between the STEP energies at which the irregular enhancements are observed (\(\sim 10\) keV) and the solar wind population energies measured by PAS (\(\sim 1\) keV). It can be seen that the suprathermal particles are not distributed uniformly, and their spatial distribution varies with their location along the shock front, another indication of irregular injection. Furthermore, we observed that the length scale of the irregularities is about 50 \(d_{i}\), directly comparable with the irregularities seen in the STEP fluxes (see Fig. 1). Higher energy particles also show irregularities. ## 3 Conclusions We studied irregular particle acceleration from the thermal plasma using novel SolO observations. Particle injection to high energies is an extremely important issue for a large collection of astrophysical systems, making the SolO shock on 2021 October 30\({}^{\rm th}\) an excellent event to tackle this interesting problem. The capabilities of the SolO EPD suite were exploited to probe the complex shock front behaviour in the poorly investigated IP shock case.
From this point of view, _in-situ_ observations of irregular particle enhancements have been used as a tool to address the (remote) structuring of the shock, information that is not available by simply looking at the spacecraft shock crossing at a single point in space and time. Such an approach is reminiscent of the ones used to reconstruct the properties of SEP events (Krucker et al., 1999), and even of those probing the properties of the heliospheric termination shock with the Interstellar Boundary Explorer mission (IBEX, McComas et al., 2009), where particles produced at different portions of the shock are used to understand its dynamics (Zirnstein et al., 2022). The hybrid kinetic simulations, with their irregularly distributed suprathermal particles along the shock front, are consistent with this complex scenario of proton acceleration and provide an invaluable tool to elucidate the small-scale behaviour of this IP shock and of shock transitions in a variety of astrophysical systems. Our model highlights the very small-scale behaviour of the shock, but neglects other effects like pre-existing turbulence and interplanetary disturbances that may be important (Lario and Decker, 2002; Trotta et al., 2022; Nakanotani et al., 2022; Trotta et al., 2023). The direct investigation of shock acceleration in systems other than the Earth's bow shock (which has a small radius of curvature and many other properties specific to planetary bow shocks) is important to build a comprehensive understanding of collisionless shock energetics. This work significantly strengthens an evolving theory of collisionless shock acceleration. Combining high-resolution energetic particle data upstream of heliospheric shocks with hybrid simulations, we have shown, for interplanetary shocks, that the inherent variability of the injection process in both time and space must be considered to solve the problem of how suprathermal particle injection occurs in astrophysical systems. The process analysed here is general, as it does not depend on how the shock irregularities are generated. Indeed, this study is relevant for astrophysical systems where shock front irregularities cannot be resolved but are likely to play an important role for particle acceleration from the thermal distribution, such as galaxy cluster shocks, where efficient particle acceleration, which is inferred to happen at very large, \(\sim\) Mpc scales, remains a puzzle, particularly in the absence of pre-existing cosmic rays (Botteon et al., 2020). This study has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101004159 (SERPENTINE, www.serpentine-h2020.eu). Part of this work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk), under the project "dp031 Turbulence, Shocks and Dissipation in Space Plasmas". N.D. acknowledges the support of the Academy of Finland (SHOCKSEE, grant nr. 346902). H.H. is supported by the Royal Society University Research Fellowship URF\(\backslash\)R1\(\backslash\)180671. D.L. acknowledges support from NASA Living With a Star (LWS) program NNH19ZDA001N-LWS, and the Goddard Space Flight Center Heliophysics Innovation Fund (HIF) program.
2306.13261
Heat kernel estimate for the Laplace-Beltrami operator under Bakry-Émery Ricci curvature condition and applications
We establish a Gaussian upper bound of the heat kernel for the Laplace-Beltrami operator on complete Riemannian manifolds with Bakry-\'Emery Ricci curvature bounded below. As applications, we first prove an L^1-Liouville property for non-negative subharmonic functions when the potential function of the Bakry-\'Emery Ricci curvature tensor is of at most quadratic growth. Then we derive lower bounds of the eigenvalues of the Laplace-Beltrami operator on closed manifolds. An upper bound of the bottom spectrum is also obtained.
Xingyu Song, Ling Wu, Meng Zhu
2023-06-23T02:11:17Z
http://arxiv.org/abs/2306.13261v2
Heat kernel estimate for the Laplace-Beltrami operator under Bakry-Emery Ricci curvature condition and applications ###### Abstract. We establish a Gaussian upper bound of the heat kernel for the Laplace-Beltrami operator on complete Riemannian manifolds with Bakry-Emery Ricci curvature bounded below. As applications, we first prove an \(L^{1}\)-Liouville property for non-negative subharmonic functions when the potential function of the Bakry-Emery Ricci curvature tensor is of at most quadratic growth. Then we derive lower bounds of the eigenvalues of the Laplace-Beltrami operator on closed manifolds. An upper bound of the bottom spectrum is also obtained. ## 1. Introduction Let \((M^{n},g)\) be an \(n\)-dimensional Riemmanian manifold. The Bakry-Emery Ricci curvature tensor of \(M\) ([1]) is defined as \[\operatorname{Ric}_{f}:=\operatorname{Ric}+\operatorname{Hess}f, \tag{1.1}\] where \(f\) is a smooth function on \(M\) (called the potential function), and \(\operatorname{Ric}\) and \(\operatorname{Hess}f\) denote the Ricci curvature tensor and the hessian of \(f\), respectively. It is clear that when \(f\) is a constant, \(\operatorname{Ric}_{f}\) reduces to the Ricci curvature tensor. Also, manifolds with lower Bakry-Emery Ricci curvature bound are closely related to the singularity analysis of the Ricci flow, Ricci limit spaces, and stationary black holes (see e.g., [17, 32, 36, 37, 28, 12]). Therefore, many efforts have been made in extending the results under Ricci curvature condition to Bakry-Emery Ricci curvature condition. Since \(\operatorname{Ric}_{f}\) appears in a Bochner type formula for the weighted Laplace operator \(\Delta_{f}:=\Delta-\langle\nabla f,\nabla\rangle\) (see [1]) which is self-adjoint with respect to the weighted measure \(e^{-f}dv\) on \(M\), the Bakry-Emery Ricci curvature can be considered as the "Ricci curvature" for the smooth metric measure space \((M^{n},g,e^{-f}dv)\). Here \(\Delta\) is the Laplace-Beltrami operator, \(\nabla f\) is the gradient of \(f\), and \(dv\) is the volume form of \(g\). Thus, while doing analysis under Bakry-Emery Ricci curvature condition, one would naturally consider replacing \(\Delta\) and \(dv\) with the weighted Laplace operator \(\Delta_{f}\) and volume form \(e^{-f}dv\) (see e.g., [1, 27, 40, 11, 30, 26, 39, 42, 44, 45]). Let us mention that there are many similar studies under assumptions on the \(m\)-Bakry-Emery Ricci curvature, \(\operatorname{Ric}_{f}^{m}=\operatorname{Ric}+\operatorname{Hess}f-\frac{ \nabla f\otimes\nabla f}{m-n}\), where \(m\in(n,+\infty)\) is a constant (see e.g., [2, 23, 25, 38, 41]). The Bakry-Emery Ricci curvature corresponds to the case \(m=+\infty\), and hence it is also called the \(\infty\)-Bakry-Emery Ricci curvature. In this paper, we treat the Bakry-Emery Ricci curvature as a generalization of the Ricci curvature on \((M^{n},g)\) with the original volume measure \(dv\), and study the properties of elliptic and the heat equations related to the Laplace-Beltrami operator. This type of study has previously been conducted by Q. Zhang and the third author, and was applied in the extension of the Cheeger-Colding-Naber theory and the proof of compactness theorems for gradient Ricci solitons (see [49], [50], [16], [18]). First, we investigate estimates of the heat kernel. On manifolds with Ricci curvature bounded below, there have already been a long history and many classical results about the heat kernel estimates, the readers may refer to [10, 13, 35] and the references therein. 
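For orientation, it helps to keep in mind the shape that such Gaussian estimates take in the model situations; the following are standard facts recalled only for comparison. On Euclidean space \(\mathbb{R}^{n}\) the heat kernel of \(\Delta\) is \[H(x,y,t)=(4\pi t)^{-\frac{n}{2}}e^{-\frac{|x-y|^{2}}{4t}},\] and on a complete manifold with \(\operatorname{Ric}\geq 0\) the classical Li-Yau upper bound states that, for any \(\epsilon>0\), \[H(x,y,t)\leq\frac{C(n,\epsilon)}{\operatorname{Vol}(B_{\sqrt{t}}(x))}e^{-\frac{d^{2}(x,y)}{(4+\epsilon)t}}.\] The main result below (Theorem 1.1) establishes an estimate of exactly this Gaussian shape, with the assumption \(\operatorname{Ric}\geq 0\) relaxed to a lower bound on \(\operatorname{Ric}_{f}\), at the cost of an extra factor controlled by \(Kt\) and the size of the potential function \(f\).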
For manifolds with Bakry-Emery Ricci curvature, or \(m\)-Bakry-Emery Ricci curvature bounded below, the heat kernel estimates for the weighted Laplace operator \(\Delta_{f}\) have been studied in many literatures recently (see [34, 23, 24, 43, 44, 45]). Our first main result is a Gaussian upper bound of the heat kernel of \(\Delta\) depending on the lower bound of the Bakry-Emery Ricci curvature and the bound of the potential function \(f\). **Theorem 1.1**.: _(Gaussian upper bound of the heat kernel) Let \((M^{n},g)\) be a complete Riemannian manifold and \(H(x,y,t)\) the heat kernel of \(\Delta\) on \(M\times M\times(0,+\infty)\). Pick a fixed point \(o\in M\) and \(R>0\). Suppose that \(\mathrm{Ric}_{f}\geq-Kg\) on \(B_{3R}(o)\) for some constant \(K\geq 0\). Then for any \(\epsilon>0\), there exist constants \(C_{1}(n,\epsilon)\) and \(C_{2}(n)\), such that_ \[H(x,y,t)\leq\frac{C_{1}(n,\epsilon)e^{C_{2}(n)(Kt+L(R))}}{\mathrm{Vol}(B_{ \sqrt{t}}(x))^{\frac{1}{2}}\mathrm{Vol}(B_{\sqrt{t}}(y))^{\frac{1}{2}}}e^{ \left(-\frac{d^{2}(x,y)}{(4+\epsilon)t}\right)} \tag{1.2}\] _for all \(x,y\in B_{\frac{1}{2}R}(o)\) and \(0<t<R^{2}/4\), where \(L(R)=\sup_{B_{3R}(o)}|f|\) and \(\lim_{\epsilon\to 0}C_{1}(n,\epsilon)=\infty\)._ **Remark 1.2**.: _Note that in general the heat kernel \(H(x,y,t)\) of the Laplace-Beltrami operator \(\Delta\) and the heat kernel \(H_{f}(x,y,t)\) of the weighted Laplace operator \(\Delta_{f}\) are not equivalent. Hence, one cannot expect to derive (1.2) directly from the estimates of \(H_{f}(x,y,t)\). Indeed, on the Euclidean space \((\mathbb{R}^{n},g_{0})\) with \(f=x_{1}\), we have (see e.g. [44])_ \[H_{f}(x,y,t)=\frac{e^{\frac{2(x_{1}+y_{1})-t}{4}}}{(4\pi t)^{-\frac{n}{2}}}e ^{-\frac{|x-y|^{2}}{4t}}=e^{\frac{2(x_{1}+y_{1})-t}{4}}H(x,y,t).\] The idea of the proof is standard. A key ingredient is a relative volume comparison theorem (Theorem 2.1 below) under the assumptions in the theorem above. Using the volume comparison, we can derive a Sobolev inequality, from which a parabolic mean value property follows. Then the mean value property and Davies' double integral estimate will imply the heat kernel upper bound. With the help of the heat kernel upper bound, one can prove an \(L^{1}\)-Liouville property for non-negative subharmonic functions. On manifolds with Ricci curvature bounded below, this was first proved by Li-Schoen [21] and Li [19], where the lower bound of the Ricci curvature is allowed to have certain decay. We show that the same result still holds on manifolds with Bakry-Emery Ricci curvature bounded below. **Theorem 1.3**.: _Let \((M^{n},g)\) be a complete noncompact Riemannian manifold with \(\mathrm{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Assume that there exist non-negative constants \(a\) and \(b\) such that_ \[|f(x)|\leq ar^{2}(x)+b\ for\ all\ x\in M, \tag{1.3}\] _i _where \(r(x)=d(x,o)\) is the geodesic distance function to a fixed point \(o\in M\). Then any non-negative \(L^{1}\)-integrable subharmonic function on \(M\) must be identically constant. In particular, any \(L^{1}\)-integrable harmonic function must be identically constant._ **Remark 1.4**.: _In [47], Yau obtained the \(L^{\infty}\) Liouville property of harmonic functions on manifolds with nonnegative Ricci curvature. 
Later, Yau [48] showed that for \(1<p<+\infty\) the \(L^{p}\) Liouville property for subharmonic functions actually holds without any curvature assumption._ _The \(L^{1}\) Liouville property for subharmonic functions with respect to the weighted Laplace operator \(\Delta_{f}\) on manifolds with Bakry-Emery Ricci curvature (or \(m\)-Bakry-Emery Ricci curvature, resp.) bounded below was proved by Wu-Wu [45] (X. Li [23], resp.)._ The conditions in the above theorem are especially satisfied on so called gradient Ricci solitons. A gradient Ricci soliton is a Riemannian manifold \((M^{n},g)\) with constant Bakry-Emery Ricci curvature, namely, \[\operatorname{Ric}+\operatorname{Hess}f=\lambda g \tag{1.4}\] for some constant \(\lambda\). It is called a shrinking, steady, or expanding Ricci soliton when \(\lambda>0\), \(=0\), or \(<0\), respectively. For gradient Ricci solitons, it is well known that \[S+|\nabla f|^{2}=2\lambda f+C, \tag{1.5}\] where \(S\) is the scalar curvature of \(M\). Then from the results in [8] and [5], we know that \(|f|\) is at most of quadratic and linear growth on gradient shrinking and steady Ricci solitons, respectively. This implies that (1.3) holds. For gradient expanding solitons, it is showed in [33] that \(S\geq n\lambda\), then (1.5) implies that \(\left|\nabla\sqrt{-f-\frac{C-n\lambda}{2\lambda}}\right|\leq\sqrt{-\frac{ \lambda}{2}}\). Hence \[\sqrt{-f(x)-\frac{C-n\lambda}{2\lambda}}\leq\sqrt{-\frac{\lambda}{2}}r(x)+ \sqrt{-f(o)-\frac{C-n\lambda}{2\lambda}},\] and the conditions in Theorem 1.3 continue to hold with \(f\) replaced by \(\tilde{f}=f(x)+\frac{C-n\lambda}{2\lambda}\). Therefore, Theorem 1.3 implies the following Liouville theorem for complete gradient Ricci solitons. **Theorem 1.5**.: _Let \((M^{n},g)\) be a complete gradient Ricci soliton. Then any non-negative \(L^{1}\)-integrable subharmonic function on \(M\) must be identically constant. In particular, any \(L^{1}\)-integrable harmonic function on \(M\) must be identically constant._ **Remark 1.6**.: _In [29], Munteanu-Sesum proved that on a gradient shrinking Kahler-Ricci soliton or a gradient steady Ricci soliton, harmonic functions with finite energy must be constant._ By Theorem 1.3, following the arguments of Li in [19] we can prove a uniqueness theorem for \(L^{1}\)-solutions of the heat equation. **Theorem 1.7**.: _Let \((M^{n},g)\) be a complete noncompact Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Assume that there exist non-negative constants \(a\) and \(b\) such that_ \[|f|(x)\leq ar^{2}(x)+b\ for\ all\ x\in M.\] _If \(u(x,t)\) is a non-negative function defined on \(M\times[0,+\infty)\) satisfying_ \[\left(\Delta-\partial_{t}\right)u(x,t)\geq 0,\ \int_{M}u(x,t)dv<+\infty\] _for all \(t>0\), and \(\lim\limits_{t\to 0}\int_{M}u(x,t)=0\), then \(u(x,t)\equiv 0\)._ _In particular, any \(L^{1}\)-solution of the heat equation is uniquely determined by its initial data in \(L^{1}\)._ Another application of the heat kernel estimate is that one can get Li-Yau type [22] lower bound estimates for the eigenvalues of \(\Delta\) on closed manifolds. More precisely, we show that **Theorem 1.8**.: _Let \((M^{n},g)\) be a closed Riemannian manifold with \(\mathrm{Ric}_{f}\geq-Kg\) and \(|f|\leq L\). Let \(0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{k}\leq\cdots\) be the eigenvalues of the Laplace-Beltrami operator. 
Then there exist constants \(C_{3}(n)\) and \(C_{4}(n)\) such that_ \[\lambda_{k}\geq\frac{C_{3}(n)(k+1)^{\frac{2}{n}}}{D^{2}}e^{-C_{4}(n)(KD^{2}+L)} \tag{1.6}\] _for all \(k\geq 1\), where \(D\) is an upper bound of the diameter of \(M\)._ Finally, to get an upper bound of \(\lambda_{1}\), we may relax the requirements on the manifold and \(f\). Indeed, we can obtain a Cheng type upper bound of the bottom spectrum \(\mu_{1}(\Delta)\) of \(\Delta\) on complete manifolds with \(f\) being of at most linear growth. **Theorem 1.9**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\mathrm{Ric}_{f}\geq-(n-1)Kg\) for some constant \(K\geq 0\). Assume that there exist non-negative constants \(\tilde{a}\) and \(\tilde{b}\) such that_ \[|f|(x)\leq\tilde{a}r(x,o)+\tilde{b}\ for\ all\ x\in M.\] _Then we have_ \[\mu_{1}(\Delta)\leq\frac{1}{4}\left(2\tilde{a}+(n-1)\sqrt{K}\right)^{2}.\] _In particular, if \(f\) is of sublinear growth, then the bottom spectrum of the Laplacian has the following sharp upper bound:_ \[\mu_{1}(\Delta)\leq\frac{1}{4}\left(n-1\right)^{2}K.\] When \(f\) is constant, i.e, \(\mathrm{Ric}\geq-(n-1)Kg\), Theorem 1.9 reduces to Cheng's theorem (see Theorem 4.2 in [4]). This incidentally indicates that our estimate is sharp. The rest of the paper is organized as follows. In section 2, we derive a Laplacian and volume comparison theorem for complete Riemannian manifolds with Bakry-Emery Ricci curvature bounded below. Using the comparison theorem, we get Poincare and Sobolev inequalities. Section 3 is devoted to the proofs of a mean value inequality for non-negative subsolutions of the heat equation and the Gaussian upper bound (1.2) of the heat kernel. The \(L^{1}\)-Liouville property for non-negative subharmonic functions and the uniqueness theorem for \(L^{1}\)-solutions of the heat equation will be obtained in section 4. Then for the purpose of comparison, we take a detour in section 5 to discuss an \(L^{\infty}\)-Liouville property for harmonic function with polynomial growth. Finally, lower bounds of the eigenvalues and an upper bound of the bottom spectrum of the Beltrami-Laplace operator are shown in section 6. ## 2. Poincare and Sobolev inequalities In this section, we derive local Poincare and Sobolev inequalities which are instrumental in proving the main results of the paper. A crucial step is to show the following volume comparison theorem, the idea of whose proof is similar to [46], [30] and [49], but our result is slightly more general. For a fixed point \(o\in M\) and \(R>0\), we define \[L(R)=\sup_{x\in B_{3R}(o)}|f(x)|, \tag{2.1}\] where \(B_{3R}(o)\) is the geodesic ball centered at \(o\in M\) with radius \(3R\). **Theorem 2.1**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Then the following conclusions are true. (a)(Laplacian comparison) Let \(r=d(y,p)\) be the distance from any point \(y\) to some fixed point \(p\in B_{R}(o)\) with \(0<r<R\). Then for \(0<r_{1}<r_{2}<R\), we have_ \[\int_{r_{1}}^{r_{2}}(\Delta r-\frac{n-1}{r})dr\leq\frac{K}{6}(r_{2}^{2}-r_{1}^ {2})+6L(R). \tag{2.2}\] _(b)(Volume element comparison)Take any point \(p\in B_{R}(o)\) and denote the volume form in geodesic polar coordinates centered at \(p\) with \(J(r,\theta,p)drd\theta\), where \(r>0\) and \(\theta\in S_{p}(M)\), a unit tangent vector at \(p\). 
Then for \(0<r_{1}<r_{2}<R\), we have_ \[\frac{J(r_{2},\theta,p)}{J(r_{1},\theta,p)}\leq\left(\frac{r_{2}}{r_{1}}\right) ^{n-1}e^{\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R)}. \tag{2.3}\] _(c)(Volume comparison)For any \(p\in B_{R}(o),\ 0<r_{1}<r_{2}<R\), we have_ \[\frac{\operatorname{Vol}(B_{r_{2}}(p))}{\operatorname{Vol}(B_{r_{1}}(p))} \leq\left(\frac{r_{2}}{r_{1}}\right)^{n}e^{\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6 L(R)}, \tag{2.4}\] _where \(\operatorname{Vol}(.)\) denotes the volume of a region._ _Proof of part (a)_ Let \(r=d(y,p)\) be the distance from any point \(y\) to some fixed point \(p\in B_{R}(o)\) with \(0<r<R\) and \(\gamma:[0,r]\to M\) a normal minimal geodesic with \(\gamma(0)=p\) and \(\gamma(r)=y\). Then we know \(\gamma(t)\subset B_{3R}(o)\) and \(y\in B_{3R}(o)\). From the Bochner formula, we have \[0=\frac{1}{2}\Delta|\nabla r|^{2}=|\operatorname{Hess}r|^{2}+\langle\nabla \Delta r,\nabla r\rangle+\operatorname{Ric}(\partial r,\partial r).\] By using Cauchy-Schwarz inequality \(|\operatorname{Hess}r|^{2}\geq\frac{(\Delta r)^{2}}{n-1}\), it yields \[\frac{\partial}{\partial r}(\Delta r)+\frac{(\Delta r)^{2}}{n-1}\leq- \operatorname{Ric}(\partial r,\partial r),\] which is equivalent to \[\frac{1}{r^{2}}\frac{\partial}{\partial r}(r^{2}\Delta r)+\frac{1}{n-1}\left( \Delta r-\frac{n-1}{r}\right)^{2}\leq\frac{n-1}{r^{2}}-\operatorname{Ric}( \partial r,\partial r). \tag{2.5}\] Multiplying both sides of (2.5) by \(r^{2}\) and integrating from \(0\) to \(r\), we get \[\Delta r\leq\frac{n-1}{r}-\frac{1}{r^{2}}\int_{0}^{r}t^{2}\operatorname{Ric}( \gamma^{\prime}(t),\gamma^{\prime}(t))dt. \tag{2.6}\] We observe that \[\operatorname{Ric}(\gamma^{\prime}(t),\gamma^{\prime}(t)) \geq-K-\operatorname{Hess}f(\gamma^{\prime}(t),\gamma^{\prime}(t))\] \[=-K-\left\langle\nabla_{\gamma^{\prime}(t)}\nabla f,\gamma^{ \prime}(t)\right\rangle\] \[=-K-\gamma^{\prime}(t)\left\langle\nabla f,\gamma^{\prime}(t) \right\rangle+\left\langle\nabla f,\nabla_{\gamma^{\prime}(t)}\gamma^{\prime} (t)\right\rangle\] \[=-K-f^{\prime\prime}(t),\] where \(f(t):=f(\gamma(t))\). Hence, (2.6) becomes \[\Delta r-\frac{n-1}{r} \leq\frac{1}{r^{2}}\int_{0}^{r}\left[Kt^{2}+t^{2}f^{\prime\prime} (t)\right]dt \tag{2.7}\] \[=\frac{1}{r^{2}}\left[\frac{K}{3}r^{3}+r^{2}f^{\prime}(r)-2\int_ {0}^{r}f^{\prime}(t)tdt\right]\] \[=\frac{K}{3}r+f^{\prime}(r)-\frac{2}{r}f(r)+\frac{2}{r^{2}}\int_ {0}^{r}f(t)dt.\] For \(0<r_{1}<r_{2}<R\), integrating (2.7) from \(r_{1}\) to \(r_{2}\) yields \[\int_{r_{1}}^{r_{2}}\left(\Delta r-\frac{n-1}{r}\right)dr \leq\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+\int_{r_{1}}^{r_{2}}\left(f^{ \prime}(r)-\frac{2}{r}f(r)\right)dr-2\int_{r_{1}}^{r_{2}}\int_{0}^{r}f(t)dtd \frac{1}{r} \tag{2.8}\] \[=\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+f(r_{2})-f(r_{1})-2\int_{r_{1}} ^{r_{2}}\frac{f(r)}{r}dr\] \[\quad-2\left[\frac{1}{r}\int_{0}^{r}f(t)dt\right]_{r_{1}}^{r_{2}} -\int_{r_{1}}^{r_{2}}\frac{f(r)}{r}dr\right]\] \[=\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+f(r_{2})-f(r_{1})-\frac{2\int_ {0}^{r_{2}}f(t)dt}{r_{2}}+\frac{2\int_{0}^{r_{1}}f(t)dt}{r_{1}}\] \[\leq\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R).\] _Proof of part (b)_ By the first variation of the area \[\Delta r=\frac{J^{\prime}(r,\theta,p)}{J(r,\theta,p)},\] where \(r(y)=d(y,p)\). 
For \(0<r_{1}<r_{2}<R\), integrating this from \(r_{1}\) to \(r_{2}\) we get \[\int_{r_{1}}^{r_{2}}\frac{\partial}{\partial r}\ln J(r,\theta,p)dr\leq\int_{ r_{1}}^{r_{2}}\frac{n-1}{r}dr+\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R).\] Then we have \[\frac{J(r_{2},\theta,p)}{J(r_{1},\theta,p)}\leq\left(\frac{r_{2}}{r_{1}} \right)^{n-1}e^{\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R)}.\] _Proof of part (c)_ For \(0<r_{1}<r_{2}<R\), we have \[\frac{\operatorname{Vol}(B_{r_{2}}(p))}{\operatorname{Vol}(B_{r_{1}}(p))}=\frac{ \int_{0}^{r_{2}}\int_{S^{n-1}}J(r,\theta,p)d\theta dr}{\int_{0}^{r_{1}}\int_{S^{ n-1}}J(r,\theta,p)d\theta dr}=\frac{\frac{r_{2}}{r_{1}}\int_{0}^{r_{1}}\int_{S^{n-1}}J( \frac{r_{2}}{r_{1}}t,\theta,p)d\theta dt}{\int_{0}^{r_{1}}\int_{S^{n-1}}J(t, \theta,p)d\theta dt},\] where \(S^{n-1}\) denotes the unit sphere in \(\mathbb{R}^{n}\) and \(d\theta\) is its volume element. By volume element comparison, for \(0<t<r_{1}\), we have \[J\left(\frac{r_{2}}{r_{1}}t,\theta,p\right) \leq J(t,\theta,p)\left(\frac{r_{2}}{r_{1}}\right)^{n-1}e^{\frac {K}{6}\left(\frac{r_{2}^{2}}{r_{1}^{2}}t^{2}-t^{2}\right)+6L(R)}\] \[\leq J(t,\theta,p)\left(\frac{r_{2}}{r_{1}}\right)^{n-1}e^{\frac {K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R)}.\] Integrating this gives that \[\frac{\operatorname{Vol}(B_{r_{2}}(p))}{\operatorname{Vol}(B_{r_{1}}(p))}\leq \left(\frac{r_{2}}{r_{1}}\right)^{n}e^{\frac{K}{6}(r_{2}^{2}-r_{1}^{2})+6L(R)}.\] By Theorem 2.1, following Buser's proof [3] or Saloff-Coste's alternative proof (Theorem 5.6.5 in [35]), we can get a local Neumann Poincare inequality, see also Lemma 3.1 in [30]. **Lemma 2.2**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Then for any \(p\in B_{R}(o)\), there exist constants \(c_{1}\) and \(c_{2}\) depending only on \(n\) such that_ \[\int_{B_{r}(p)}\left|u-u_{B_{r}(p)}\right|^{2}\leq c_{1}e^{c_{2}(Kr^{2}+L(R))} r^{2}\int_{B_{r}(p)}\left|\nabla u\right|^{2}\] _for all \(0<r<R\), where \(u\in C^{\infty}\left(B_{r}(p)\right)\) and \(u_{B_{r}(p)}=\frac{\int_{B_{r}(p)}u}{\operatorname{Vol}(B_{r}(p))}\)._ Combining Theorem 2.1 and Lemma 2.2, and using a similar argument as in the proof of Lemma 3.2 in [30], we obtain a local Neumann Sobolev inequality. **Theorem 2.3**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Then there exist constants \(\mu=4n-2>2\), \(c_{3}\) and \(c_{4}\) depending only on \(n\) such that_ \[\left(\int_{B_{r}(o)}\left|u-u_{B_{r}(o)}\right|^{\frac{2\mu}{\mu-2}}\right)^ {\frac{\mu-2}{\mu}}\leq\frac{c_{3}e^{c_{4}(Kr^{2}+L(R))}}{\operatorname{Vol}( B_{r}(o))^{\frac{2}{\mu}}}r^{2}\int_{B_{r}(o)}\left|\nabla u\right|^{2} \tag{2.9}\] _for all \(0<r<R\), where \(u\in C^{\infty}\left(B_{r}(o)\right)\) and \(u_{B_{r}(o)}=\frac{\int_{B_{r}(o)}u}{\operatorname{Vol}(B_{r}(o))}\)._ By the Minkowski inequality and applying (2.9), it is well known that one can get the following Sobolev inequality. **Theorem 2.4**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). 
Then there exist constants \(\mu=4n-2>2\), \(c_{5}\) and \(c_{6}\), all depending only on \(n\) such that_ \[\left(\int_{B_{r}(o)}\left|u\right|^{\frac{2\mu}{\mu-2}}\right)^{\frac{\mu-2}{ \mu}}\leq\frac{c_{5}e^{c_{6}(Kr^{2}+L(R))}}{\operatorname{Vol}(B_{r}(o))^{ \frac{2}{\mu}}}r^{2}\int_{B_{r}(o)}(\left|\nabla u\right|^{2}+r^{-2}u^{2}) \tag{2.10}\] _for all \(0<r<R\), where \(u\in C^{\infty}\left(B_{r}(o)\right)\)._ ## 3. Mean value inequality and Gaussian upper bounds of the heat kernel In this section, we apply Theorem 2.4 to prove a mean value inequality for the non-negative subsolution of the heat equation and the Gaussian upper bound (1.2) of the heat kernel on complete Riemannian manifolds with Bakry-Emery Ricci curvature bounded below. In the following context, for function \(u\) on \(M\), the \(L^{q}\) norm on a domain \(\Omega\subset M\) is denoted by \[||u||_{q,\Omega}=\left(\int_{\Omega}|f|^{q}\right)^{\frac{1}{q}}.\] \(||u||_{q}\) denotes the \(L^{q}\) norm of \(u\) on \(M\). **Proposition 3.1**.: _(Mean value inequality) Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). For any real number \(s\) and any \(0<\delta<\delta^{\prime}\leq 1\), let \(u\) be a smooth non-negative subsolution of the heat equation in the cylinder \(Q=B_{r}(o)\times(s-r^{2},s)\), \(0<r<R\)._ _For \(2\leq p<\infty\), there exist constants \(\tilde{c}_{1}(n)\) and \(\tilde{c}_{2}(n)\) such that_ \[\sup_{Q_{\delta}}u^{p}\leq\frac{\tilde{c}_{1}(n)e^{\tilde{c}_{2}(n)(Kr^{2}+L( R))}}{(\delta^{\prime}-\delta)^{dn}r^{2}\operatorname{Vol}(B_{r}(o))}\cdot \int_{Q_{\delta^{\prime}}}u^{p}dvdt. \tag{3.1}\] _For \(0<p<2\), there exist constants \(\tilde{c}_{3}(n,p)\) and \(\tilde{c}_{4}(n)\) such that_ \[\sup_{Q_{\delta}}u^{p}\leq\frac{\tilde{c}_{3}(n,p)e^{\tilde{c}_{4}(n)(Kr^{2}+ L(R))}}{(\delta^{\prime}-\delta)^{dn}r^{2}\operatorname{Vol}(B_{r}(o))}\cdot \int_{Q_{\delta^{\prime}}}u^{p}dvdt. \tag{3.2}\] _Here \(Q_{\delta}=B_{\delta r}(o)\times(s-\delta r^{2},s)\), \(Q_{\delta^{\prime}}=B_{\delta^{\prime}r}(o)\times(s-\delta^{\prime}r^{2},s)\), \(L(R)=\sup_{B_{3R}(o)}|f|\)._ Proof.: The proof is similar to Theorem 5.2.9 in [35]. We need to carefully examine the explicit coefficients of the mean value inequality in terms of the Sobolev constants in (2.10). Without loss of generality we may assume \(\delta^{\prime}=1.\) Denote \(B=B_{r}(o)\) for simplicity. 
For any non-negative function \(\phi\in C_{0}^{\infty}(B)\), we have \[\int_{B}(\phi u_{t}+\left\langle\nabla u,\nabla\phi\right\rangle)dv\leq 0.\] In particular, when \(\phi=\Phi^{2}u,\Phi\in C_{0}^{\infty}(B)\), we obtain \[\int_{B}(\Phi^{2}uu_{t}+\Phi^{2}|\nabla u|^{2})dv \leq 2\left|\int_{B}u\Phi\left\langle\nabla u,\nabla\Phi \right\rangle dv\right|\] \[\leq 3\int_{B}|\nabla\Phi|^{2}u^{2}dv+\frac{1}{3}\int_{B}\Phi^{2} |\nabla u|^{2}dv.\] It then implies that \[\int_{B}(2\Phi^{2}uu_{t}+|\nabla(\Phi u)|^{2})dv \leq 2\int_{B}\Phi^{2}uu_{t}dv+\frac{4}{3}\int_{B}\Phi^{2}|\nabla u |^{2}dv+4\int_{B}|\nabla\Phi|^{2}u^{2}dv\] \[\leq 10||\nabla\Phi||_{\infty}^{2}\int_{\operatorname{supp}(\Phi )}u^{2}dv.\] For any smooth non-negative function \(\lambda(t)\) of the time variable \(t\), which will be chosen later, we get \[\frac{\partial}{\partial t} \left(\int_{B}(\lambda\Phi u)^{2}dv\right)+\lambda^{2}\int_{B}| \nabla(\Phi u)|^{2}dv\] \[\leq 2\lambda|\lambda^{\prime}|\sup\Phi^{2}\int_{\operatorname{ supp}(\Phi)}u^{2}dv+\lambda^{2}\left(2\int_{B}\Phi^{2}uu_{t}dv+\int_{B}|\nabla( \Phi u)|^{2}dv\right)\] \[\leq C\lambda(\lambda||\nabla\Phi||_{\infty}^{2}+|\lambda^{ \prime}|\sup\Phi^{2})\int_{\operatorname{supp}(\Phi)}u^{2}dv,\] where \(C\) is a constant which will change from line to line in the following. Now we choose \(\Phi\) and \(\lambda(t)\) such that, for any \(0<\sigma^{\prime}<\sigma<1,\ w=\sigma-\sigma^{\prime}\), \((1)0\leq\Phi\leq 1\), \(\operatorname{supp}(\Phi)\subset B_{\sigma r}(o)\), \(\Phi=1\) in \(B_{\sigma^{\prime}r}(o)\) and \(|\nabla\Phi|\leq 2(wr)^{-1}\); \((2)0\leq\lambda\leq 1\), \(\lambda=0\) in \((-\infty,s-\sigma r^{2})\), \(\lambda=1\) in \((s-\sigma^{\prime}r^{2},+\infty)\), and \(|\lambda^{\prime}(t)|\leq 2(wr)^{-2}\). Let \(I_{\sigma}=(s-\sigma r^{2},s)\) and \(I_{\sigma^{\prime}}=(s-\sigma^{\prime}r^{2},s)\). For any \(t\in I_{\sigma^{\prime}}\), integrating the above inequality over \((s-\sigma r^{2},t)\), we obtain \[\sup_{I_{\sigma^{\prime}}}\left(\int_{B}\Phi^{2}u^{2}dv\right)\leq C(wr)^{-2} \int_{Q_{\sigma}}u^{2}dvdt, \tag{3.3}\] and \[\int_{B\times I_{\sigma^{\prime}}}|\nabla(\Phi u)|^{2}dvdt\leq C(wr)^{-2} \int_{Q_{\sigma}}u^{2}dvdt. \tag{3.4}\] On the other hand, by the Holder inequality and the Sobolev inequality in Theorem 2.4, for some constant \(\mu=4n-2\), we have \[\begin{split}\int_{B}g^{2(1+\frac{2}{\mu})}dv&\leq \left(\int_{B}|g|^{\frac{2\mu}{\mu-2}}dv\right)^{\frac{\mu-2}{\mu}}\left(\int _{B}|g|^{2}dv\right)^{\frac{2}{\mu}}\\ &\leq\left(\int_{B}|g|^{2}dv\right)^{\frac{2}{\mu}}\left(E(B) \int_{B}(|\nabla g|^{2}+r^{-2}g^{2})dv\right)\end{split} \tag{3.5}\] for all \(g\in C^{\infty}(B)\), where \(E(B)=\frac{\tilde{c}_{5}(n)e^{2\tilde{c}_{6}(n)(Kr^{2}+L(R))}}{\operatorname{ Vol}(B_{r}(o))^{\frac{2}{\mu}}}r^{2}\). 
Setting \(g=\Phi u\), \(\theta=1+\frac{2}{\mu}\), (3.5) becomes \[\int_{B}(\Phi u)^{2\theta}dv\leq\left(\int_{B}(\Phi u)^{2}dv\right)^{\frac{2} {\mu}}\left(E(B)\int_{B}(|\nabla(\Phi u)|^{2}+r^{-2}(\Phi u)^{2})dv\right).\] Combining (3.3), (3.4) and integrating the above inequality over \((s-\sigma^{\prime}r^{2},s)\), we obtain \[\int_{s-\sigma^{\prime}r^{2}}^{s}\int_{B}(\Phi u)^{2\theta}dvdt\geq\int_{Q_{ \sigma^{\prime}}}u^{2\theta}dvdt,\] \[\int_{B}(\Phi u)^{2}dv\leq\sup_{I_{\sigma^{\prime}}}\int_{B}(\Phi u)^{2}dv\leq C (wr)^{-2}\int_{Q_{\sigma}}u^{2}dv,\] and \[\int_{s-\sigma^{\prime}r^{2}}^{s}\int_{B}\left(|\nabla(\Phi u)|^{ 2}+r^{-2}(\Phi u)^{2}\right)dvdt =\int_{B\times I_{\sigma^{\prime}}}|\nabla(\Phi u)|^{2}dvdt+\int_ {s-\sigma^{\prime}r^{2}}^{s}\int_{B}r^{-2}(\Phi u)^{2}dvdt\] \[\leq\int_{B\times I_{\sigma^{\prime}}}|\nabla(\Phi u)|^{2}dvdt+ \sigma^{\prime}r^{2}\times r^{-2}\sup_{I_{\sigma^{\prime}}}\int_{B}(\Phi u)^ {2}dv\] \[\leq C(wr)^{-2}\int_{Q_{\sigma}}u^{2}dvdt,\] which implies \[\int_{Q_{\sigma^{\prime}}}u^{2\theta}dvdt\leq E(B)\left(C(wr)^{-2}\int_{Q_{ \sigma}}u^{2}dvdt\right)^{\theta}.\] For any \(m\geq 1\), \(u^{m}\) is also a smooth non-negative subsolution of the heat equation. Hence the above inequality indeed implies \[\int_{Q_{\sigma^{\prime}}}u^{2m\theta}dvdt\leq E(B)\left(C(wr)^{-2}\int_{Q_{ \sigma}}u^{2m}dvdt\right)^{\theta}\] for \(m\geq 1\). Let \(w_{i}=(1-\delta)2^{-i}\), which satisfies \(\sum_{1}^{\infty}w_{i}=1-\delta\). Let \(\sigma_{0}=1\), \(\sigma_{i+1}=\sigma_{i}-w_{i+1}\) \(=1-\sum_{j=1}^{i+1}w_{j}\). Applying (3.6) for \(m=\theta^{i}\), \(\sigma=\sigma_{i}\), \(\sigma^{\prime}=\sigma_{i+1}\), we have \[\int_{Q_{\sigma_{i+1}}}u^{2\theta^{i+1}}dvdt\leq E(B)\left[C^{i+1}((1-\delta) r)^{-2}\int_{Q_{\sigma_{i}}}u^{2\theta^{i}}dvdt\right]^{\theta},\] i.e., \[\left(\int_{Q_{\sigma_{i+1}}}u^{2\theta^{i+1}}dvdt\right)^{\theta^{-(i+1)}} \leq E(B)^{\theta^{-(i+1)}}C^{(i+1)\theta^{-i}}((1-\delta)r)^{-2\theta^{-i}} \left(\int_{Q_{\sigma_{i}}}u^{2\theta^{i}}dvdt\right)^{\theta^{-i}}.\] Iterating from \(i=0\) to \(\infty\), we obtain \[\sup_{Q_{\delta}}u^{2}\leq E(B)^{\sum_{i=0}^{\infty}\theta^{-(i+1)}}C^{\sum_{ i=0}^{\infty}(i+1)\theta^{-i}}((1-\delta)r)^{-2\sum_{i=0}^{\infty}\theta^{-i}} \int_{Q}u^{2}dvdt.\] Therefore \[\sup_{Q_{\delta}}u^{2}\leq C(n)E(B)^{\frac{\mu}{2}}((1-\delta)r)^{-(\mu+2)}||u ||_{2,Q}^{2},\] i.e., \[\sup_{Q_{\delta}}u^{2}\leq\frac{\tilde{c}_{7}(n)e^{\tilde{c}_{8}(n)(Kr^{2}+L(R))} }{(1-\delta)^{4n}r^{2}\operatorname{Vol}(B_{r}(o))}\int_{Q}u^{2}dvdt. \tag{3.7}\] Formula (3.7) is in fact an \(L^{2}\)-mean value inequality. The case \(p\geq 2\) immediately follows, since for any smooth non-negative subsolution \(u\) of the heat equation, \(u^{\frac{p}{2}}\), \(p\geq 2\) is also a smooth non-negative subsolution of the heat equation. Next, for \(0<p<2\), we will apply (3.7) to prove (3.2) by a different iterative argument. Let \(\sigma\in(0,1)\) and \(\eta=\sigma+(1-\sigma)/4\). Then (3.7) implies \[\sup_{Q_{\sigma}}u\leq F(B)(1-\sigma)^{(-1-\frac{\mu}{2})}\left(\int_{Q_{\eta} }u^{2}dvdt\right)^{\frac{1}{2}},\] where \(F(B)=\frac{\tilde{c}_{9}(n)e^{\tilde{c}_{10}(n)(Kr^{2}+L(R))}}{r\operatorname {Vol}(B_{r}(o))^{\frac{1}{2}}}\). 
Since \(\left(\int_{Q_{\eta}}u^{2}dvdt\right)^{\frac{1}{2}}=\left(\int_{Q_{\eta}}u^{p} u^{2-p}dvdt\right)^{\frac{1}{2}}\leq\sup_{Q_{\eta}}u^{1-\frac{p}{2}}\left( \int_{Q}u^{p}dvdt\right)^{\frac{1}{2}}\), we have \[||u||_{\infty,Q_{\sigma}}=\sup_{Q_{\sigma}}u\leq F(B)(1-\sigma)^{(-1-\frac{\mu }{2})}||u||_{p,Q}^{\frac{p}{2}}||u||_{\infty,Q_{\eta}}^{1-\frac{p}{2}}. \tag{3.8}\] Now fix \(\delta\in(0,1)\) and let \(\sigma_{0}=\delta\), \(\sigma_{i+1}=\sigma_{i}+(1-\sigma_{i})/4\), which satisfy \(1-\sigma_{i}=(\frac{3}{4})^{i}(1-\delta)\). Applying (3.8) to \(\sigma=\sigma_{i}\), and \(\eta=\sigma_{i+1}\), we have \[||u||_{\infty,Q_{\sigma_{i}}}\leq(\frac{4}{3})^{(1+\frac{\mu}{2})i}F(B)||u||_ {p,Q}^{\frac{p}{2}}(1-\delta)^{(-1-\frac{\mu}{2})}||u||_{\infty,Q_{\sigma_{i+1 }}}^{1-\frac{p}{2}}.\] Therefore, for any \(i\), \[||u||_{\infty,Q_{\delta}}\leq(\frac{4}{3})^{(1+\frac{\mu}{2})\sum_{j}j(1- \frac{p}{2})^{j}}\times[F(B)||u||_{p,Q}^{\frac{p}{2}}(1-\delta)^{(-1-\frac{\mu }{2})}]^{\sum_{j}(1-\frac{p}{2})^{j}}||u||_{\infty,Q_{\sigma_{i}}}^{(1-\frac{p }{2})^{i}},\] where \(\sum\) denotes the summations from \(0\) to \(i-1\). Letting \(i\to\infty\) we get \[||u||_{\infty,Q_{\delta}}\leq\left(\frac{4}{3}\right)^{\tilde{c}(n,p)}\times [F(B)||u||_{p,Q}^{\frac{p}{2}}(1-\delta)^{(-1-\frac{\mu}{2})}]^{\frac{2}{p}},\] that is \[\sup_{Q_{\delta}}u^{p}\leq\frac{\tilde{c}_{11}(n,p)e^{\tilde{c}_{12}(n)(Kr^{2 }+L(R))}}{(1-\delta)^{4n}r^{2}\operatorname{Vol}(B_{r}(o))}\int_{Q}u^{p}dvdt, \ 0<p<2.\] Then the proposition follows. To get the Gaussian upper bound of the heat kernel, let us first recall Davies' double integral estimate [10]. **Lemma 3.2**.: _(Davies [10]) Let \((M^{n},g)\) be a complete Riemannian manifold and \(H(x,y,t)\) the heat kernel. Let \(\mu_{1}(M)\geq 0\) be the greatest lower bound for the \(L^{2}\)-spectrum of the Laplacian \(\Delta\) on \(M\). Assume that \(B_{1}\) and \(B_{2}\) are bounded subsets of \(M\). Then_ \[\int_{B_{1}}\int_{B_{2}}H(x,y,t)dydx\leq\operatorname{Vol}(B_{1})^{\frac{1}{2} }\operatorname{Vol}(B_{2})^{\frac{1}{2}}e^{\left(-\frac{d^{2}(B_{1},B_{2})}{4 t}-\mu_{1}(M)t\right)},\] _where \(d(B_{1},B_{2})\) denotes the distance between \(B_{1}\) and \(B_{2}\)._ Now we prove the Gaussian upper bound of the heat kernel by applying the mean value inequality in Proposition 3.1 and Lemma 3.2. Proof of Theorem 1.1.: Fixing \(x\in B_{\frac{1}{2}R}(o)\) and applying Proposition 3.1 to the heat kernel \(H(x,y,t)\) with \(Q=B_{\sqrt{t}}(y)\times(t-(\sqrt{t})^{2},t),\delta=\frac{1}{8}\) and \(\delta^{\prime}=\frac{1}{4}\), we have \[H(x,y,t)\leq\sup_{(z,s)\in Q_{\delta}}H(x,z,s) \leq\frac{\overline{c}_{1}(n)e^{\overline{c}_{2}(n)(Kt+\sup_{B_{ 3}\sqrt{t}(y)}^{\infty}|f|)}}{\left(\frac{1}{8}\right)^{4n}t\operatorname{Vol }(B_{\sqrt{t}}(y))}\int_{t-\frac{1}{4}t}^{t}\int_{B_{\frac{1}{4}\sqrt{t}(y)}}H (x,z,s)dzds \tag{3.9}\] \[\leq\frac{\overline{c}_{1}(n)e^{\overline{c}_{2}(n)(Kt+L(R))}}{ \left(\frac{1}{8}\right)^{4n}t\operatorname{Vol}(B_{\sqrt{t}}(y))}\int_{t- \frac{1}{4}t}^{t}\int_{B_{\frac{1}{4}\sqrt{t}(y)}}H(x,z,s)dzds\] \[\leq\frac{\overline{c}_{1}(n)e^{\overline{c}_{2}(n)(Kt+L(R))}}{ \operatorname{Vol}(B_{\sqrt{t}}(y))}\int_{B_{\sqrt{t}}(y)}H(x,z,s^{\prime})dz\] for some \(s^{\prime}\in\left(\frac{3}{4}t,t\right)\), where \(Q_{\delta}=B_{\frac{1}{8}\sqrt{t}}(y)\times(t-\frac{1}{8}(\sqrt{t})^{2},t)\) and \(B_{\sqrt{t}}(y)\subset B_{R}(o)\) for any \(y\in B_{\frac{1}{2}R}(o)\) and \(0<t<\frac{R^{2}}{4}\). 
Fixing \(z\in B_{\sqrt{t}}(y)\) and applying Proposition 3.1 to the heat kernel \(H(x,z,s^{\prime})\) with \(Q=B_{\sqrt{t}}(x)\times(t-(\sqrt{t})^{2},t)\), \(\delta=\frac{1}{4}\) and \(\delta^{\prime}=\frac{1}{2}\), we have \[H(x,z,s^{\prime})\leq\sup_{(x^{\prime},s^{\prime\prime})\in Q_{ \delta}}H(x^{\prime},z,s^{\prime\prime}) \leq\frac{\overline{c}_{4}(n)e^{\overline{c}_{5}(n)(Kt+L(R))}}{ \left(\frac{1}{4}\right)^{4n}t\operatorname{Vol}(B_{\sqrt{t}}(x))}\int_{t- \frac{1}{2}t}^{t}\int_{B_{\frac{1}{2}\sqrt{t}}(x)}H(w,z,\tau)dwd\tau \tag{3.10}\] \[\leq\frac{\overline{c}_{4}(n)e^{\overline{c}_{5}(n)(Kt+L(R))}}{ \left(\frac{1}{4}\right)^{4n}t\operatorname{Vol}(B_{\sqrt{t}}(x))}\int_{\frac {1}{2}t}^{t}\int_{B_{\sqrt{t}}(x)}H(w,z,\tau)dwd\tau\] \[=\frac{\overline{c}_{6}(n)e^{\overline{c}_{5}(n)(Kt+L(R))}}{ \operatorname{Vol}(B_{\sqrt{t}}(x))}\int_{B_{\sqrt{t}}(x)}H(w,z,\tau^{\prime })dw\] for some \(\tau^{\prime}\in\left(\frac{1}{2}t,t\right)\), where \(Q_{\delta}=B_{\frac{1}{4}\sqrt{t}}(x)\times(t-\frac{1}{4}(\sqrt{t})^{2},t)\) and \(B_{\sqrt{t}}(x)\subset B_{R}(o)\) for any \(x\in B_{\frac{1}{2}R}(o)\) and \(0<t<\frac{R^{2}}{4}\). Combining (3.9) and (3.10), the heat kernel satisfies \[H(x,y,t)\leq\frac{\overline{c}_{7}(n)e^{\overline{c}_{8}(n)(Kt+L(R))}}{ \operatorname{Vol}(B_{\sqrt{t}}(x))\operatorname{Vol}(B_{\sqrt{t}}(y))}\int_{B _{\sqrt{t}}(y)}\int_{B_{\sqrt{t}}(x)}H(w,z,\tau^{\prime})dwdz \tag{3.11}\] for any \(x,\ y\in B_{\frac{1}{2}R}(o)\) and \(0<t<\frac{R^{2}}{4}\). Using Lemma 3.2 and noticing that \(\tau^{\prime}\in\left(\frac{1}{2}t,t\right)\), (3.11) becomes \[H(x,y,t)\leq\frac{\overline{c}_{7}(n)e^{\overline{c}_{8}(n)(Kt+L(R))}}{\operatorname {Vol}(B_{\sqrt{t}}(x))^{\frac{1}{2}}\operatorname{Vol}(B_{\sqrt{t}}(y))^{ \frac{1}{2}}}e^{\left(-\frac{d^{2}(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))}{4t} \right)} \tag{3.12}\] for all \(x,\ y\in B_{\frac{1}{2}R}(o)\) and \(0<t<\frac{R^{2}}{4}\). Notice that if \(d(x,y)\leq 2\sqrt{t}\), then \(d(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))=0\), and hence \[-\frac{d^{2}(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))}{4t}=0\leq-\frac{d^{2}(x,y)}{4t},\] and if \(d(x,y)>2\sqrt{t}\), then \(d(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))=d(x,y)-2\sqrt{t}\), and hence \[-\frac{d^{2}(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))}{4t}=-\frac{(d(x,y)-2\sqrt{t})^{ 2}}{4t}\leq 1-\frac{d^{2}(x,y)}{4(1+\epsilon)t}+\frac{1}{\epsilon}\] for any \(\epsilon>0\). Combining the above two conditions, gives that \[e^{\left(-\frac{d^{2}(B_{\sqrt{t}}(x),B_{\sqrt{t}}(y))}{4t}\right)}\leq e^{ \left(-\frac{d^{2}(x,y)}{4(1+\epsilon)t}+1+\frac{1}{\epsilon}\right)}.\] Therefore, by (3.12) we have \[H(x,y,t)\leq\frac{\overline{c}_{9}(n,\epsilon)e^{\overline{c}_{8}(n)(Kt+L(R)) }}{\operatorname{Vol}(B_{\sqrt{t}}(x))^{\frac{1}{2}}\operatorname{Vol}(B_{ \sqrt{t}}(y))^{\frac{1}{2}}}e^{\left(-\frac{d^{2}(x,y)}{(4+\epsilon)t}\right)} \tag{3.13}\] for all \(x,y\in B_{\frac{1}{2}R}(o)\) and \(0<t<\frac{R^{2}}{4}\), where \(\lim_{\epsilon\to 0}\overline{c}_{9}(n,\epsilon)=\infty\). (3.1) follows by letting \(R\to\infty\). Notice that Theorem 2.1 implies \[\operatorname{Vol}(B_{\sqrt{t}}(y)) \leq\operatorname{Vol}(B_{\sqrt{t}+d(x,y)}(x))\] \[\leq\left(\frac{\sqrt{t}+d(x,y)}{\sqrt{t}}\right)^{n}e^{\frac{K} {6}(d^{2}(x,y)+2d(x,y)\sqrt{t})+6L(R)}\operatorname{Vol}(B_{\sqrt{t}}(x))\] for all \(x,y\in B_{\frac{1}{4}R}(o)\) and \(B_{\sqrt{t}+d(x,y)}(x)\subset B_{R}(o)\) with \(0<t<\frac{R^{2}}{16}\). Therefore, the upper bound in Theorem 1.1 can be rewritten as follows. 
**Corollary 3.3**.: _Under the same assumptions as Theorem 1.1, for any \(\epsilon>0\), there exist constants \(C_{5}(n,\epsilon)\) and \(C_{6}(n)\), such that_ \[H(x,y,t)\leq\frac{C_{5}(n,\epsilon)e^{C_{6}(n)(KR^{2}+L(R)+Kd^{2}(x,y))}}{ \operatorname{Vol}(B_{\sqrt{t}}(y))}\left(1+\frac{d(x,y)}{\sqrt{t}}\right)^{ \frac{n}{2}}e^{\left(-\frac{d^{2}(x,y)}{(4+\epsilon)t}\right)} \tag{3.14}\] _for all \(x,y\in B_{\frac{1}{4}R}(o)\) and \(0<t<R^{2}/16\), where \(L(R)=\sup_{B_{3R}(o)}|f|\) and \(\lim_{\epsilon\to 0}C_{5}(n,\epsilon)=\infty\)._ ## 4. \(L^{1}\)-Liouville theorem and uniqueness of \(L^{1}\) solutions of the heat equation In this section, inspired by the work of Li [19], we prove the \(L^{1}\)-Liouville theorem for non-negative subharmonic functions and the uniqueness theorem for \(L^{1}\)-solutions of the heat equation on complete noncompact Riemannian manifolds with Bakry-Emery Ricci curvature bounded below and the potential function of at most quadratic growth. We start from a useful lemma. **Lemma 4.1**.: _(Theorem 11.8 in [13]) Let \((M^{n},g)\) be a complete Riemannian manifold. If, for some point \(x_{0}\in M\),_ \[\int_{1}^{+\infty}\frac{R\,dR}{\ln(\operatorname{Vol}(B_{R}(x_{0})))}=\infty,\] _then \((M^{n},g)\) is stochastically complete, i.e.,_ \[\int_{M}H(x,y,t)dy=1.\] Under our assumptions mentioned earlier, it is easy to check the stochastic completeness of the manifold due to Theorem 2.1. **Proposition 4.2**.: _Let \((M^{n},g)\) be a complete non-compact Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) for some constant \(K\geq 0\). Assume there exist non-negative constants \(a\) and \(b\) such that_ \[|f|(x)\leq ar^{2}(x)+b\ for\ all\ x\in M,\] _where \(r(x)=d(x,o)\) is the geodesic distance function to a fixed point \(o\in M\). Then \((M^{n},g)\) is stochastically complete._ Proof.: In (2.4), letting \(r_{1}\to 0,\ r_{2}=R\), and \(p=o\in M\) yields \[\operatorname{Vol}(B_{R}(o))\leq c(n,b)R^{n}e^{c(K,a)R^{2}}\] for all \(R>1\). Hence \[\int_{1}^{+\infty}\frac{R\,dR}{\ln(\operatorname{Vol}(B_{R}(o)))}=\infty.\] By Lemma 4.1, this implies that \((M^{n},g)\) is stochastically complete. Now, we are ready to check the integration by parts formula by using the upper bound of the heat kernel in Corollary 3.3 and the mean value inequality in Proposition 3.1. **Proposition 4.3**.: _Under the same assumptions as Proposition 4.2, for any non-negative \(L^{1}\)-integrable subharmonic function \(u\), we have_ \[\int_{M}\Delta_{y}H(x,y,t)u(y)dy=\int_{M}H(x,y,t)\Delta_{y}u(y)dy\] _for any \(x\in M\) and \(t>0\)._ Proof.: Applying integration by parts on \(B_{R}(o)\) gives \[\begin{split}&\left|\int_{B_{R}(o)}\Delta_{y}H(x,y,t)u(y)dy-\int_{B _{R}(o)}H(x,y,t)\Delta_{y}u(y)dy\right|\\ &=\left|\int_{\partial B_{R}(o)}\frac{\partial}{\partial r}H(x,y,t )u(y)dS-\int_{\partial B_{R}(o)}H(x,y,t)\frac{\partial}{\partial r}u(y)dS \right|\\ &\leq\int_{\partial B_{R}(o)}|\nabla_{y}H|(x,y,t)u(y)dS+\int_{ \partial B_{R}(o)}H(x,y,t)|\nabla_{y}u|(y)dS,\end{split} \tag{4.1}\] where \(dS\) denotes the area measure on \(\partial B_{R}(o)\). We shall show that the above two boundary integrals vanish as \(R\to\infty\). Without loss of generality, we assume \(R>1\) and \(x\in B_{\frac{1}{8}R}(o)\). Step 1. Since \(|f|(x)\leq ar^{2}(x)+b\), by Proposition 3.1, we get \[\sup_{B_{R}(o)}u\leq\frac{\tilde{C}_{1}(n)e^{\tilde{C}_{2}(n,K,a,b)R^{2}}}{ \operatorname{Vol}(B_{2R}(o))}\int_{B_{2R}(o)}u\leq\frac{\tilde{C}_{1}(n)e^{ \tilde{C}_{2}(n,K,a,b)R^{2}}}{\operatorname{Vol}(B_{2R}(o))}||u||_{1}. 
\tag{4.2}\] Let \(\phi(y)=\phi(d(y,o))\) be a non-negative cut-off function satisfying \(0\leq\phi\leq 1,\ |\nabla\phi|\leq\sqrt{3}\), \(\phi(y)=1\) on \(B_{R+1}(o)\backslash B_{R}(o)\), and \(\phi(y)=0\) on \(B_{R-1}(o)\cup(M\backslash B_{R+2}(o))\). Since \(u\) is subharmonic, by the Cauchy-Schwarz inequality, we have \[0\leq\int_{M}\phi^{2}u\Delta u =-\int_{M}\left\langle\nabla(\phi^{2}u),\nabla u\right\rangle\] \[=-2\int_{M}\phi u\left\langle\nabla\phi,\nabla u\right\rangle- \int_{M}\phi^{2}|\nabla u|^{2}\] \[\leq 2\int_{M}|\nabla\phi|^{2}u^{2}-\frac{1}{2}\int_{M}\phi^{2}| \nabla u|^{2},\] i.e., \[\int_{M}\phi^{2}|\nabla u|^{2}\leq 4\int_{M}|\nabla\phi|^{2}u^{2}.\] It then follows from (4.2) that \[\int_{B_{R+1}(o)\backslash B_{R}(o)}|\nabla u|^{2}\leq\int_{M} \phi^{2}|\nabla u|^{2} \leq 4\int_{M}|\nabla\phi|^{2}u^{2}\] \[\leq 12\int_{B_{R+2}(o)}u^{2}\] \[\leq 12\sup_{B_{R+2}(o)}u\times||u||_{1}\] \[\leq\frac{\tilde{C}_{3}(n)e^{\tilde{C}_{4}(n,K,a,b)(R+2)^{2}}}{ \operatorname{Vol}(B_{2R+4}(o))}||u||_{1}^{2}.\] On the other hand, the Cauchy-Schwarz inequality implies that \[\int_{B_{R+1}(o)\backslash B_{R}(o)}|\nabla u|\leq\left(\int_{B_{R+1}(o) \backslash B_{R}(o)}|\nabla u|^{2}\right)^{\frac{1}{2}}\cdot\left[\operatorname {Vol}(B_{R+1}(o))-\operatorname{Vol}(B_{R}(o))\right]^{\frac{1}{2}}.\] Combining the above two inequalities, we have \[\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla u|\leq\tilde{C}_{5}(n)e^{\tilde{C}_{6} (n,K,a,b)(R+2)^{2}}||u||_{1}. \tag{4.3}\] Step 2. By letting \(\epsilon=1\) in Corollary 3.3, the heat kernel \(H(x,y,t)\) satisfies \[H(x,y,t)\leq\frac{\tilde{C}_{7}(n)e^{\tilde{C}_{8}(n,K,a,b)[R^{2}+d^{2}(x,y)]} }{\text{Vol}(B_{\sqrt{t}}(x))}\left(1+\frac{d(x,y)}{\sqrt{t}}\right)^{\frac{n }{2}}e^{\left(-\frac{d^{2}(x,y)}{5t}\right)} \tag{4.4}\] for all \(x,y\in B_{\frac{1}{4}R}(o)\) and \(0<t<\frac{R^{2}}{16}\). Together with (4.3) we get \[J_{1}: =\int_{B_{R+1}(o)\setminus B_{R}(o)}H(x,y,t)|\nabla u|(y)dy\] \[\leq\sup_{y\in B_{R+1}(o)\setminus B_{R}(o)}H(x,y,t)\cdot\int_{B_ {R+1}(o)\setminus B_{R}(o)}|\nabla u|(y)dy\] \[\leq\frac{\tilde{C}_{9}(n)e^{\tilde{C}_{10}(n,K,a,b)[(R+2)^{2}+(d (x,o)+R+1)^{2}]-\frac{(R-d(x,o))^{2}}{5t}}||u||_{1}}{\text{Vol}(B_{\sqrt{t}}(x ))}\left(1+\frac{d(x,o)+R+1}{\sqrt{t}}\right)^{\frac{n}{2}}.\] Thus, for \(T\) sufficiently small, all \(t\in(0,T)\), and \(d(x,o)\leq\frac{R}{8}\), there exists a fixed constant \(\beta>0\) such that \[J_{1}\leq\frac{\hat{C}_{1}(n,K,a,b)||u||_{1}}{\text{Vol}(B_{\sqrt{t}}(x))} \left(1+\frac{2R+1}{\sqrt{t}}\right)^{\frac{n}{2}}e^{\left[-\beta R^{2}+\hat{ C}_{2}(n,K,a,b)R\right]}.\] Hence for all \(t\in(0,T)\) and all \(x\in M\), \(J_{1}\to 0\) as \(R\to\infty\). Step 3. We show that \(\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|(x,y,t)u(y)dy\to 0\) as \(R\to\infty\). 
First, consider the integral \[\int_{M}\phi^{2}(y)|\nabla_{y}H|^{2}(x,y,t)dy =-2\int_{M}\left\langle H(x,y,t)\nabla\phi(y),\phi(y)\nabla_{y}H (x,y,t)\right\rangle dy\] \[\quad-\int_{M}\phi^{2}(y)H(x,y,t)\Delta_{y}H(x,y,t)dy\] \[\leq 2\int_{M}|\nabla\phi|^{2}(y)H^{2}(x,y,t)dy+\frac{1}{2}\int_{M }\phi^{2}(y)|\nabla_{y}H|^{2}(x,y,t)dy\] \[\quad-\int_{M}\phi^{2}(y)H(x,y,t)\Delta_{y}H(x,y,t)dy.\] This implies \[\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|^{2}(x,y,t)dy\leq \int_{M}\phi^{2}(y)|\nabla_{y}H|^{2}(x,y,t)dy\] \[\leq 4\int_{M}|\nabla\phi|^{2}(y)H^{2}(x,y,t)dy-2\int_{M}\phi^{2}(y )H(x,y,t)\Delta_{y}H(x,y,t)dy\] \[\leq 12\int_{B_{R+2}(o)\setminus B_{R-1}(o)}H^{2}(x,y,t)dy+2\int_{ B_{R+2}(o)\setminus B_{R-1}(o)}H(x,y,t)|\Delta_{y}H|(x,y,t)dy\] \[\leq 12\int_{B_{R+2}(o)\setminus B_{R-1}(o)}H^{2}(x,y,t)dy+2\left( \int_{B_{R+2}(o)\setminus B_{R-1}(o)}H^{2}\right)^{\frac{1}{2}}\left(\int_{M}| \Delta_{y}H|^{2}(x,y,t)\right)^{\frac{1}{2}}. \tag{4.5}\] By Proposition 4.2, we have \[\int_{M}H(x,y,t)dy=1\] for all \(x\in M\) and \(t>0\). Combining this with (4.4) gives \[\int_{B_{R+2}(o)\setminus B_{R-1}(o)}H^{2}(x,y,t)dy\leq\sup_{y\in B _{R+2}(o)\setminus B_{R-1}(o)}H(x,y,t) \tag{4.6}\] \[\leq\frac{\tilde{C}_{11}(n)e^{\tilde{C}_{12}(n,K,a,b)[(R+2)^{2}+( d(x,o)+R+2)^{2}]-\frac{(R-1-d(o,x))^{2}}{5t}}}{\operatorname{Vol}(B_{\sqrt{t}}(x))} \left(1+\frac{d(x,o)+R+2}{\sqrt{t}}\right)^{\frac{n}{2}}.\] Form (29.15) in [20], we know that \[\int_{M}(\Delta_{y}H)^{2}(x,y,t)dy\leq\frac{\tilde{C}}{t^{2}}H(x,x,t). \tag{4.7}\] Combining (4.5), (4.6) and (4.7), we obtain \[\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|^{2}(x,y,t)dy\leq \tilde{C}_{13}(n)e^{\tilde{C}_{14}(n,K,a,b)[(R+2)^{2}+(d(x,o)+R+2)^{2}]-\frac {(R-1-d(x,o))^{2}}{10t}}\] \[\times\left[\operatorname{Vol}(B_{\sqrt{t}}(x))^{-1}+\operatorname {Vol}(B_{\sqrt{t}}(x))^{-\frac{1}{2}}t^{-1}H^{\frac{1}{2}}(x,x,t)\right]\left( 1+\frac{d(x,o)+R+2}{\sqrt{t}}\right)^{\frac{n}{2}}.\] By the Cauchy-Schwarz inequality, we get \[\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|(x,y,t)dy \tag{4.8}\] \[\leq[\operatorname{Vol}(B_{R+1}(o))-\operatorname{Vol}(B_{R}(o))] ^{\frac{1}{2}}\cdot\left(\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|^{2 }(x,y,t)dy\right)^{\frac{1}{2}}\] \[\leq\operatorname{Vol}(B_{R+1}(o))^{\frac{1}{2}}\tilde{C}_{16}(n )e^{\tilde{C}_{15}(n,K,a,b)[(R+2)^{2}+(d(x,o)+R+2)^{2}]-\frac{(R-1-d(x,o))^{2 }}{20t}}\] \[\times\left[\operatorname{Vol}(B_{\sqrt{t}}(x))^{-1}+\operatorname {Vol}(B_{\sqrt{t}}(x))^{-\frac{1}{2}}t^{-1}H^{\frac{1}{2}}(x,x,t)\right]^{ \frac{1}{2}}\left(1+\frac{d(x,o)+R+2}{\sqrt{t}}\right)^{\frac{n}{4}}.\] Therefore, by (4.2) and (4.8), we have \[J_{2}: =\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|(x,y,t)u(y)dy\] \[\leq\sup_{y\in B_{R+1}(o)\setminus B_{R}(o)}u(y)\cdot\int_{B_{R+1} (o)\setminus B_{R}(o)}|\nabla_{y}H|(x,y,t)dy\] \[\leq\frac{\tilde{C}_{18}(n)e^{\tilde{C}_{17}(n,K,a,b)(R+1)^{2}}}{ \text{Vol}(B_{2R+2}(o))^{\frac{1}{2}}}||u||_{1}e^{\tilde{C}_{15}(n,K,a,b)[(R+2) ^{2}+(d(x,o)+R+2)^{2}]-\frac{(R-1-d(x,o))^{2}}{20t}}\] \[\times\left[\text{Vol}(B_{\sqrt{t}}(x))^{-1}+\text{Vol}(B_{\sqrt{ t}}(x))^{-\frac{1}{2}}t^{-1}H^{\frac{1}{2}}(x,x,t)\right]^{\frac{1}{2}}\left(1+ \frac{d(x,o)+R+2}{\sqrt{t}}\right)^{\frac{n}{4}}\] \[\leq\frac{\tilde{C}_{18}(n)e^{\tilde{C}_{19}(n,K,a,b)[(R+2)^{2}+(d (x,o)+R+2)^{2}]-\frac{(R-1-d(x,o))^{2}}{20t}||u||_{1}}}{\text{Vol}(B_{2}(o))^ {\frac{1}{2}}}\] \[\times\left[\text{Vol}(B_{\sqrt{t}}(x))^{-1}+\text{Vol}(B_{\sqrt{ 
t}}(x))^{-\frac{1}{2}}t^{-1}H^{\frac{1}{2}}(x,x,t)\right]^{\frac{1}{2}}\left(1+ \frac{d(x,o)+R+2}{\sqrt{t}}\right)^{\frac{n}{4}}.\] Similar to the case of \(J_{1}\), by choosing \(T\) sufficiently small, for all \(t\in(0,T)\) and all \(x\in M\), \(J_{2}\) also tends to zero when \(R\) tends to infinity. Step 4. By the mean value theorem, for all \(R>0\) there exists \(\bar{R}\in(R,R+1)\) such that \[J: =\int_{\partial B_{\bar{R}}(o)}|\nabla_{y}H|(x,y,t)u(y)dS+\int_{ \partial B_{\bar{R}}(o)}H(x,y,t)|\nabla_{y}u|(y)dS\] \[=\int_{B_{R+1}(o)\setminus B_{R}(o)}|\nabla_{y}H|(x,y,t)u(y)dy+ \int_{B_{R+1}(o)\setminus B_{R}(o)}H(x,y,t)|\nabla_{y}u|(y)dy\] \[=J_{2}+J_{1}.\] From step 2 and step 3, we know that by choosing \(T\) sufficiently small, for all \(t\in(0,T)\) and all \(x\in M\), \(J\) tends to zero along a sequence of radii tending to infinity. Since \(H(x,y,t)\Delta u\geq 0\) and \(\Delta_{y}H(x,y,t)=\frac{\partial}{\partial t}H(x,y,t)\) is bounded by a result of Grigor'yan [14], one can see from (4.1) that \(J\to 0\) for all \(R\to\infty\). Therefore, Proposition 4.3 is true for \(t\) sufficiently small. Step 5. For all \(t\in(0,T)\) and \(s\in(0,+\infty)\), using the semigroup property of the heat kernel, we have \[\int_{M}\Delta_{y}H(x,y,t+s)u(y)dy =\int_{M}\int_{M}H(x,z,s)\Delta_{y}H(z,y,t)dzu(y)dy\] \[=\int_{M}\left(\int_{M}\Delta_{y}H(z,y,t)u(y)dy\right)H(x,z,s)dz\] \[=\int_{M}\left(\int_{M}H(z,y,t)\Delta_{y}u(y)dy\right)H(x,z,s)dz\] \[=\int_{M}H(x,y,t+s)\Delta_{y}u(y)dy.\] This completes the proof of Proposition 4.3 for all time \(t>0\) Applying the regularity theory of harmonic functions, combining Proposition 4.2 and Propositon 4.3, we can obtain the \(L^{1}\)-Liouville property. Proof of Theorem 1.3.: Let \(u(x)\) be a non-negative \(L^{1}\)-integrable subharmonic function on \(M\). We define a space-time function \[u(x,t)=\int_{M}H(x,y,t)u(y)dy\] with initial data \(u(x,0)=u(x)\). From Proposition 4.3, we conclude that \[\frac{\partial}{\partial t}u(x,t) =\int_{M}\frac{\partial}{\partial t}H(x,y,t)u(y)dy\] \[=\int_{M}\Delta_{y}H(x,y,t)u(y)dy\] \[=\int_{M}H(x,y,t)\Delta_{y}u(y)dy\geq 0,\] that is, \(u(x,t)\) is increasing in \(t\). By Proposition 4.2, \[\int_{M}H(x,y,t)dy=1\] for all \(x\in M\) and \(t>0\). So we have \[\int_{M}u(x,t)dx=\int_{M}\int_{M}H(x,y,t)u(y)dydx=\int_{M}u(y)dy.\] Since \(u(x,t)\) is increasing in \(t\), so \(u(x,t)=u(x)\) and hence \(u(x)\) is a non-negative harmonic function, i.e., \(\Delta u(x)=0\). On the other hand, for any positive constant \(\epsilon\), let us define a new function \(h(x)=\min\{u(x),\epsilon\}\). Then \(h\) satisfies \(0\leq h(x)\leq u(x),\ \ |\nabla h|\leq|\nabla u|\ \ \text{and}\ \Delta h(x)\leq 0\). So \(h\) has the integration by parts formula as \(u\). Similarly we define \(h(x,t)=\int_{M}H(x,y,t)h(y)dy\) and \[\frac{\partial}{\partial t}h(x,t) =\int_{M}\frac{\partial}{\partial t}H(x,y,t)h(y)dy\] \[=\int_{M}H(x,y,t)\Delta_{y}h(y)dy\leq 0.\] By the same argument, we have that \(\Delta h(x)=0\). By the regularity theory of harmonic functions, that is impossible unless \(h=u\) or \(h=\epsilon\). Since \(\epsilon\) is arbitrary and \(u\) is non-negative, so \(u\) must be identically constant. The theorem then follows from the fact that the absolute value of a harmonic function is a non-negative subharmonic function. With the \(L^{1}\)-Liouville property, one can prove the uniqueness of \(L^{1}\) solution of the heat equation. 
Proof of Theorem 1.7.: Let \(u(x,t)\in L^{1}\) be a non-negative function satisfying the assumptions in Theorem 1.7. For \(\epsilon>0\), we define a space-time function \[u_{\epsilon}(x,t)=\int_{M}H(x,y,t)u(y,\epsilon)dy \tag{4.9}\] and \[F_{\epsilon}(x,t)=\max\{0,u(x,t+\epsilon)-u_{\epsilon}(x,t)\}.\] Then \(F_{\epsilon}(x,t)\) is non-negative and satisfies \[\lim_{t\to 0}F_{\epsilon}(x,t)=0,\ \left(\Delta-\frac{\partial}{\partial t} \right)F_{\epsilon}(x,t)\geq 0.\] Let \(T>0\) be fixed. Let \(h(x)=\int_{0}^{T}F_{\epsilon}(x,t)dt\), which implies \[\Delta h(x)=\int_{0}^{T}\Delta F_{\epsilon}(x,t)dt\geq\int_{0}^{T}\partial_{t }F_{\epsilon}(x,t)dt=F_{\epsilon}(x,T)\geq 0, \tag{4.10}\] and \[\int_{M}h(x)dx =\int_{0}^{T}\int_{M}F_{\epsilon}(x,t)dxdt\leq\int_{0}^{T}\int_{ M}|u(x,t+\epsilon)-u_{\epsilon}(x,t)|dxdt\] \[\leq\int_{0}^{T}\int_{M}u(x,t+\epsilon)dxdt+\int_{0}^{T}\int_{M}u _{\epsilon}(x,t)dxdt<\infty,\] where the first term on the right hand is finite from our assumption, and the second term is finite because the heat semigroup is contractive on \(L^{1}\). Therefore, \(h(x)\) is a non-negative \(L^{1}\)-integrable subharmonic function. By Theorem 1.3, \(h(x)\) must be constant. Combining with (4.10) we have \(F_{\epsilon}(x,t)=0\) for all \(x\in M\) and \(t>0\), which implies \[u_{\epsilon}(x,t)\geq u(x,t+\epsilon). \tag{4.11}\] Next we estimate the function \(u_{\epsilon}(x,t)\) in (4.9). Applying the upper bound estimate (3.14) of the heat kernel \(H(x,y,t)\) and letting \(\epsilon=1\), \(R=2d(x,y)+1\), we have \[u_{\epsilon}(x,t)\leq\frac{c}{\operatorname{Vol}(B_{\sqrt{t}}(x))}\int_{M} \left[e^{\tilde{c}d^{2}(x,y)-\frac{d^{2}(x,y)}{5t}}\left(1+\frac{d(x,y)}{ \sqrt{t}}\right)^{\frac{n}{2}}\right]u(y,\epsilon)dy.\] For sufficiently small values of \(t>0\), the right-hand side can be estimated by \[\frac{C}{\operatorname{Vol}(B_{\sqrt{t}}(x))}\int_{M}u(y,\epsilon)dy.\] Hence as \(\epsilon\to 0\), \(u_{\epsilon}(x,t)\to 0\) since \(\int_{M}u(y,\epsilon)dy\to 0\). However, by the semigroup property, \(u_{\epsilon}(x,t)\to 0\) for all \(x\in M\) and \(t>0\). Combining with (4.11) we get \(u(x,t)\leq 0\). Therefore \(u(x,t)\equiv 0\). To prove that any \(L^{1}\)-solution of the heat equation is uniquely determined by its initial data in \(L^{1}\), we suppose that \(u_{1}(x,t),\ u_{2}(x,t)\) are two \(L^{1}\)-integrable solutions of the heat equation \((\Delta-\partial_{t})u=0\) with the initial data \(u(x,0)\in L^{1}\). Applying this above result to \(v(x,t)=|u_{1}(x,t)-u_{2}(x,t)|\), we see that \(v(x,t)\equiv 0\). The proof of Theorem 1.7 is finished. An \(L^{\infty}\) Liouville Property for harmonic functions with polynomial growth In this section, we take a detour to prove an \(L^{\infty}\) Liouville theorem for harmonic functions with polynomial growth when the Bakry-Emery Ricci curvature is non-negative and the potential function is bounded by using a similar idea in [6]. This is not the main result of the paper, but can be compared with the results in the last section and may be of independent interest. First, we give a gradient estimate in the integral sense. **Lemma 5.1**.: _Let \((M^{n},g)\) be a complete non-compact Riemannian manifold with \(\operatorname{Ric}_{f}\geq 0\) and \(|f|\leq L\) on \(M\) for some constant \(L\geq 0\). For any point \(o\in M\) and \(R>0\), let \(u\) be a harmonic function on \(B_{2R}(o)\), and \(diam\ \partial B_{r}(o):=\sup\limits_{x,y\in\partial B_{r}(o)}d(x,y)\leq \epsilon r\) with \(\epsilon\in(0,\frac{1}{12})\) for all \(r\in[R,2R]\). 
Then we have_ \[\int_{B_{R}(o)}|\nabla u|^{2}\leq\delta^{\frac{1}{\epsilon}-6}\int_{B_{2R}(o) }|\nabla u|^{2}, \tag{5.1}\] _where \(\delta=\left(\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{2}L}}\right)^{\frac{1}{6}}\) and \(c_{1}\), \(c_{2}\) are constants depending only on \(n\) from Lemma 2.2._ Proof.: Choosing \(r\in[R+3\epsilon R,2R-3\epsilon R]\), where \(\epsilon\in(0,\frac{1}{12})\), the diameter hypothesis in the Lemma implies that there exists some \(x\in\partial B_{r}(o)\) such that \[B_{r+\epsilon R}(o)\backslash B_{r}(o)\subset B_{\epsilon R+\epsilon r}(x).\] Let \(\eta(y)\) be a cut-off function with support in \(B_{r+\epsilon R}(o)\) such that \[\eta(y)=\begin{cases}1&y\in B_{r}(o)\,\\ \frac{r+\epsilon R-d(y,o)}{\epsilon R}&y\in B_{r+\epsilon R}(o)\backslash B_{ r}(o)\,\\ 0&y\in M\backslash B_{r+\epsilon R}(o)\.\end{cases}\] We easily observe that \[\int_{M}|\nabla(\eta(u-c))|^{2}=\int_{M}\eta^{2}|\nabla(u-c)|^{2}+2\eta(u-c) \left\langle\nabla\eta,\nabla(u-c)\right\rangle+(u-c)^{2}|\nabla\eta|^{2},\] where \(c\) is a real number to be chosen later. Notice that \[\int_{M}\eta^{2}|\nabla(u-c)|^{2}+2\eta(u-c)\left\langle\nabla \eta,\nabla(u-c)\right\rangle =\int_{M}\left\langle\nabla((u-c)\eta^{2}),\nabla(u-c)\right\rangle\] \[=-\int_{M}(u-c)\eta^{2}\Delta(u-c)=0.\] Therefore, \[\int_{M}|\nabla(\eta(u-c))|^{2}=\int_{B_{r+\epsilon R}(o)}(u-c)^{2}|\nabla \eta|^{2}.\] According to the definition of \(\eta(y)\), by the above equality, we get that \[\int_{B_{r}(o)}|\nabla u|^{2} \leq\int_{B_{r+\epsilon R}(o)}|\nabla(\eta(u-c))|^{2}\] \[=\int_{B_{r+\epsilon R}(o)}(u-c)^{2}|\nabla\eta|^{2}\] \[\leq\frac{1}{\epsilon^{2}R^{2}}\int_{B_{r+\epsilon R}(o)\setminus B _{r}(o)}(u-c)^{2}\] \[\leq\frac{1}{\epsilon^{2}R^{2}}\int_{B_{\epsilon R+\epsilon r}(x) }(u-c)^{2}.\] For the right-hand side of the above inequality, if we choose \(c=u_{B_{\epsilon R+\epsilon r}(x)}\), then using the Poincare inequality in Lemma 2.2 and the fact that \(r+R\leq 3R\), we have \[\int_{B_{r}(o)}|\nabla u|^{2}\leq 9c_{1}e^{c_{2}L}\int_{B_{3\epsilon R}(x)}| \nabla u|^{2}.\] This implies that \[\int_{B_{r-3\epsilon R}(o)}|\nabla u|^{2}\leq\int_{B_{r}(o)}|\nabla u|^{2} \leq 9c_{1}e^{c_{2}L}\int_{B_{r+3\epsilon R}(o)\setminus B_{r-3\epsilon R}(o)}| \nabla u|^{2},\] that is, \[\int_{B_{r-3\epsilon R}(o)}|\nabla u|^{2}\leq\frac{9c_{1}e^{c_{2}L}}{1+9c_{1} e^{c_{2}L}}\int_{B_{r+3\epsilon R}(o)}|\nabla u|^{2},\ r\in[R+3\epsilon R,2R-3\epsilon R].\] Set \(r=R+3\epsilon R\), then \[\int_{B_{R}(o)}|\nabla u|^{2}\leq\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{2}L}} \int_{B_{R+6\epsilon R}(o)}|\nabla u|^{2}.\] Iterating this inequality \(N\) times, we finally get \[\int_{B_{R}(o)}|\nabla u|^{2}\leq\left(\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{ 2}L}}\right)^{N}\int_{B_{2R}(o)}|\nabla u|^{2}\] provided that \(N6\epsilon R\leq R\). Thus, we can choose \(N=\left[\frac{1}{6\epsilon}\right]\geq\frac{1}{6\epsilon}-1>0\) for \(\epsilon\in\left(0,\frac{1}{12}\right)\). Then we have \[\int_{B_{R}(o)}|\nabla u|^{2}\leq\left(\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{ 2}L}}\right)^{\frac{1}{6\epsilon}-1}\int_{B_{2R}(o)}|\nabla u|^{2}.\] The desired result follows by choosing \(\delta=\left(\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{2}L}}\right)^{\frac{1}{6}}\). Now we are ready to prove the Liouville property by using Lemma 5.1. **Theorem 5.2**.: _Let \((M^{n},g)\) be a complete noncompact Riemannian manifold with \(\mathrm{Ric}_{f}\geq 0\) and \(|f|\leq L\). 
For a base point \(o\in M\), if the diameter of the geodesic sphere \(\partial B_{R}(o)\) has a sublinear growth, i.e., \(diam\ \partial B_{R}(o)=o(R),\ R\to\infty\), then any harmonic function with polynomial growth is constant._ Proof.: Let \(u:M\to\mathbb{R}\) be a harmonic function with polynomial growth of order \(\nu\), namely \[|u(x)|\leq c(1+d(x,o))^{\nu}.\] For \(R>>1\), we define \[I_{R}:=\int_{B_{R}(o)}|\nabla u|^{2}\] and \[\epsilon(r):=\sup_{t\geq r}\frac{\rho(t)}{t},\] where \(\rho(t):=\sup_{x,y\in\partial B_{t}(o)}d(x,y)\). To estimate \(I_{R}\), we introduce a cut-off function \(\xi(x)\) such that \[\xi(x)=\begin{cases}1&x\in B_{R}(o)\,\\ \frac{2R-d(x,o)}{R}&x\in B_{2R}(o)\backslash B_{R}(o)\,\\ 0&x\in M\backslash B_{2R}(o)\.\end{cases}\] Then we have \[I_{R} \leq\int_{B_{2R}(o)}|\nabla(\xi u)|^{2}=\int_{B_{2R}(o)}|u|^{2}| \nabla\xi|^{2}+\xi^{2}|\nabla u|^{2}+2u\xi\left\langle\nabla u,\nabla\xi\right\rangle\] \[=\int_{B_{2R}(o)}|u|^{2}|\nabla\xi|^{2}\leq\int_{B_{2R}(o)}c(1+2 R)^{2\nu}\frac{1}{R^{2}}\leq\int_{B_{2R}(o)}c(3R)^{2\nu}\frac{1}{R^{2}}\] \[=cR^{2\nu-2}\operatorname{Vol}(B_{2R}(o)).\] The volume comparison Theorem 2.1 shows that \[\frac{\operatorname{Vol}(B_{2R}(o))}{\operatorname{Vol}(B_{1}(o))}\leq e^{6L} (2R)^{n}.\] Hence \[I_{R}\leq C(n,L,\operatorname{Vol}(B_{1}(o)))R^{2\nu+n-2}. \tag{5.2}\] On the other hand, if we iterate the inequality (5.1) proved in Lemma 5.1\(l\) times, we can show that, for all sufficiently large \(R\) such that \(\epsilon(R)<\frac{1}{12}\), \[I_{R}\leq\delta^{-6l+\sum_{j=0}^{l-1}\frac{1}{\epsilon(2^{j}R)}I_{2^{l}R}},\] where \(\delta=\left(\frac{9c_{1}e^{c_{2}L}}{1+9c_{1}e^{c_{2}L}}\right)^{\frac{1}{6}}\). Applying (5.2) to the right-hand side of the above inequality yields \[I_{R}\leq C(n,L,\operatorname{Vol}(B_{1}(o)))e^{\left\lceil\left(\frac{\sum_{ j=0}^{l-1}\frac{1}{\epsilon(2^{j}R)}}{l}-6\right)\ln\delta+(2\nu+n-2)\left(\ln 2 +\frac{\ln R}{l}\right)\right\rceil}. \tag{5.3}\] For all sufficiently large \(R\), \[\lim_{l\to\infty}\frac{\sum_{j=0}^{l-1}\frac{1}{\epsilon(2^{j}R)}}{l}=\lim_{j \to\infty}\frac{1}{\epsilon(2^{j}R)}=+\infty.\] Meanwhile, we know that \(0<\delta<1\) for any \(R\). Therefore, letting \(l\to\infty\) in (5.3), we conclude that \(I_{R}=0\) for all sufficiently large \(R\). Therefore \(u\) is constant. Eigenvalue estimates In this section, we derive lower bound estimations of eigenvalues of the Laplace-Beltrami operator \(\Delta\) on closed Riemannian manifolds with Bakry-Emery Ricci curvature bounded below and bounded the potential function. Denote the eigenvalues of \(\Delta\) by \(0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{k}\leq\cdots\). First, we bound \(\lambda_{1}\) from below. According to [7], it suffices to bound Cheeger's isoperimetric constant from below. Let us recall the definitions of isoperimetric constants. We adapt the notations and definitions in [20]. **Definition 6.1**.: _Let \((M^{n},g)\) be a compact Riemannian manifold (with or without boundary). 
For \(\alpha>0\), The Neumann \(\alpha\)-isoperimetric constant of M is defined by_ \[IN_{\alpha}(M)=\inf_{\begin{subarray}{c}\partial\Omega_{1}=H=\partial\Omega_{ 2}\\ M=\Omega_{1}\cup H\cup\Omega_{2}\end{subarray}}\frac{\operatorname{Vol}(H)}{ \min\{\operatorname{Vol}(\Omega_{1}),\operatorname{Vol}(\Omega_{2})\}^{ \frac{1}{\alpha}}},\] _where the infimum is taken over all hypersurfaces \(H\) dividing \(M\) into two parts, denoted by \(\Omega_{1}\) and \(\Omega_{2}\)._ In [7], Cheeger showed that \[\lambda_{1}\geq\frac{IN_{1}(M)^{2}}{4} \tag{6.1}\] on closed manifolds. Thus, one can get a lower bound of \(\lambda_{1}\) by bounding \(IN_{1}(M)\) from below. We will accomplish this goal by following the method of Dai-Wei-Zhang in [9] and using Theorem 2.1. **Theorem 6.2**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) and \(|f|\leq L\) on \(M\), where \(K,\ L\) are non-negative constants. Let \(\Omega\) be a bounded convex domain in \(M\). Then for \(1\leq\alpha\leq\frac{n}{n-1},\) we have_ \[IN_{\alpha}(\Omega)\geq d^{-1}2^{-2n-1}5^{-n}e^{-(18-\frac{6}{\alpha})L-(17 \frac{1}{6}-\frac{1}{6\alpha})Kd^{2}}\operatorname{Vol}(\Omega)^{1-\frac{1}{ \alpha}}, \tag{6.2}\] _and for \(0<\alpha<1,\) we have_ \[IN_{\alpha}(\Omega)\geq d^{-1}2^{-2n-1}5^{-n}e^{-12L-17Kd^{2}} \operatorname{Vol}(\Omega)^{1-\frac{1}{\alpha}}, \tag{6.3}\] _where \(d=diam(\Omega)\), the diameter of \(\Omega\). In particular, if \(M\) is closed, then_ \[IN_{1}(M)\geq D^{-1}2^{-2n-1}5^{-n}e^{-12L-17KD^{2}}, \tag{6.4}\] _where \(D\) is an upper bound of the diameter of \(M\)._ Before starting the proof of Theorem 6.2, we need the following lemma by Gromov. **Lemma 6.3** ([15]).: _Let \((M^{n},g)\) be a complete Riemannian manifold. Let \(\Omega\) be a convex domain in \(M\) and \(H\) a hypersurface dividing \(\Omega\) into two parts \(\Omega_{1},\Omega_{2}\). For any Borel subsets \(W_{i}\subset\Omega_{i},i=1,2\), there exists \(x_{1}\) in one of \(W_{i}\), say \(W_{1}\), and a subset \(W\) in the other part \(W_{2}\), such that_ \[\operatorname{Vol}(W)\geq\frac{1}{2}\operatorname{Vol}(W_{2}), \tag{6.5}\] _and for any \(x_{2}\in W\), there is a unique minimal geodesic between \(x_{1}\) and \(x_{2}\) which intersects \(H\) at some \(z\) with_ \[d(x_{1},z)\geq d(x_{2},z), \tag{6.6}\] _where \(d(x_{1},z)\) denotes the distance between \(x_{1}\) and \(z\)._ Combining Theorem 2.1 and Lemma 6.3, we get **Lemma 6.4**.: _Let \(H,W\) and \(x_{1}\) be as in Lemma 6.3. Then_ \[\operatorname{Vol}(W)\leq D_{1}2^{n-1}e^{6L+\frac{K}{2}D_{1}^{2}}\operatorname {Vol}(H^{{}^{\prime}}), \tag{6.7}\] _where \(D_{1}=\sup_{x\in W}d(x_{1},x)\), and \(H^{{}^{\prime}}\) is the set of intersection points with \(H\) of geodesics \(\gamma_{x_{1},x}\) for all \(x\in W\)._ Proof.: Let \(\Gamma\subset S_{x_{1}}(M)\) be the subset of unit vectors \(\theta\) such that \(\gamma_{\theta}=\gamma_{x_{1},x_{2}}\) for some \(x_{2}\in W\). Set polar coordinates at \(x_{1}\). The volume element of the metric \(g\) is written as \(dv=J(t,\theta,x_{1})dtd\theta\) in the polar coordinates \((\theta,t)\in S_{x_{1}}(M)\times\mathbb{R}^{+}\). For any \(\theta\in\Gamma\), let \(r(\theta)\) be the radius such that \(exp_{x_{1}}(r(\theta)\theta)\in H\). Then form Lemma 6.3\(W\subset\{exp_{x_{1}}(r\theta)|\theta\in\Gamma,r(\theta)\leq r\leq 2r( \theta)\}\). 
We conclude \[\operatorname{Vol}(W)\leq\int_{\Gamma}\int_{r(\theta)}^{2r(\theta)}J(t,\theta,x_{1})dtd\theta.\] For \(r(\theta)\leq t\leq 2r(\theta)\leq 2D_{1}\), by Theorem 2.1, it implies \[\frac{J(t,\theta,x_{1})}{t^{n-1}}\leq e^{\frac{K}{6}(t^{2}-r(\theta)^{2})+6L} \frac{J(r(\theta),\theta,x_{1})}{r(\theta)^{n-1}},\] so \[J(t,\theta,x_{1})\leq e^{\frac{K}{2}D_{1}^{2}+6L}2^{n-1}J(r(\theta),\theta,x_{ 1}).\] It gives \[\operatorname{Vol}(W)\leq e^{\frac{K}{2}D_{1}^{2}+6L}2^{n-1}\int_{\Gamma}r( \theta)J(r(\theta),\theta,x_{1})d\theta\leq D_{1}2^{n-1}e^{\frac{K}{2}D_{1}^ {2}+6L}\operatorname{Vol}(H^{{}^{\prime}}).\] Form Lemmas 6.3 and 6.4, we immediately have **Corollary 6.5**.: _Let \(H\) be any hypersurface dividing a convex domain \(\Omega\) into two parts \(\Omega_{1},\Omega_{2}\). For any ball \(B=B_{r}(x)\) in \(M\), we have_ \[\min(\operatorname{Vol}(B\cap\Omega_{1}),\operatorname{Vol}(B\cap\Omega_{2}) )\leq 2^{n+1}re^{\frac{K}{2}d^{2}+6L}\operatorname{Vol}(H\cap(B_{2r}(x))), \tag{6.8}\] _where \(d=diam(\Omega)\). In particular, if \(B\cap\Omega\) is divided equally by \(H\), then_ \[\operatorname{Vol}(B\cap\Omega)\leq 2^{n+2}re^{\frac{K}{2}d^{2}+6L}\operatorname{ Vol}(H\cap(B_{2r}(x))). \tag{6.9}\] Proof.: Put \(W_{i}=B\cap\Omega_{i}\) in the above lemma and use \(D_{1}\leq 2r\) and \(H^{{}^{\prime}}\subset H\cap B_{2r}(x)\). Now we are ready to prove Theorem 6.2. Proof of Theorem 6.2.: Let \(H\) be any hypersurface dividing \(\Omega\) into two parts, \(\Omega_{1}\) and \(\Omega_{2}\). We may assume that \(\operatorname{Vol}(\Omega_{1})\leq\operatorname{Vol}(\Omega_{2})\). For any \(x\in\Omega_{1}\), let \(r_{x}\) be the smallest radius such that \[\operatorname{Vol}(B_{r_{x}}(x)\cap\Omega_{1})=\operatorname{Vol}(B_{r_{x}}(x )\cap\Omega_{2})=\frac{1}{2}\operatorname{Vol}(B_{r_{x}}(x)\cap\Omega).\] Let \(d=diam(\Omega)\). By (6.9) we have, \[\operatorname{Vol}(B_{r_{x}}(x)\cap\Omega)\leq 2^{n+2}r_{x}e^{\frac{K}{2}d^{2}+6L} \operatorname{Vol}(H\cap(B_{2r_{x}}(x))). \tag{6.10}\] The domain \(\Omega_{1}\) has a covering \[\Omega_{1}\subset\cup_{x\in\Omega_{1}}B_{2r_{x}}(x).\] By Vitali Covering Lemma, we can choose a countable family of disjoint balls \(B_{i}=B_{2r_{x_{i}}}(x_{i})\) such that \(\cup_{i}B_{10r_{x_{i}}}(x_{i})\supset\Omega_{1}.\) So \[\operatorname{Vol}(\Omega_{1})\leq\sum_{i}\operatorname{Vol}(B_{10r_{x_{i}}}(x_ {i})\cap\Omega_{1}).\] Applying the volume comparison Theorem 2.1 in \(\Omega_{1}\) gives \[\frac{\operatorname{Vol}(B_{10r_{x_{i}}}(x_{i})\cap\Omega_{1})}{(10r_{x_{i}})^ {n}}\leq e^{\frac{33}{2}Kr_{x_{i}}^{2}+6L}\frac{\operatorname{Vol}(B_{r_{x_{i}} }(x_{i})\cap\Omega_{1})}{(r_{x_{i}})^{n}}.\] On the other hand, since \(\operatorname{Vol}(\Omega_{1})\leq\operatorname{Vol}(\Omega_{2})\), we have \(r_{x}\leq d\) for any \(x\in\Omega_{1}\). Thus, \[\operatorname{Vol}(B_{10r_{x_{i}}}(x_{i})\cap\Omega_{1}) \leq 10^{n}e^{\frac{33}{2}Kd^{2}+6L}\operatorname{Vol}(B_{r_{x_{i}} }(x_{i})\cap\Omega_{1})\] \[=2^{-1}10^{n}e^{\frac{33}{2}Kd^{2}+6L}\operatorname{Vol}(B_{r_{x_ {i}}}(x_{i})\cap\Omega).\] Therefore, \[\operatorname{Vol}(\Omega_{1})\leq 2^{-1}10^{n}e^{\frac{33}{2}Kd^{2}+6L}\sum_{i} \operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega). \tag{6.11}\] Moreover, since the balls \(B_{i}\) are disjoint, (6.10) gives \[\operatorname{Vol}(H)\geq\sum_{i}\operatorname{Vol}(B_{i}\cap H)\geq 2^{-n-2}e^{- \frac{K}{2}d^{2}-6L}\sum_{i}r_{x_{i}}^{-1}\operatorname{Vol}(B_{r_{x_{i}}}(x_{ i})\cap\Omega). 
\tag{6.12}\] Firstly, for \(1\leq\alpha\leq\frac{n}{n-1}\), it follows from (6.11) and (6.12) that \[\frac{\operatorname{Vol}(H)}{\operatorname{Vol}(\Omega_{1})^{ \frac{1}{\alpha}}} \geq\frac{2^{-n-2}e^{-\frac{K}{2}d^{2}-6L}}{(2^{-1}10^{n}e^{\frac{ 33}{2}Kd^{2}+6L})^{\frac{1}{\alpha}}}\frac{\sum_{i}r_{x_{i}}^{-1}\operatorname {Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)}{\left(\sum_{i}\operatorname{Vol}(B_{r_{ x_{i}}}(x_{i})\cap\Omega)\right)^{\frac{1}{\alpha}}}\] \[\geq\frac{2^{-n-2}e^{-\frac{K}{2}d^{2}-6L}}{2^{-1}10^{n}e^{\frac{ 33}{2}Kd^{2}+6L}}\frac{\sum_{i}r_{x_{i}}^{-1}\operatorname{Vol}(B_{r_{x_{i}}}(x _{i})\cap\Omega)}{\sum_{i}\operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)^{ \frac{1}{\alpha}}}\] \[\geq 2^{-2n-1}5^{-n}e^{-12L-17Kd^{2}}\inf_{i}\frac{r_{x_{i}}^{-1} \operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)}{\operatorname{Vol}(B_{r_{x_ {i}}}(x_{i})\cap\Omega)^{\frac{1}{\alpha}}}\] \[=2^{-2n-1}5^{-n}e^{-12L-17Kd^{2}}\inf_{i}r_{x_{i}}^{-1} \operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)^{1-\frac{1}{\alpha}}.\] We apply the volume comparison Theorem 2.1 in \(\Omega\), then \[\frac{\operatorname{Vol}(B_{d}(x_{i})\cap\Omega)}{d^{n}}\leq e^{\frac{K}{6}(d^ {2}-r_{x_{i}}^{2})+6L}\frac{\operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega) }{r_{x_{i}}^{n}}.\] By \(1-\frac{1}{\alpha}\geq 0\), and \(n(1-\frac{1}{\alpha})-1\leq 0\), we can derive \[\inf_{i}r_{x_{i}}^{-1}\operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)^{1- \frac{1}{\alpha}}\geq d^{-1}e^{-(\frac{K}{6}d^{2}+6L)(1-\frac{1}{\alpha})} \operatorname{Vol}(\Omega)^{1-\frac{1}{\alpha}}.\] Taking infimum over \(H\), we conclude the following \[IN_{\alpha}(\Omega)\geq d^{-1}2^{-2n-1}5^{-n}e^{-(18-\frac{6}{\alpha})L-(17\frac{ 1}{6}-\frac{1}{6\alpha})Kd^{2}}\operatorname{Vol}(\Omega)^{1-\frac{1}{\alpha}}.\] On the other hand, for \(0<\alpha<1\), we have \[\frac{\operatorname{Vol}(H)}{\operatorname{Vol}(\Omega_{1})^{ \frac{1}{\alpha}}} =\frac{\operatorname{Vol}(H)}{\operatorname{Vol}(\Omega_{1}) \operatorname{Vol}(\Omega_{1})^{\frac{1}{\alpha}-1}}\geq\frac{\operatorname{ Vol}(H)}{\operatorname{Vol}(\Omega_{1})\operatorname{Vol}(\Omega)^{\frac{1}{ \alpha}-1}}\] \[\geq\frac{2^{-n-2}e^{-\frac{K}{2}d^{2}-6L}}{2^{-1}10^{n}e^{\frac{ 33}{2}Kd^{2}+6L}}\frac{\sum_{i}r_{x_{i}}^{-1}\operatorname{Vol}(B_{r_{x_{i}}}( x_{i})\cap\Omega)}{\sum_{i}\operatorname{Vol}(B_{r_{x_{i}}}(x_{i})\cap\Omega)} \operatorname{Vol}(\Omega)^{1-\frac{1}{\alpha}}\] \[\geq d^{-1}2^{-2n-1}5^{-n}e^{-12L-17Kd^{2}}\operatorname{Vol}( \Omega)^{1-\frac{1}{\alpha}}.\] Taking infimum over \(H\) finishes the proof. From (6.1) and Theorem 6.2, we immediately have the estimate of the first eigenvalue. **Theorem 6.6**.: _Let \((M^{n},g)\) be a closed Riemannian manifold with \(\operatorname{Ric}_{f}\geq-Kg\) and \(|f|\leq L\) on \(M\), where \(K,\ L\) are non-negative constants. Then_ \[\lambda_{1}\geq D^{-2}2^{-4n-4}5^{-2n}e^{-24L-34KD^{2}}:=\alpha_{0}, \tag{6.13}\] _where \(D\) is an upper bound of the diameter of \(M\)._ Next, we derive lower bounds for \(\lambda_{k}\ (k\geq 2)\) by using the upper bound estimate of the heat kernel in Theorem 1.1 and an argument of Li-Yau [22]. Proof of Theorem 1.8.: In Theorem 1.1, letting \(\epsilon=1\), we have \[H(x,x,t)\leq\frac{\overline{C}_{1}(n)e^{\overline{C}_{2}(n)(Kt+L)}}{ \operatorname{Vol}(B_{\sqrt{t}}(x))}\] for all \(x\in M\) and \(t>0\). 
Note that the heat kernel can be written as \[H(x,y,t)=\sum_{i=0}^{\infty}e^{-\lambda_{i}t}\phi_{i}(x)\phi_{i}(y),\] where \(\phi_{i}\) is the eigenfunction of \(\Delta\) corresponding to \(\lambda_{i}\) and \(\{\phi_{i}\}_{i=0}^{\infty}\) form an orthonormal basis with respect to the \(L^{2}\)-norm. So we have \[\int_{M}H(x,x,t)dx=\sum_{i=0}^{\infty}e^{-\lambda_{i}t}\leq\overline{C}_{1}(n) e^{\overline{C}_{2}(n)(Kt+L)}\int_{M}\operatorname{Vol}(B_{\sqrt{t}}(x))^{-1}dx.\] When \(\sqrt{t}\geq D\), \(\operatorname{Vol}(B_{\sqrt{t}}(x))=\operatorname{Vol}(M)\). On the other hand, when \(\sqrt{t}\leq D\), since \(|f|\leq L\) on \(M\), by the volume comparison theorem in (2.4), we get \[\frac{\operatorname{Vol}(B_{D}(x))}{\operatorname{Vol}(B_{\sqrt{t}}(x))}\leq e ^{\left(\frac{K}{6}D^{2}+6L\right)}\left(\frac{D}{\sqrt{t}}\right)^{n}\] for all \(x\in M\). Then we conclude that \[\sum_{i=0}^{\infty}e^{-\lambda_{i}t} \leq\overline{C}_{1}(n)e^{\overline{C}_{2}(n)(Kt+L)}\begin{cases}e^ {\left(\frac{K}{6}D^{2}+6L\right)}\left(\frac{D}{\sqrt{t}}\right)^{n}&t\leq D^ {2}\,\\ 1&t\geq D^{2},\end{cases}\] \[\leq\begin{cases}\overline{C}_{3}(n)e^{\overline{C}_{4}(n)(KD^{2}+L )}\left(\frac{D}{\sqrt{t}}\right)^{n}&t\leq D^{2}\,\ :=q(t)\\ \overline{C}_{3}(n)e^{\overline{C}_{4}(n)(Kt+L)}&t\geq D^{2}.\end{cases}\] Fixing \(k\geq 1\), and taking the first \((k+1)\) terms, we get \[(k+1)e^{-\lambda_{k}t}\leq q(t),\] that is, \(k+1\leq q(t)e^{\lambda_{k}t}\), for any \(t>0\). It is easy to see that \(q(t)e^{\lambda_{k}t}\) is continuous for \(t>0\) and \[\inf_{t>0}\left(\frac{D}{\sqrt{t}}\right)^{n}e^{\lambda_{k}t}=\left(\frac{D}{ \sqrt{t}}\right)^{n}e^{\lambda_{k}t}\bigg{|}_{t=\frac{n}{2\lambda_{k}}}=\left( \frac{2e}{n}\right)^{\frac{n}{2}}\cdot\left(D\sqrt{\lambda_{k}}\right)^{n}.\] When \(\frac{n}{2\lambda_{k}}\leq D^{2}\), i.e., \(\lambda_{k}\geq\frac{n}{2D^{2}}\), we have \[\inf_{t>0}q(t)e^{\lambda_{k}t}=q(t)e^{\lambda_{k}t}\bigg{|}_{t=\frac{n}{2 \lambda_{k}}}=\overline{C}_{3}(n)e^{\overline{C}_{4}(n)(KD^{2}+L)}\left(\frac {2e}{n}\right)^{\frac{n}{2}}\cdot\left(D\sqrt{\lambda_{k}}\right)^{n}.\] Hence \[\overline{C}_{3}(n)e^{\overline{C}_{4}(n)(KD^{2}+L)}\left(\frac{2e}{n}\right) ^{\frac{n}{2}}\cdot\left(D\sqrt{\lambda_{k}}\right)^{n}\geq k+1,\] that is, \[\lambda_{k}\geq\frac{\overline{C}_{5}(n)(k+1)^{\frac{2}{n}}}{D^{2}}e^{- \overline{C}_{6}(n)(KD^{2}+L)}. \tag{6.14}\] When \(\frac{n}{2\lambda_{k}}\geq D^{2}\), i.e., \(\lambda_{k}D^{2}\leq\frac{n}{2}\), we have \[\inf_{t>0}q(t)e^{\lambda_{k}t}=q(t)e^{\lambda_{k}t}\bigg{|}_{t=D^{2}}= \overline{C}_{3}(n)e^{\overline{C}_{4}(n)(KD^{2}+L)}e^{\lambda_{k}D^{2}}.\] Hence \[\alpha_{1}:=\overline{C}_{3}(n)e^{\overline{C}_{4}(n)(KD^{2}+L)}e^{\frac{n}{2} }\geq k+1, \tag{6.15}\] which shows that there are only finitely many \(\lambda_{k}\) in this case (the total number \(k\) only depends on \(n,K,L,D\)). Combining (6.15) and the lower bound of \(\lambda_{1}\) in (6.13), for these finitely many \(\lambda_{k}\)'s, one can choose \(\overline{C}_{7}(n)\) and \(\overline{C}_{8}(n)\) such that \[\frac{\lambda_{k}D^{2}}{(k+1)^{\frac{2}{n}}}\geq\frac{\lambda_{1}D^{2}}{(k+1) ^{\frac{2}{n}}}\geq\frac{\alpha_{0}D^{2}}{\alpha_{1}^{\frac{2}{n}}}\geq \overline{C}_{7}(n)e^{-\overline{C}_{8}(n)(KD^{2}+L)}. \tag{6.16}\] Combining (6.14) and (6.16) finishes the proof. Finally, we study the bottom spectrum of the Beltrami Laplacian \(\Delta\) on complete Riemannian manifolds with Bakry-Emery Ricci curvature bounded below. 
Recall the definition of \(\mu_{1}(\Delta):=\inf Spec(\Delta)\), the bottom spectrum of \(\Delta\). By the variational principle, we have \[\mu_{1}(\Delta)=\inf_{\phi\in T\backslash\{0\}}\frac{\int_{M}|\nabla\phi|^{2}} {\int_{M}\phi^{2}}, \tag{6.17}\] where \(T\) is any class of functions such that \[C_{0}^{\infty}(M)\subset T\subset W_{0}^{1}(M).\] Here \(W_{0}^{1}(M)\) is the closure of \(C_{0}^{\infty}(M)\) in \(W^{1}(M)\), the \(L^{2}\) Sobolev space. Following Munteanu and Wang's method [31], the volume growth estimate and the variational principle immediately imply the upper bound of \(\mu_{1}(\Delta)\). However, it is required that the volume should not exceed the exponential linear growth, so we cannot use Theorem 2.1 to obtain the growth estimate of the volume, where the calculated volume is the exponential square growth. Therefore we adopt the method of Lemma 2.1 in [31] to obtain the volume growth estimate. **Lemma 6.7**.: _Let \((M^{n},g)\) be a complete Riemannian manifold with \(\operatorname{Ric}_{f}\geq-(n-1)Kg\) for some constant \(K\geq 0\). Assume that there exist non-negative constants \(\tilde{a}\) and \(\tilde{b}\) such that_ \[|f|(x)\leq\tilde{a}r(x,o)+\tilde{b}\ for\ all\ x\in M.\] _Then for any \(\epsilon>0\), there exists a constant \(\tilde{A}(\epsilon)>0\) such that the volume upper bound_ \[\operatorname{Vol}(B_{R}(o))\leq\tilde{A}(\epsilon)e^{(2\tilde{a}+(n-1)( \sqrt{K}+\epsilon))R}\] _holds for all \(R>0\). Here, \(o\) is a fixed point on \(M\), and \(r(x,o)\) denotes the distance from \(x\) to \(o\)._ Proof.: By the Bochner formula, we have for \(r(x)=r(x,o)\) \[0=\frac{1}{2}\Delta|\nabla r|^{2} =|\operatorname{Hess}r|^{2}+\langle\nabla\Delta r,\nabla r\rangle+ \operatorname{Ric}(\partial r,\partial r)\] \[\geq\frac{(\Delta r)^{2}}{n-1}+\partial_{r}(\Delta r)+\operatorname {Ric}(\partial_{r},\partial_{r})\] \[\geq\frac{(\Delta r)^{2}}{n-1}+\partial_{r}(\Delta r)-f^{\prime \prime}(r)-(n-1)K.\] Integrating this inequality from \(1\) to \(r\), we get \[\frac{1}{n-1}\int_{1}^{r}(\Delta t)^{2}dt+\Delta r-f^{\prime}(r)\leq(n-1)Kr+ \tilde{b}_{0}\] for some constant \(\tilde{b}_{0}>0\) independent of \(r\). Then for any \(r\geq 1\), \[\Delta_{f}r+\frac{1}{n-1}\int_{1}^{r}\left(\Delta_{f}t+f^{\prime}(t)\right)^{ 2}dt\leq(n-1)Kr+\tilde{b}_{0}, \tag{6.18}\] The Cauchy-Schwarz inequality implies that \[\int_{1}^{r}\left(\Delta_{f}t+f^{\prime}(t)\right)^{2}dt\geq\frac{1}{r-1} \left(\int_{1}^{r}\left(\Delta_{f}t+f^{\prime}(t)\right)dt\right)^{2}.\] Therefore, from (6.18) we obtain \[\Delta_{f}r+\frac{1}{(n-1)r}\left(f(r)-f(1)+\int_{1}^{r}\left(\Delta_{f}t \right)dt\right)^{2}\leq(n-1)Kr+\tilde{b}_{0}. \tag{6.19}\] We now claim that for any \(r\geq 1\) and any \(\epsilon>0\), \[\int_{1}^{r}\left(\Delta_{f}t\right)dt\leq(\tilde{a}+(n-1)(\sqrt{K}+\epsilon))r +\tilde{a}+2\tilde{b}+\frac{\tilde{b}_{0}}{\sqrt{K}+\epsilon}. \tag{6.20}\] To prove this, define \[v(r):=(\tilde{a}+(n-1)(\sqrt{K}+\epsilon))r+\tilde{a}+2\tilde{b}+\frac{\tilde{ b}_{0}}{\sqrt{K}+\epsilon}-\int_{1}^{r}\left(\Delta_{f}t\right)dt.\] We show instead that \(v(r)>0\) for all \(r\geq 1\). Clearly, \(v(1)>0\). Suppose that \(v\) does not remain positive for all \(r\geq 1\) and let \(R>1\) be the first number such that \(v(R)=0\). 
Then \(v^{\prime}(R)\leq 0\) and \[\int_{1}^{R}\left(\Delta_{f}t\right)dt=(\tilde{a}+(n-1)(\sqrt{K}+\epsilon))R+ \tilde{a}+2\tilde{b}+\frac{\tilde{b}_{0}}{\sqrt{K}+\epsilon}.\] In other words, \[\frac{1}{(n-1)R}\left(f(R)-f(1)+\int_{1}^{R}\left(\Delta_{f}t \right)dt\right)^{2}\] \[=\frac{1}{(n-1)R}\left(f(R)-f(1)+(\tilde{a}+(n-1)(\sqrt{K}+ \epsilon))R+\tilde{a}+2\tilde{b}+\frac{\tilde{b}_{0}}{\sqrt{K}+\epsilon} \right)^{2}\] \[\geq\frac{1}{(n-1)R}\left((n-1)(\sqrt{K}+\epsilon)R+\frac{\tilde{ b}_{0}}{\sqrt{K}+\epsilon}\right)^{2}\] \[\geq(n-1)(\sqrt{K}+\epsilon)^{2}R+2\tilde{b}_{0}.\] Plugging this into (6.19), we conclude that \(\Delta_{f}R\leq-\tilde{b}_{0}<0\), so \(v^{\prime}(R)=\tilde{a}+(n-1)(\sqrt{K}+\epsilon)-(\Delta_{f}R)>0\), which is a contradiction. We have thus proved (6.20) is true for any \(r\geq 1\) and \(\epsilon>0\), so \[\ln J(r)-\ln J(1)\leq(2\tilde{a}+(n-1)(\sqrt{K}+\epsilon))r+2\tilde{a}+4 \tilde{b}+\frac{\tilde{b}_{0}}{\sqrt{K}+\epsilon}.\] In particular, for \(R\geq 1\), any \(\epsilon>0\) we have the volume bound of the form \[\operatorname{Vol}(B_{R}(o))\leq\tilde{b}_{1}e^{(2\tilde{a}+(n-1)(\sqrt{K}+ \epsilon))R},\] where the constant \(\tilde{b}_{1}\) depends on \(\tilde{a},\tilde{b},\operatorname{Vol}(B_{1}(o)),\epsilon,K\) and \(n\). Proof of Theorem 1.9.: Let \(R>1\) and \(\psi\) a cut-off function on \(B_{R}(o)\) such that \(\psi=1\) on \(B_{R-1}(o)\) and \(|\nabla\psi|\leq 2\). Set \(\phi(y):=e^{-\frac{(2\tilde{a}+(n-1)(\sqrt{K}+\epsilon)+\delta)}{2}r(y,o)}\psi(y)\) as a test function in the variational principle (6.17) for \(\mu_{1}(\Delta)\), where \(\delta>0\) and \(\epsilon>0\) are arbitrary positive constants. Then by Lemma 6.7, we obtain \[\mu_{1}(\Delta)\leq\frac{1}{4}\left(2\tilde{a}+(n-1)(\sqrt{K}+\epsilon)+ \delta\right)^{2}.\] Since \(\epsilon\) and \(\delta\) are arbitrary, then \(\mu_{1}(\Delta)\leq\frac{1}{4}\left(2\tilde{a}+(n-1)\sqrt{K}\right)^{2}\). In the case that \(f\) is of sublinear growth, we can take \(\tilde{a}=0\). Therefore, \(\mu_{1}(\Delta)\leq\frac{1}{4}(n-1)^{2}K\) and the theorem is proved. ## Data Availability No data was used for the research described in the article. ## Acknowledgements Research is partially supported by NSFC Grant No. 11971168, Shanghai Science and Technology Innovation Program Basic Research Project STCSM 20JC1412900, and Science and Technology Commission of Shanghai Municipality (STCSM) No. 22DZ2229014.
2307.13148
Large-scale circulation reversals explained by pendulum correspondence
We introduce a low-order dynamical system to describe thermal convection in an annular domain. The model derives systematically from a Fourier-Laurent truncation of the governing Navier-Stokes Boussinesq equations and accounts for spatial dependence of the flow and temperature fields. Comparison with fully-resolved direct numerical simulations (DNS) shows that the model captures parameter bifurcations and reversals of the large-scale circulation (LSC), including states of (i) steady circulating flow, (ii) chaotic LSC reversals, and (iii) periodic LSC reversals. Casting the system in terms of the fluid's angular momentum and center of mass (CoM) reveals equivalence to a damped pendulum with forcing that raises the CoM above the fulcrum. This formulation offers a transparent mechanism for LSC reversals, namely the inertial overshoot of a forced pendulum, and it yields an explicit formula for the frequency $f^*$ of regular LSC reversals in the high Rayleigh-number limit. This formula is shown to be in excellent agreement with DNS and produces the scaling law $f^* \sim Ra^{0.5}$.
Nicholas J. Moore, Jinzi Mac Huang
2023-07-24T22:02:10Z
http://arxiv.org/abs/2307.13148v3
# Fluid pendulum explains reversals of the large-scale circulation in thermal convection ###### Abstract We introduce a low-dimensional dynamical system to describe thermal convection in an annulus. The model derives systematically from a Fourier-Laurent truncation of the governing Navier-Stokes Boussinesq equations with no adjustable parameters and with the ability to generalize to any order. Comparison with fully resolved numerical solutions shows that the leading-order model captures parameter bifurcations and reversals of the large-scale circulation (LSC) with quantitative accuracy, including states of (i) steady circulating flow, (ii) chaotic LSC reversals, and (iii) periodic LSC reversals. Casting the system in terms of the fluid's angular momentum and center of mass (CoM) reveals equivalence to a damped pendulum with forcing that raises the CoM above the fulcrum. This formulation offers a transparent mechanism for LSC reversals, namely the inertial overshoot of a driven pendulum, and it yields accurate predictions for the frequency of regular LSC reversals in the high Rayleigh-number limit. Thermal convection and the associated large-scale circulation (LSC) play an instrumental role in applications diverse as atmospheric and oceanic flow patterns [1; 2], mantle convection [3; 4; 5; 6], and solar magneto-hydrodynamics [7]. In all of these settings, it is known that the LSC is prone to spontaneously reverse direction, manifesting, for example, as a sudden change in wind direction [8] or potentially a reversal of the Earth's magnetic dipole [9]. LSC reversals have been observed in controlled laboratory experiments [10; 11; 12; 13; 14; 15; 16] and analyzed theoretically, going back to the famous Lorenz system describing thermal convection in a planar domain [17]. Studies conducted in idealized geometries, e.g. rectangular, cylindrical, or annular, show a sequence of transitions as the Rayleigh number increases. In the case of an annular domain, the sequence includes: (1) a stable conductive state with no fluid motion; (2) steady circulatory flow in either the clockwise (CW) or counter-clockwise (CCW) direction; (3) non-periodic dynamics and chaotic LSC reversals; (4) a high-Ra state in which LSC reversals recur periodically despite turbulent fluctuations at the small scale. The classic Lorenz system has been shown to qualitatively reproduce many of these transitions [18; 19; 20; 21; 10], while more recent phenomenological models have lent further insight [12; 22; 23]. Often, these models conjecture additional terms to represent various physical effects. Such terms, while they may increase the predictive capacity via adjustable parameters, can obscure the connection with the governing equations. Ideally, a model for LSC reversals would achieve the following: 1. Derive systematically from the governing equations, free of adjustable parameters and conjectured terms, and with the ability to generalize to arbitrary order. 2. Predict the parameter bifurcations listed above with quantitative accuracy; predict the frequency of regular LSC reversals in the high-Ra regime. 3. Offer new physical insight into the complex process of LSC reversals. This letter, together with its companion paper [24], describe a framework for thermal convection in an annulus that achieves these objectives. 
The framework derives systematically from a Fourier-Laurent truncation of the governing Navier-Stokes-Boussinesq (NSB) equations, with no adjustable parameters, and with the order of truncation corresponding to the accuracy of the model. Comparison with fully-resolved direct numerical simulations shows that the leading-order, three-dimensional system predicts the sequence of transitions, including LSC reversals, with quantitative accuracy. Casting the system in terms of the fluid's average angular momentum and center of mass (CoM) reveals equivalence to a damped, driven pendulum, with forcing that drives the CoM above the fulcrum. With this reformulation, a simple physical picture emerges. The driving term, since it raises the CoM, tends to destabilize the system and, depending on the relative strengths of driving, damping, and restoring, leads to a range of different convective states. This physical picture: (1) offers a parsimonious explanation for LSC reversals, namely the inertial overshoot of a damped, driven pendulum; (2) yields accurate predictions for the frequency of regular LSC reversals in the high Rayleigh-number regime. Figure 1(a) depicts the problem setup in which a 2D annular fluid domain is heated from below [18; 10; 20]. Thermal exchange occurs along the outer boundary with an imposed temperature that decreases linearly with height, while the inner boundary remains adiabatic. We note that the annular geometry tends to reinforce the dominant circular flow pattern of thermal convection that appears generically across many settings, thus permitting one to isolate the main mechanism for LSC reversals. Dimensionless temperature \(T\), velocity \(\mathbf{u}\), and pressure \(p\) fields are governed by the incompressible NSB equations \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} =-\nabla p+\operatorname{Pr}\nabla^{2}\mathbf{u}+\operatorname{Ra} \operatorname{Pr}T\mathbf{e_{\mathbf{y}}}, \tag{1}\] \[\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T =\nabla^{2}T,\quad\nabla\cdot\mathbf{u}=0, \tag{2}\] which hold in the dimensionless annulus, \(r_{0}<r<1/2\). Both the inner and outer rings are no-slip boundaries. Parameters include the Rayleigh number Ra and Prandtl number Pr [24]. When the thermal forcing is sufficiently strong (Ra sufficiently high) the destabilizing action of buoyancy can give rise to natural convection. To quantify different convective states, we will examine the spatially-averaged _fluid angular momentum_\(L(t)\), with \(L>0\) corresponding to CCW rotation. The range of convective states are revealed by direct numerical simulations (DNS) of the NSB system as shown in Figure 1. Simulations are based on a Chebyshev-Fourier pseudo-spectral discretization of Eqs. (1) and (2) in streamfunction-vorticity form with implicit-explicit time stepping [24, 25, 26, 27]. At low Ra, Fig. 1(b) shows the existence of a stable conductive state with no fluid motion. In this regime, perturbations to the conductive state decay rapidly, as seen in the plot below showing \(L(t)\to 0\). Increasing Ra eventually destabilizes the system, leading to the state shown in Fig. 1(c), where the fluid circulates either CW or CCW at a constant rate. By further increasing Ra, this steady circulating state also destabilizes; the direction of circulation now alternates over time and the flow reverses chaotically, as shown in the time series of \(L\) in Fig. 1(d). Interestingly, chaos disappears when Ra becomes sufficiently high, and Fig. 
1(e) reveals an oscillating state with periodic LSC reversals. Although the reversals are periodic, the DNS resolves fine-scale turbulent fluctuations. The nature of the fluctuations are characterized by the frequency power spectrum of the temperature field, shown in Fig. 1(f) to follow the turbulent Bolgiano-Obukhov power law of natural convection [28, 29]. Remarkably, all of these states are recovered by a low-dimensional model arising systematically from the NSB equations. Briefly, the derivation is as follows. In polar coordinates, \(\mathbf{u}=u(r,\theta,t)\mathbf{e}_{\theta}+v(r,\theta,t)\mathbf{e}_{r}\) and \(T=T(r,\theta,t)\), we perform a Fourier expansion in \(\theta\) and a Laurent expansion in \(r\), and truncate each to a desired order while enforcing all boundary conditions (BCs). The choice of Laurent expansion is guided by the form of the conductive-state solution (see [24]), and thus recovers this basic state with no approximation made. Inserting the truncated variables into Eqs. (1) and (2) and projecting onto the Fourier-Laurent basis yields a finite-dimensional dynamical system. In this letter, we consider the lowest-order system able to satisfy all BCs. Casting in terms of angular momentum \(L(t)\) and CoM coordinates \((X(t),Y(t))\) gives: \[\dot{L} =-\mathrm{RaPr}\,X-\alpha\mathrm{Pr}\,L, \tag{3}\] \[\dot{X} =-kL(Y-y_{1})-\beta X,\] (4) \[\dot{Y} =kLX-\beta(Y-y_{0}). \tag{5}\] where \(\alpha,\beta,k,y_{0},y_{1}\) are positive parameters that depend on \(r_{0}\) only [24]. We find this ODE system best approximates the true dynamics when the annulus is relatively narrow, and so we set \(r_{0}=0.4\) in all subsequent numerical examples. Notably, our analysis differs from derivations of other ODE models (e.g. the Lorenz system) in that, rather than simply averaging over the radial variable [10, 18, 20], the truncated Laurent expansion satisfies the BCs on both the inner and outer rings (see also [30]), precluding the need for any friction factors with empirically estimated coefficients [20]. We believe this is one reason our model accurately recovers the high-Ra large-scale dynamics even though it does not resolve the turbulent flow field. Interestingly, Eqs. (3) to (5) are mathematically equivalent to a damped pendulum system with a particular form of external driving. The most crucial parameters are \(y_{1}\), the height of the pendulum's fulcrum, and \(y_{0}\), the height of the conductive-state CoM. With no driving (\(\beta=0\)), Eqs. (3) to (5) correspond exactly to a linearly damped pendulum with fulcrum \((0,y_{1})\). The terms with \(\beta\) arise from the interaction of boundary heating and buoyancy, and they drive \((X,Y)\) towards the conductive-state CoM \((0,y_{0})\). Exact formulas available in [24] show that \(0<y_{1}<y_{0}\) for any \(r_{0}\), meaning that the driving acts to destabilize the system by raising the CoM above the fulcrum. We remind the reader Figure 1: Direct numerical simulations of natural convection in an annulus. (a) Schematic of an annular fluid domain heated from below. (b) At low Ra (\(3.9\times 10^{5}\)), the conductive state is stable and any initial angular momentum \(L\) quickly dissipates. (c) At higher Ra (\(3.1\times 10^{6}\)) the system transitions to steady circulation with constant \(L\). (d) At yet higher Ra (\(2.5\times 10^{7}\)), the LSC can spontaneously reverse direction. The plot of \(L(t)\) shows these reversals occur errratically. 
(e) At the highest Ra (\(1.6\times 10^{9}\)), the LSC reversals recur periodically, even though the small-scale flow is turbulent. (f) The temperature power spectrum of case (e) peaks at frequency \(f^{*}\), corresponding to the LSC reversal frequency, and follows a \(-1.4\) power law at higher \(f\). Movies of (b)–(e) are available in Supplemental Material. In all cases, \(\mathrm{Pr}=4\) and \(r_{0}=0.4\). that none of these terms were conjectured; each arises directly from analysis of the governing NSB equations. How well does this simple ODE system predict the dynamics of convection? Figure 2 shows trajectories of \((L,X,Y)\) computed by fully-resolved DNS (left) versus those computed by the ODE model (right) for the same Rayleigh numbers as Fig. 1(c)-(e). Figure 2(a)-(c) shows that the trajectories from DNS and the ODE model are remarkably similar across the range of \(\mathrm{Ra}\), exhibiting (a) convergence to a stable circulating state, (b) chaotic dynamics near a strange attractor, and (c) periodic orbits at the highest \(\mathrm{Ra}\). The trajectories in (b) and (c) indicate reversals of the LSC, as can be seen by the sign change of \(L\). The LSC reversals are chaotic in (b) and periodic in (c). The bifurcation diagram in Fig. 2(d) shows that a pitchfork bifurcation occurs at a critical value \(\mathrm{Ra}_{1}^{*}\). At this value, the conductive state loses stability, and, simultaneously, the bistable circulating states appear (CW and CCW circulation). At a second critical value, \(\mathrm{Ra}_{2}^{*}\), these circulating states lose stability through a Hopf bifurcation. Immediately past \(\mathrm{Ra}_{2}^{*}\), the dynamics are fractal-like and chaotic, characteristic of a strange attractor. These observations are further supported by measurements of the fractal dimension \(D_{2}\)[31] and Lyapunov exponent \(\lambda\) shown in the inset. At much higher \(\mathrm{Ra}\), order reemerges and the trajectories of \((X,Y)\) closely resemble pendulum motion. The ODE model yields exact formulas for both critical values [24]: \[\mathrm{Ra}_{1}^{*}=\frac{\alpha\beta}{k\Delta y},\quad\mathrm{Ra}_{2}^{*}= \frac{\alpha^{2}\Pr\left(\frac{\alpha\mathrm{Pr}+4\beta}{\alpha\mathrm{Pr}-2 \beta}\right), \tag{6}\] where \(\Delta y=y_{0}-y_{1}>0\) is the distance between the conductive-state CoM and the pendulum fulcrum. Briefly, the value \(\mathrm{Ra}_{1}^{*}\) is found through linear stability analysis of the conductive state \((L,X,Y)=(0,0,y_{0})\). As \(\mathrm{Ra}_{1}^{*}\), the conductive state loses stability and the circulating states appear. Immediately past \(\mathrm{Ra}_{1}^{*}\), the Jacobian of each circulating state possesses three real, negative eigenvalues. As \(\mathrm{Ra}\) increases further, two eigenvalues become complex, \(z_{2,3}=\sigma\pm i\omega\), with \(\sigma<0\) initially. As \(\mathrm{Ra}\) crosses \(\mathrm{Ra}_{2}^{*}\), \(\sigma\) becomes positive and thus the circulating states lose stability, giving way to the strange attractor seen in Fig. 2(b). The formulas for \(\mathrm{Ra}_{1}^{*}\) and \(\mathrm{Ra}_{2}^{*}\) in Eq. (6) delineate parameter space into regions of different qualitative behavior, as illustrated by Figure 3. In the figure, colored dots correspond to fully-resolved DNS, showing regions of a stable conductive state (blue), bistable circulating states (green), and LSC reversals, both chaotic (orange) and periodic (red). Equation (6) predicts the boundaries between these regions well. 
In particular, \(\mathrm{Ra}_{1}^{*}\) is independent of the Prandtl number, giving the vertical green line, while the orange curve shows the \(\Pr\) dependence of \(\mathrm{Ra}_{2}^{*}\). Interestingly, the formula for \(\mathrm{Ra}_{2}^{*}\) has two asymptotes that can be understood. First, for \(\Pr\) below the threshold \(\Pr^{*}=2\beta/\alpha\) (black dashed line), the denominator of \(\mathrm{Ra}_{2}^{*}\) is negative, indicating that no threshold exists and the circulating states remain stable for Figure 3: Phase diagram of different convective states. Colored dots are from DNS, where blue indicates a stable conductive state, green indicates bistable circulating states, orange indicates chaotic LSC reversals, and red indicates periodic LSC reversals. Formulas for \(\mathrm{Ra}_{1}^{*}\) and \(\mathrm{Ra}_{2}^{*}\) from the ODE model predict the boundaries between the regions well. Figure 2: Trajectories of ODE system (3)–(5) in comparison to fully-resolved DNS. The trajectories of \((L,X,Y)\) are remarkably similar across the range of Rayleigh numbers, showing (a) convergence to a stable circulating state for \(\mathrm{Ra}=3.1\times 10^{6}\), (b) strange-attractor dynamics for \(\mathrm{Ra}=2.5\times 10^{7}\), and (c) periodic dynamics for \(\mathrm{Ra}=1.1\times 10^{9}\). In all cases, \(\Pr=4\) and \(r_{0}=0.4\). (d) Bifurcation diagram shows a pitchfork bifurcation at \(\mathrm{Ra}_{1}^{*}\) and a Hopf bifurcation at \(\mathrm{Ra}_{2}^{*}\). _Inset:_ The fractal dimension \(D_{2}\) and Lyapunov exponent \(\lambda\) distinguish chaotic states from orderly ones. our knowledge, no previous numerical or experimental work has reported this critical \(\Pr^{*}\). Second, as \(\Pr\to\infty\), Eq. (6) shows that \(\mathrm{Ra}_{2}^{*}\) increases linearly with \(\Pr\), giving the slant asymptote seen in the figure. As \(\mathrm{Ra}\) increases well beyond \(\mathrm{Ra}_{2}^{*}\), large-scale chaos subsides and gives way to the nearly periodic LSC reversals seen in Fig. 2(c). The return to order is indicated by the fractal dimension dropping to one and the Lyapunov exponent dropping to zero at the same Rayleigh number, roughly \(\mathrm{Ra}=10^{9}\) in Fig. 2(d) inset. At this value, a stable limit cycle emerges in the ODE system, producing periodic orbits that resemble pendulum motion. Figure 4(a) shows four such orbits for Rayleigh numbers in the range \(\mathrm{Ra}=1/4\) - \(16\times 10^{10}\). At the lowest \(\mathrm{Ra}\), the pendulum length \(l\) varies somewhat over the period, but at higher \(\mathrm{Ra}\), the orbit tightens and \(l\) remains nearly constant throughout. It is important to note that, although the large-scale dynamics are regular in this regime, the DNS shows that turbulent fluctuations still inhabit the small scales [see Fig. 1(e)]. Each swing of the pendulum corresponds to a reversal of the LSC, offering a way to predict the dominant frequency \(f^{*}\) of the reversals. Equations (3) to (5) correspond to a damped, driven pendulum with gravitational constant \(g=kl^{2}\,\mathrm{Ra}\,\mathrm{Pr}\). Since the amplitude of oscillation is not small, the frequency depends on both the pendulum length \(l\) and the maximum swing angle \(\phi_{\mathrm{max}}\). As detailed in [24], both of these quantities can be estimated from an energy balance with energy \(E=\frac{1}{2}kL^{2}+\mathrm{Ra}\,\mathrm{Pr}\left(Y-y_{1}\right)\). 
The result is a simple formula for the frequency of LSC reversals in the high-\(\mathrm{Ra}\) regime, \[f^{*}=\frac{\sqrt{kl\,\mathrm{Ra}\,\mathrm{Pr}}}{4\,K(\sin^{2}(\phi_{\mathrm{ max}}/2))}, \tag{7}\] where \(K\) is the complete elliptic integral of the first kind, and formulas for \(l\) and \(\phi_{\mathrm{max}}\) are given in [24]. As seen in Fig. 4(b), this simple formula accurately predicts the reversal frequency measured in the fully-resolved DNS (blue circles) over the largest decade of \(\mathrm{Ra}\) run (roughly \(\mathrm{Ra}=2\times 10^{8}\) to \(2\times 10^{9}\)). At higher \(\mathrm{Ra}\), DNS becomes computationally prohibitive but numerical solution of the ODE model is feasible, and the corresponding measurements of \(f^{*}\) also agree with Eq. (7). The close agreement between DNS, the ODE model, and Eq. (7) suggests the primary mechanism for LSC reversals has been properly accounted for. The main result of this work is the ODE model Eqs. (3) to (5) for thermal convection in an annulus, which reveals a previously unrecognized link to a pendulum system with driving that raises the CoM above the fulcrum. The system accurately captures the sequence of parameter bifurcations, including chaotic and periodic reversals of the LSC, and it accurately predicts the frequency of LSC reversals in the high-\(\mathrm{Ra}\) regime. In this letter, we have focused on the lowest-order system capable of satisfying the BCs, but the truncation procedure can in principle be carried out to any order. The analysis thus provides a blueprint for obtaining a hierarchy of models to better understand the turbulent fluctuations underlying thermal convection. We reiterate that the annular shape of the domain analyzed here accentuates the dominant circular flow pattern of thermal convection, while suppressing other effects (e.g. corner rolls or detached plumes [14; 22; 23]) that tend to be geometry specific. These other effects undoubtedly influence LSC dynamics, but the fact that our system exhibits reversals without them indicates that such effects are not essential for LSC reversals. Rather, the primary mechanism for LSC reversals is the inertial overshoot of the fluid CoM, as can be accurately described by pendulum swinging motion. With the essence of LSC reversals captured by Eqs. (3) to (5), we hope this model can serve as the foundation for understanding LSC reversals in other geometries or with other forms of thermal forcing, perhaps through the inclusion of additional forcing or stochastic terms, or through the extension into three dimensions to account for azimuthal rotations of the LSC plane [11]. **Supplemental Material** Supplementary movies are available at [https://math.nyu.edu/~jinzi/research/AnnularConvection/Movie/](https://math.nyu.edu/~jinzi/research/AnnularConvection/Movie/).
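To make the pendulum correspondence concrete, the reduced model Eqs. (3)–(5) and the frequency formula Eq. (7) can be explored numerically. In the sketch below the coefficients \(\alpha,\beta,k,y_{0},y_{1}\) and the pendulum quantities \(l,\phi_{\max}\) are placeholder values chosen only for illustration — their actual expressions in terms of \(r_{0}\) are given in the companion paper [24] — and reversals are simply counted as sign changes of \(L(t)\).

```python
# Illustrative integration of Eqs. (3)-(5) and evaluation of Eq. (7).
# All coefficient values below are placeholders for demonstration; the true
# r0-dependent expressions for alpha, beta, k, y0, y1, l, phi_max are in [24].
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipk

Ra, Pr = 2.5e7, 4.0
alpha, beta, k = 40.0, 20.0, 1.0      # placeholder coefficients
y0, y1 = 0.05, 0.02                   # conductive-state CoM height and fulcrum height

def rhs(t, state):
    L, X, Y = state
    dL = -Ra * Pr * X - alpha * Pr * L          # Eq. (3)
    dX = -k * L * (Y - y1) - beta * X           # Eq. (4)
    dY = k * L * X - beta * (Y - y0)            # Eq. (5)
    return [dL, dX, dY]

# small initial kick in angular momentum, CoM at the conductive-state position
sol = solve_ivp(rhs, (0.0, 0.5), [1e-3, 0.0, y0], max_step=1e-4, rtol=1e-8)

# each sign change of L(t) corresponds to one LSC reversal
L = sol.y[0]
reversals = int(np.sum(np.abs(np.diff(np.sign(L))) > 0))
print(f"LSC reversals observed in the integration: {reversals}")

# Eq. (7): f* = sqrt(k*l*Ra*Pr) / (4*K(sin^2(phi_max/2))), with placeholder l, phi_max
l, phi_max = 0.03, 2.0
f_star = np.sqrt(k * l * Ra * Pr) / (4.0 * ellipk(np.sin(phi_max / 2.0) ** 2))
print(f"Eq. (7) frequency estimate: f* = {f_star:.1f}")
```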
2310.08819
Floquet Non-Abelian Topological Insulator and Multifold Bulk-Edge Correspondence
Topological phases characterized by non-Abelian charges are beyond the scope of the paradigmatic tenfold way and have gained increasing attention recently. Here we investigate topological insulators with multiple tangled gaps in Floquet settings and identify uncharted Floquet non-Abelian topological insulators without any static or Abelian analog. We demonstrate that the bulk-edge correspondence is multifold and follows the multiplication rule of the quaternion group $Q_8$. The same quaternion charge corresponds to several distinct edge-state configurations that are fully determined by phase-band singularities of the time evolution. In the anomalous non-Abelian phase, edge states appear in all bandgaps despite trivial quaternion charge. Furthermore, we uncover an exotic swap effect -- the emergence of interface modes with swapped driving, which is a signature of the non-Abelian dynamics and absent in Floquet Abelian systems. Our work, for the first time, presents Floquet topological insulators characterized by non-Abelian charges and opens up exciting possibilities for exploring the rich and uncharted territory of non-equilibrium topological phases.
Tianyu Li, Haiping Hu
2023-10-13T02:20:54Z
http://arxiv.org/abs/2310.08819v1
# Floquet Non-Abelian Topological Insulator and Multifold Bulk-Edge Correspondence ###### Abstract Topological phases characterized by non-Abelian charges are beyond the scope of the paradigmatic tenfold way and have gained increasing attention recently. Here we investigate topological insulators with multiple tangled gaps in Floquet settings and identify uncharted Floquet non-Abelian topological insulators without any static or Abelian analog. We demonstrate that the bulk-edge correspondence is multifold and follows the multiplication rule of the quaternion group \(Q_{8}\). The same quaternion charge corresponds to several distinct edge-state configurations that are fully determined by phase-band singularities of the time evolution. In the anomalous non-Abelian phase, edge states appear in all bandgaps despite trivial quaternion charge. Furthermore, we uncover an exotic swap effect--the emergence of interface modes with swapped driving, which is a signature of the non-Abelian dynamics and absent in Floquet Abelian systems. Our work, for the first time, presents Floquet topological insulators characterized by non-Abelian charges and opens up exciting possibilities for exploring the rich and uncharted territory of non-equilibrium topological phases. The past few decades have witnessed a remarkable surge of research in topological phases of matter [1; 2], culminating in the renowned Altland-Zirnbauer tenfold way [3; 4; 5; 6; 7]. Based on the underlying symmetries and spatial dimensions, gapped bulk Hamiltonians are characterized by Abelian-type topological invariants (\(\mathbb{Z}\) or \(\mathbb{Z}_{2}\)) with their own manifestation of protected boundary states. Very recently, the notion of band topology has been extended to tangled multi-gap scenarios [8; 9; 10; 11; 12; 13; 14; 15]. For instance, in the presence of space-time inversion (PT) symmetry, one-dimensional (1D) insulators involving multiple bandgaps may carry non-Abelian quaternion charges [8] and host richer topological phases as experimentally observed in transmission line networks [16; 17]. Yet in its infancy, the tangled multi-gap topology plays a vital role in describing, e.g., the disclination defects of nematic liquids [18; 19; 20; 21; 22], the admissible nodal lines [23; 24; 25; 26; 27] and the reciprocal braiding of Dirac/Weyl/exceptional points [11; 28; 29; 30; 31]. Floquet engineering provides a powerful knob in manipulating band structures [32; 33; 34; 35; 36; 37; 38; 39; 40; 41], offering unprecedented control over the topological properties of materials and the exploration of non-equilibrium phenomena. With a time-periodic Hamiltonian \(H(t)=H(t+T)\) (\(T\) is the driving period), the stroboscopic dynamics is dictated by an effective Floquet Hamiltonian. Notably, Floquet systems exhibit intriguing topological features with no static analog arising from the replicas of quasienergy bands, such as the emergence of anomalous chiral edge modes [42; 43; 44; 45] despite the triviality of all bulk bands. Incorporating the multi-gap scenario, this paper aims to address three fundamental questions regarding Floquet multi-gap topology. (_i_) Does a Floquet topological insulating phase characterized by non-Abelian charge exist, and if so, how can it be implemented through periodic driving? (_ii_) What novel bulk-edge correspondence does such a non-Abelian phase possess, and how can it be described? (_iii_) Are there any unique topological or dynamical phenomena associated with this phase? 
Here we answer these questions affirmatively. Firstly, we propose the realization of the simplest Floquet non-Abelian topological insulator (FNATI) in a 1D three-band system with PT symmetry. Secondly, the FNATI is characterized by a quaternion charge, which, on its own, cannot predict the existence or the number of edge states. Moreover, each quaternion charge corresponds to multiple edge-state configurations, demonstrating a multifold bulk-edge correspondence that obeys the multiplication rule of the quaternion group. The full topology or edge-state configuration is completely captured by the phase-band singularities of the time-evolution operator over one driving period. Intriguingly, we identify an anomalous FNATI hosting edge modes inside all bandgaps with a trivial bulk quaternion charge. Thirdly, we reveal the emergence of interface modes with swapped driving sequences as a genuine non-Abelian effect. It indicates the non-commutative nature of the FNATI. This is in sharp contrast to Floquet Abelian topological insulators, where such interface modes are absent due to the same spectral structures regardless of the choice of time frame. We emphasize that the intriguing properties of FNATI stem from the presence of multiple tangled bandgaps. Our findings expand the scope of Floquet topological insulators into the non-Abelian regime and open up new avenues for investigating the vast and unexplored territory of non-equilibrium topological phases. **Results** **Multi-gap topology and driving protocol.** Let us recap the static three-band topological insulator characterized by the quaternion charge \(Q_{8}\)[8]. In the presence of PT symmetry, the Hamiltonian becomes real-valued in momentum space \(H(k)=H^{\star}(k)\) when expressed on a suitable basis. Consequently, the eigenstates represent three real vectors that are orthonormal to each other and
2304.12706
What does BERT learn about prosody?
Language models have become nearly ubiquitous in natural language processing applications achieving state-of-the-art results in many tasks including prosody. As the model design does not define predetermined linguistic targets during training but rather aims at learning generalized representations of the language, analyzing and interpreting the representations that models implicitly capture is important in bridging the gap between interpretability and model performance. Several studies have explored the linguistic information that models capture providing some insights on their representational capacity. However, the current studies have not explored whether prosody is part of the structural information of the language that models learn. In this work, we perform a series of experiments on BERT probing the representations captured at different layers. Our results show that information about prosodic prominence spans across many layers but is mostly focused in middle layers suggesting that BERT relies mostly on syntactic and semantic information.
Sofoklis Kakouros, Johannah O'Mahony
2023-04-25T10:34:56Z
http://arxiv.org/abs/2304.12706v1
# What Does Bert Learn About Prosody? ###### Abstract Language models have become nearly ubiquitous in natural language processing applications achieving state-of-the-art results in many tasks including prosody. As the model design does not define predetermined linguistic targets during training but rather aims at learning generalized representations of the language, analyzing and interpreting the representations that models implicitly capture is important in bridging the gap between interpretability and model performance. Several studies have explored the linguistic information that models capture providing some insights on their representational capacity. However, the current studies have not explored whether prosody is part of the structural information of the language that models learn. In this work, we perform a series of experiments on BERT probing the representations captured at different layers. Our results show that information about prosodic prominence spans across many layers but is mostly focused in middle layers suggesting that BERT relies mostly on syntactic and semantic information. language model, BERT, prosody, prominence, part-of-speech Sofoklis Kakourosa and Johannah O'Mahonyb aUniversity of Helsinki, bUniversity of Edinburgh [email protected], johannah.o'[email protected] Representations from Transformers) [5]. Several recent works have investigated the representations learned at different layers of BERT in an attempt to interpret and understand the linguistic information captured by the model. These studies have uncovered that BERT indeed learns various aspects of the language. For instance, [6] showed that BERT layers capture a rich hierarchy of linguistic information spanning from surface features in the lower layers to syntactic in middle layers and semantic in the higher layers. These findings are further supported by a number of other works with the general observation indicating that learned representations vary with increasing network depth, with greater depth typically involving linguistic functions that require larger contextual relationships across the word tokens [2, 7, 8]. However, to the best of our knowledge, BERT has not been examined with respect to its prosodic information. In general, prosody can be viewed as the characteristics in an utterance that extend individual phonetic segments and encapsulate phonetic and phonological properties that are not due to the choice of individual lexical items, but depend on factors such as their semantic and syntactic relations [9, 10]. These characteristics convey information about the meaning and structure of an utterance. Although prosody is a characteristic of spoken language, the prosodic patterns in speech are connected and interact with their associated sequences of syllables, words, and phrases. Therefore, it is meaningful to assume that some aspects of the prosodic variation can be captured from text alone [11, 3]. In this work, we investigate how prosodic information is linguistically encoded by probing BERT with respect to the prosodic phenomenon of prominence. Prosodic prominence is defined as the subjective impression of a linguistic unit standing out of its context [12, 13, 14]. Given the recent success of BERT in predicting prosodic prominence [3], we attempt to answer the question of what BERT learns about prosody during its pre-training. 
Does the model rely on general linguistic and syntactic knowledge for its prosodic predictions or does it have a different pattern of weight allocations across its layers, suggesting that the model can capture prosodic information? We use three datasets with prominence annotations and examine the weights at different layers and compare them with existing findings from the literature on other tasks. To further validate our approach with the literature, we also extract part-of-speech tags for our data and examine the learned layer weights. The code to reproduce the results is publicly available at github.com/skakuoros/bert-prosody. In the following sections, we describe the BERT model architecture, related work, experimental methodology and results. ## 2 BERT BERT [5] is a language model based on the Transformer architecture [15] that enables the bidirectional pre-training of representations by jointly conditioning on the left and right context in all layers. This allows the model to learn the entire surrounding context of a word, which also means that the same word in different contexts will have a distinct representation. In contrast, earlier approaches looked at text sequences from left to right or by combining left to right and right to left training. The BERT representations are optimized based on two training objectives: (i) predicting randomly masked words in the input, and (ii) predicting whether a given sentence is the actual next sentence in the input. Our experiments are based on the bert-base-uncased variant of BERT. The model consists of 12 layers, each with an embedding size of 768, and 12 attention heads. ## 3 Related Work Probing network layers to investigate the structural knowledge of the language that a model has captured is an active research area that falls under the topic of neural network interpretability. In recent years there has been an increasing number of studies examining the representations that language models learn. Some works use probing tasks to unveil the linguistic features encoded in neural models [6, 16], others use attribution methods such as Integrated Gradients [17, 18] and some analyze Transformers' attention heads for evidence of linguistic and syntactic phenomena [19]. In this work, we identify each layer's contribution to a specific prosodic task by attaching one trainable weight to each layer of BERT and training a light-weight classification head on top of a frozen pre-trained BERT. This enables us to observe the weight allocations across the layers and compare them with existing findings. ## 4 Layer-wise Analysis ### Layer weights We obtain the contribution of BERT layers to the prediction task by introducing learnable scalar weights attached to each transformer layer of the model. An overview of the experimental setup can be seen in Fig. 1. We take representations from all transformer layers in the model and collapse them to one via a weighted average. There is one weight for each layer (a total of \(L+1\)) and all weights are trained jointly with the classification network. The weights are implemented with a learnable vector of size \(L+1\), followed by the Softmax function. ### Layer embeddings To further probe the contribution of individual BERT layers, we extract the embeddings from each layer separately and use them to train a classification head consisting of a single dense layer followed by the Softmax function. For each embedding layer, we then obtain the classification accuracy for the task.
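To make this setup concrete, the following minimal PyTorch sketch implements the weighted layer average over a frozen pre-trained BERT with a linear classification head. It is an illustration only (the full implementation is available at the repository above); word-to-subtoken alignment, label handling, and the training loop are omitted.

```python
# Minimal sketch of the layer-weighting probe (illustration only, not the released code).
# Assumes the HuggingFace `transformers` and `torch` packages are installed.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LayerWeightedProbe(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        for p in self.bert.parameters():          # frozen pre-trained BERT
            p.requires_grad = False
        n_layers = self.bert.config.num_hidden_layers + 1   # L + 1 = 13 for bert-base
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))   # one weight per layer
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, **enc):
        out = self.bert(**enc, output_hidden_states=True)
        # hidden_states: embedding output plus all 12 transformer layers
        states = torch.stack(out.hidden_states, dim=0)        # (L+1, batch, seq, hidden)
        weights = torch.softmax(self.layer_logits, dim=0)     # Softmax over the layer weights
        mixed = (weights[:, None, None, None] * states).sum(dim=0)
        return self.classifier(mixed)                          # per-token class logits

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
probe = LayerWeightedProbe()
enc = tok("the cat sat on the mat", return_tensors="pt")
logits = probe(**enc)    # shape: (1, seq_len, 2)
```

The Softmax over the weight vector keeps the layer contributions normalized, so the learned weights can be read directly as relative layer importances.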
## 5 Experiments In our experiments we use three datasets: two consisting of read speech and one of spontaneous dialogue speech. These are presented next, followed by a description of the experimental setup. Figure 1: Overview of the experimental setup. ### Data #### 5.1.1 BURNC The Boston University Radio News Corpus (BURNC) is a corpus of professionally read news data in American English [20]. The corpus consists of speech from seven speakers (three female). The corpus also contains phonetic alignments, orthographic transcriptions, part-of-speech tags, and prosodic labels. In this work we use the text prompts and prosodic labels from the manually labelled part of the corpus (six speakers; approximately 3 h of data). The prosodic labeling system in BURNC is based on the Tones and Breaks Indices (ToBI) labeling convention and includes prosodic phrasing, phrasal prominence, and boundary tones. To obtain a single prominence label, all ToBI pitch accent types (e.g., H*,L*,L*+H) were marked as prominent while the rest as non-prominent. #### 5.1.2 NXT Switchboard The NXT Switchboard corpus is a dataset that brings the original Switchboard corpus annotations [21] into one coherent integrated format (NITE XML; NXT) enriched with annotations of prosody and contrast as well as syllable and phone information [22]. The prosody annotations are available for a subset of the data that includes 76 conversations labelled with the ToBI transcribing convention. Similar to the BURNC dataset, we use a binary prominence distinction marking words with ToBI accents as prominent and the rest as non-prominent. #### 5.1.3 LibriTTS The LibriTTS corpus [23] is a processed (automatically aligned, segmented, and filtered) subset of the original audio and text data of the LibriSpeech corpus [24] that is based on English audiobooks of the LibriVox project. From the corpus we use the _clean_ subset that consists of 262.5 hours of read speech from 1230 speakers that were subsequently automatically labelled for prominence in the Helsinki Prosody Corpus (HPC) [3]. From the HPC data, we use the binary prominence tags. ### Experimental Setup To obtain the layer weights, we train the network (only the classification head and layer weights) with frozen pre-trained BERT for prominence and POS prediction for each dataset separately. For the training we use an 80-15-5 split for train, validation, and test. We run the training for 20 epochs with a batch size of 4 and we repeat each experiment five times. The results are averaged across the independent runs for each task. The network is configured with a learning rate of \(5e-5\) and a different learning rate \(1e-2\) for the layer weights. This was done in order to allow the model to adjust the weights for the different layers more rapidly. We extract the weights from the model checkpoint with the best development accuracy and average them over the five runs. We repeat the same procedure and setup for POS prediction. In addition to the frozen pre-trained BERT we also fine-tune the entire model and report the results in Table 2 to compare the overall model performance with both fine-tuned and frozen BERT models. We use the same learning rates and epochs as the previous experiment with frozen BERT. For POS prediction we used Spacy [25] to extract the part of speech categories. This resulted in 17 discrete classes. POS classes include, for example, adjectives, adpositions, adverbs (see [25] for a complete list of the coarse POS categories).
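The two-learning-rate setup and the POS extraction can be sketched as follows; this is an illustration under the stated hyperparameters only, the optimizer type (AdamW) and the specific spaCy pipeline name are assumptions not given in the text.

```python
# Sketch of the two-learning-rate optimisation and the coarse POS extraction.
# AdamW and the spaCy model name are assumptions; only the learning rates are from the text.
import torch
import spacy

head = torch.nn.Linear(768, 2)                        # light-weight classification head
layer_logits = torch.nn.Parameter(torch.zeros(13))    # one weight per BERT layer (L + 1)

optimizer = torch.optim.AdamW([
    {"params": head.parameters(), "lr": 5e-5},   # classification network
    {"params": [layer_logits],    "lr": 1e-2},   # faster updates for the layer weights
])

# Coarse part-of-speech targets (17 Universal POS classes) extracted with spaCy.
nlp = spacy.load("en_core_web_sm")
print([tok.pos_ for tok in nlp("The cat sat on the mat.")])
# -> ['DET', 'NOUN', 'VERB', 'ADP', 'DET', 'NOUN', 'PUNCT']
```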
Finally, to balance the data and enable better comparison, each dataset is post-processed to include one full sentence per sample. As BURNC may include an entire paragraph and Switchboard several sentences per dialogue turn within a sample, we explicitly set sample size to be one sentence. Thus, a batch size of four includes four sentences. ## 6 Results and Discussion We report overall model performance for POS and prominence prediction in Table 2, layer-specific performance in Table 1, and illustrate how layer weights vary with respect to different tasks in Fig. 2. We are interested in examining how different layers contribute to prominence and POS prediction with respect to findings on other tasks that have indicated different linguistic functions associated with different layers. Overall, for prominence we observe widespread distribution of the layer weights while POS appears more focused in the earlier BERT layers. These are presented in more detail in the next sections. Figure 2: BERT layer weights for prominence (top) and POS (bottom) prediction. ### Prosodic Prominence For prosodic prominence, the three datasets tested have shown differences in their overall performance. For example, for frozen BERT, prominence prediction accuracy was 87.14% for BURNC, 78.10% for Switchboard, and 82.67% for LibriTTS. These differences are likely due to the different speaking styles involved in the data, with dialogue speech having the lowest performance. Another interesting observation in the results is that the model performance is degraded for the fine-tuned models. It seems that when fine-tuning the model, generalizability decreases due to overfitting on the idiosyncrasies of the training data. With the frozen model, performance is high and on a par with results reported in the literature. For instance, for LibriTTS, [3] report 83.20% and in our setup we obtained 82.67%. When it comes to layer weights, prosodic prominence has a widespread pattern of weights across BERT layers. Weights span most layers and are primarily focused within layers 2-8 with a peak appearing at layer 3. Interestingly, the pattern of weights appears to be quite similar across the three datasets tested. The spread of the weights suggests that different types of linguistic information are used in the prediction of prominent tokens. One interpretation of the results is that the model relies greatly on surface linguistic features such as POS but also involves syntactic and semantic information with increasing layers (see also [6]). For BURNC, we also observe that the third layer has a high weight for both prominence and POS prediction. It is possible that prominence in BURNC (professionally read speech) correlated more with POS than in audiobooks (LibriTTS) and dialogue speech (SWBD). Another possible explanation for these differences lies in the different prominence coding schemes used in the three datasets. ### POS Part-of-speech information seems to be encoded in the early BERT layers with model accuracy being high for both frozen and fine-tuned runs of the experiments and across all datasets tested. Fine-tuning the model leads to improved performance, where for BURNC we get an increase in accuracy from 95.97% with the frozen model to 97.56% with fine-tuning. Switchboard and LibriTTS perform similarly with an increase in performance when the model is fine-tuned. Layer weights for POS demonstrate a very different pattern when compared to prominence.
POS information is found mainly in the lower layers of BERT with weights across the datasets varying but being focused in the early layers of the model. Most of the information seems to come from layers 0-4 which have been shown to encode surface features [6]. This finding is also in agreement with other work that shows maximum POS tagging performance in the lower layers of models with accuracies ranging from 97.2% to 97.4% [7, 2]. ## 7 Conclusions In this work, we performed a series of experiments on prosodic prominence to investigate whether prosody is part of the structural information of the language that BERT learns. Our results show that BERT captures information about prosodic prominence through a widespread allocation of weights across its layers reaching high performance. The weight allocations suggest that BERT relies on a variety of linguistic information for its predictions including surface linguistic features such as POS but also involving syntactic and semantic information. In future work, we will explore the same tasks with an extended set of datasets and methodological approaches. In addition to layer weights, we want to include layer integrated gradients as an attribution method in the experiments. We also aim to examine differences between the styles in the datasets, that is, read versus dialogue speech. ## 8 Acknowledgements This work was supported by the Academy of Finland project no. 340125 "Computational Modeling of Prosody in Speech" and Horizon 2020 Marie Sklodowska-Curie grant agreement No 859588. The authors wish to acknowledge CSC - IT Center for Science, Finland, for providing the computational resources. \begin{table} \begin{tabular}{c c c c c c c} \hline _Layer_ & \multicolumn{2}{c}{**BURNC**} & \multicolumn{2}{c}{**SWBD**} & \multicolumn{2}{c}{**LibriTTS**} \\ & _Prom_ & _POS_ & _Prom_ & _POS_ & _Prom_ & _POS_ \\ \hline 0 & 81.51 & 93.84 & 75.32 & **94.62** & 80.18 & 89.50 \\ 1 & 83.52 & **93.90** & **76.14** & **94.67** & **80.32** & **90.08** \\ 2 & 85.20 & **94.33** & **76.01** & 94.02 & 80.29 & **89.53** \\ 3 & 85.20 & 93.84 & 75.47 & 93.50 & **81.08** & 89.00 \\ 4 & **85.26** & 93.47 & 75.25 & 92.52 & 80.28 & 88.40 \\ 5 & **85.26** & 93.17 & 75.82 & 91.87 & 80.00 & 87.29 \\ 6 & 84.13 & 92.98 & 75.71 & 90.95 & 79.71 & 86.12 \\ 7 & 84.46 & 91.76 & 75.82 & 90.11 & 79.40 & 84.86 \\ 8 & 83.79 & 91.03 & 75.45 & 88.37 & 79.05 & 83.48 \\ 9 & 82.92 & 89.20 & 74.71 & 87.20 & 78.63 & 82.36 \\ 10 & 82.59 & 88.47 & 74.29 & 86.64 & 78.32 & 81.14 \\ 11 & 83.26 & 87.19 & 73.08 & 82.09 & 77.27 & 76.67 \\ \hline \end{tabular} \end{table} Table 1: Layer-wise accuracy (%) for the test set runs for prominence (Prom) and POS prediction with frozen BERT. Numbers in bold denote the two top results in each task. \begin{table} \begin{tabular}{l c c c} \hline _Prominence_ & **BURNC** & **SWBD** & **LibriTTS** \\ \hline frozen & 87.14 & 78.10 & 82.67 \\ ft & 85.53 & 75.95 & 80.32 \\ \hline _POS_ & & & \\ \hline frozen & 95.97 & 97.94 & 98.49 \\ ft & 97.56 & 98.54 & 98.96 \\ \hline \end{tabular} \end{table} Table 2: Accuracy (%) for the test set runs for prominence and POS prediction with frozen and fine-tuned (ft) BERT.
2310.06834
Silent White Light
We investigate the intra-waveguide statistics manipulation of broadband light by combining semiconductor quantum dot physics with quantum optics. By cooling a quantum dot superluminescent diode to liquid nitrogen temperature of $77K$, Blazek et al. [Phys. Rev. A 84, 63840 (2011)] have demonstrated a temperature-dependent reduction of the second-order intensity correlation coefficient from two for thermal amplified spontaneous emission light to $g^{(2)}(T=190 K)\approx 1.33$. Here, we model the broadband photon statistics assuming amplified spontaneous emission radiation in a pumped, saturable quantum dot gain medium. We demonstrate that, by an intensity increase due to the quantum dot occupation dynamics via the temperature-tuned quasi Fermi levels, together with the saturation nonlinearity, a statistics manipulation from thermal Bose-Einstein statistics towards Poissonian statistics can be realized, thus producing "silent white light". Such intensity-noise reduced broadband radiation is relevant for many applications like optical coherence tomography, optical communication or optical tweezers.
Kai Niklas Hansmann, Franziska Dommermuth, Wolfgang Elsäßer, Reinhold Walser
2023-10-10T17:59:03Z
http://arxiv.org/abs/2310.06834v1
# Silent White Light ###### Abstract We investigate the intra-waveguide statistics manipulation of broadband light by combining semiconductor quantum dot physics with quantum optics. By cooling a quantum dot superluminescent diode to liquid nitrogen temperature of \(77\,\mathrm{K}\), Blazek _et al._ [Phys. Rev. A **84**, 63840 (2011)] have demonstrated a temperature-dependent reduction of the second-order intensity correlation coefficient from two for thermal amplified spontaneous emission light to \(g^{(2)}(T=190\,\mathrm{K})\approx 1.33\). Here, we model the broadband photon statistics assuming amplified spontaneous emission radiation in a pumped, saturable quantum dot gain medium. We demonstrate that, by an intensity increase due to the quantum dot occupation dynamics via the temperature-tuned quasi Fermi levels, together with the saturation nonlinearity, a statistics manipulation from thermal Bose-Einstein statistics towards Poissonian statistics can be realized, thus producing "silent white light". Such intensity-noise reduced broadband radiation is relevant for many applications like optical coherence tomography, optical communication or optical tweezers. Since the first realization of the laser, there has been a perpetual interest in the quantum fluctuations of light, driven both by fundamental and practical interest [1]. This can be best summarized in the famous saying of Rolf Landauer: "The noise is the signal" [2]. In this spirit, lasers and thermal light sources as "original light sources" have been considered as benchmarks due to their second-order correlation coefficient \(g^{(2)}(0)=\langle I^{2}\rangle/\langle I\rangle^{2}\) of unity and two respectively [3], determined within a Hanbury Brown and Twiss (HBT) experiment [4], [5]. Nowadays, a HBT classification in the \(g^{(2)}(0)\) scheme is the characteristics for each new light source and it has become central to quantum optical measurements. In particular, novel concepts have been conceived to create light with \(g^{(2)}(0)\) beyond unity and two, emphasizing controlling and manipulating light statistics into regimes beyond, i.e. tailoring \(g^{(2)}(0)\) on-demand. Immediately after the advent of the laser, Martienssen and Spiller and Arecchi [6; 7] realized the so-called pseudo-thermal light source in 1966. There, scattering of laser light at a rotating diffuser transformed the Poissonian laser photon statistics into that of thermal light [8; 9] exhibiting Bose-Einstein statistics with \(g^{(2)}(0)\) of two. This philosophy of exploiting Gaussian and non-Gaussian random walk scattering processes in media led to the achievement of well-controlled states of light [10; 11; 12; 13; 14; 15] in the framework of light with super-Poissonian statistics, i. e., bunched or even super-bunched photon counting statistics. Later on, microscopic and mesoscopic scattering concepts for the manipulation of the light statistics have been comprehensively investigated and extended to waveguides [16; 17; 18; 19]. The concept of manipulating and tailoring light states and exploiting their novel properties beneficially and on-demand in quantum metrology applications has also been investigated by applying nonlinear optical processes onto light [20; 21]. Very recently, it has been shown that disordered systems permit manipulation and tuning of the output statistics via deterministic and coherent control. 
Monochromatic coherent light traversing a disordered photonic medium evolved into a random field whose statistics has been dictated by the disorder level [22; 18]. Deterministic control over the photon-number distribution was demonstrated by interfering two coherent beams within a disordered photonic lattice, thus enabling the generation of super-thermal and sub-thermal light [23]. The generation of squeezed states of light [24] with \(g^{(2)}(0)\) below one and even single photon states by means of non-linear optics have been the next revolutionary steps and opened a huge field of applications in sensing [25; 26; 27; 28; 29; 30]. Optoelectronic semiconductor-based emitters are bridging these concepts. The method of "quiet pumping" pioneered by Y. Yamamoto can be understood as manipulating and creating a particularly interesting emission statistics of a semiconductor laser. By transferring the quiet, Coulomb-repulsion-regulated sub-Poissonian statistics of the injection current into sub-Poissonian statistics of the emitted photons, squeezed states of light emerge [31; 32]. In the current evolution of quantum information technologies, quantum dots (QDs) are being embedded in optical integrated chips. Semiconductor QDs have been identified as promising hardware for implementing the basic building blocks, e.g. stationary and flying qubits in the solid state [33; 34]. More recently, even so-called hybrid light has been discovered by Blazek et al. [35]. They investigated the emission properties of quantum dot superluminescent diodes (QDSLDs). At room temperature, they are light sources with an ultra-broadband emission spectrum (first-order coherence) and a second-order correlation coefficient \(g^{(2)}(0)=2.0\). The authors demonstrated experimentally, however, that the intensity fluctuations can be suppressed down to \(g^{(2)}(0)=1.33\) while tuning the temperature to 190 K. Such emission, being first-order incoherent (spectrally broad-band) and second-order coherent (towards that of a laser), might be called "silent white light" and is of particular interest for applications such as optical coherence tomography [36], fiber optic gyroscopy [37] and ghost imaging [38; 39]. Previous theoretical investigations of this behavior focused on the quantum nature of the diode material [40; 41; 42]. Here, we will present a quantitative model for the explanation of these observations [35]. It accounts for the self-consistent modification of the photon statistics caused by the nonlinear response of the QD gain medium [43], as well as thermally induced occupation of its energy levels [44; 45]. This insight will promote further developments, thus paving the avenue for novel, compact and fully integrated light sources. Radiation is generated in an active QD gain medium by amplified spontaneous emission (ASE). In contrast to a laser, the wave-guide medium is terminated with non-reflecting tilted end-facets, shown in Fig. 1 (a). Therefore, there is no feedback mechanism to create coherence. The equilibrium state of the radiation inside the diode is a balance between the nonlinear gain from the QD ensemble, its saturation behavior, the absorption from the passive medium and the emission output coupling of the diode. In the following, these processes are considered in detail for a thin transversal layer of QDs interacting with the radiation field.
According to the Maxwell-Bloch equations [46; 47], the slowly varying electric field amplitude \(\varepsilon(t)\) passing through a thin sheet \(\delta z\) of polarizable matter (see Fig. 1(b)) reads \[\varepsilon_{\mathrm{out}}(t)=\eta\varepsilon_{\mathrm{in}}(t)+\frac{ik\delta z}{2\varepsilon_{0}}\mathcal{P}^{(+)}(t). \tag{1}\] Here, \(0<\eta<1\) accounts for scattering losses, \(k\) is the carrier wave number of the electric field and \(\mathcal{P}^{(+)}\) is the positive frequency part of the polarization. QDs are the active agents embedded in the passive waveguide layers inside the diode. They are modeled as pumped three-level systems (see inset in Fig. 1(b)). The electric field drives the transition between levels \(|1\rangle\) and \(|2\rangle\) with Rabi frequency \(\Omega(t)=d_{21}\varepsilon(t)/\hbar\), where \(d_{21}\) is the dipole matrix element. The positive frequency part of the polarization \(\mathcal{P}^{(+)}=nd_{12}\rho_{21}\) scales with the density of QDs \(n\). In the rate equation limit [46], coherences \(\rho_{ij}\) decay much faster than populations \(\rho_{ii}\), which evolve as \[\dot{\rho}_{00}= -(R_{i}+\gamma_{10})\rho_{00}+R_{i}\rho_{22}, \tag{2}\] \[\dot{\rho}_{11}= \gamma_{10}\rho_{00}-(\gamma_{21}+\zeta)\rho_{11}+\zeta\rho_{22},\] \[\dot{\rho}_{22}= R_{i}\rho_{00}+(\gamma_{21}+\zeta)\rho_{11}-(R_{i}+\zeta)\rho_{22},\] where \(\zeta=|\Omega|^{2}\mathcal{L}/\gamma\), \(\gamma=\gamma_{21}+R_{i}\) and \(\mathcal{L}=(\gamma/2)^{2}/(\Delta^{2}+(\gamma/2)^{2})\). In this limit, the coherence \(\rho_{21}=-(i/2)\Omega\mathcal{D}w\) is instantaneously related to the inversion \(w=\rho_{11}-\rho_{22}\) and the stochastic field \(\varepsilon(t)\), which is modulated by the lineshape \(\mathcal{D}=1/(i\Delta+\gamma/2)\). Hence, \[\rho_{21}=-\frac{i}{2}\Omega\frac{w_{0}\mathcal{D}}{1+s}, \tag{3}\] with the unsaturated inversion \(w_{0}=(R_{i}(\gamma_{10}-\gamma_{21})-\gamma_{10}\gamma_{21})/(\gamma\gamma_{10}+2R_{i}\gamma_{21})\) and denoting the saturation parameter \(s=\mathcal{L}I/I_{s}\) as the ratio of intensity \(I=|\varepsilon|^{2}\) and the saturation intensity \(I_{s}=\hbar^{2}\gamma(\gamma\gamma_{10}+2R_{i}\gamma_{21})/|d_{21}|^{2}(3R_{i}+2\gamma_{10})\). Now, the input-output-relation from Equ. (1) reads \(\varepsilon_{\mathrm{out}}=(\eta+\alpha)\varepsilon_{\mathrm{in}}\) and depends on a nonlinear absorption coefficient \(\alpha=\kappa\gamma\mathcal{D}/(1+s)\), the nonlinear saturation parameter \(\kappa=\alpha_{0}k\delta z\) and the linear, resonant absorption coefficient \(\alpha_{0}=n|d_{21}|^{2}w_{0}/4\hbar\varepsilon_{0}\gamma\) [47]. In the limit of a thin sheet of length \(\delta z\), all terms exceeding linear order in \(\kappa\) can be neglected and the input-output relation for the intensities is obtained \[I_{\mathrm{out}}(I_{\mathrm{in}})=\left(\eta^{2}+\frac{4\eta\kappa}{1+s}\right)I_{\mathrm{in}}+\mathcal{O}[\delta z^{2}]. \tag{4}\] As a consequence of the broad THz-bandwidth of the first-order incoherent light, we can choose the detuning \(\Delta=0\). Figure 1: (a) QDSLD pumped by an electric injection current \(I_{p}\). Layers of wave guides with non-reflecting tilted end-facets and embedded QDs serve as a lossy, nonlinear gain material. (b) Stochastic electric field \(\varepsilon(t)\) propagating for a length \(\delta z\) through a transversal sheet of inverted three-level QDs with decay rates \(\gamma_{10},\gamma_{21}\), an internal pumping rate \(R_{i}\), a Rabi frequency \(\Omega\) and a detuning \(\Delta\).
Thus, only three parameters \(\eta\), \(\kappa\) and \(I_{s}\) remain unspecified in Equ. (4). Here, we start with an incoherent Gaussian photon statistics as the starting point for the statistics manipulation by the QD saturable medium and the QD level scheme. This implies an exponential probability density for the input intensity \(p(I_{\text{in}})\) [42] \[p(I_{\text{in}})=e^{-I_{\text{in}}/\bar{I}}/\bar{I},\qquad\qquad\langle I_{\text{in}}\rangle=\bar{I}, \tag{5}\] with an average intensity \(\bar{I}\). Thus, the \(n\)-th order moments of the output photon intensity are given by \[\langle I_{\text{out}}^{n}(I_{\text{in}})\rangle=\int_{0}^{\infty}\text{d}I_{\text{in}}\;I_{\text{out}}^{n}(I_{\text{in}})\,p(I_{\text{in}}). \tag{6}\] In equilibrium, gain is compensated by loss and defines a self-consistent relation for the intensity (4) \[\langle I_{\text{out}}\rangle=\langle I_{\text{in}}\rangle=\bar{I}(\eta). \tag{7}\] From this condition, we can determine the inaccessible loss rate \(\eta(\bar{I})\) in favor of the equilibrium intensity \(\bar{I}\). Now, we are able to evaluate the stationary, zero delay time (\(\tau=0\)) relative intensity-noise correlation function \[g^{(2)}(\tau=0,\bar{I})=\lim_{t\to\infty}\langle I_{\text{out}}(t)^{2}\rangle/\langle I_{\text{out}}\rangle^{2} \tag{8}\] as a measure for the intensity fluctuations. Serendipitously, one can evaluate this expression analytically in terms of \(u(\mathcal{I})=e^{\mathcal{I}}\Gamma(0,\mathcal{I})\), the incomplete Gamma function \(\Gamma(0,\mathcal{I})=\int_{\mathcal{I}}^{\infty}\text{d}t\;e^{-t}/t\) [48] and the relative intensity \(\mathcal{I}=I_{s}/\bar{I}\). Within the thin sheet limit \(\mathcal{O}[\delta z^{2}]\), one finds in good approximation \[g^{(2)}(0)=2-8\kappa\mathcal{I}\left[1+\mathcal{I}\left(1-2u(\mathcal{I})\right)-\mathcal{I}^{2}u(\mathcal{I})\right]. \tag{9}\] In Fig. 2, we depict this intensity correlation \(g^{(2)}(0)\) versus the internal QDSLD intensity \(\bar{I}\) for a chosen saturation intensity \(I_{s}\) and for various saturation parameters \(\kappa=0.1,0.35\), and \(0.56\), respectively. In general, \(g^{(2)}(0)\) shows a strong decrease with increasing \(\bar{I}\), which is stronger for higher \(\kappa\). However, we note that this is only the case if the medium is inverted, i. e. \(\kappa>0\). Thermal effects within the QD gain medium influence the carrier population and thus determine the key parameter of \(g^{(2)}(0)\) via the generated photon density or the intensity \(\bar{I}\). Its influence on the statistics via the emitted intensity is schematically summarized in Fig. 3. For the description of this temperature-dependent reduced second-order correlation coefficient, we consult a rate equation model that has been developed previously to describe the threshold currents' temperature dependence in strongly inhomogeneously broadened QD lasers, reflecting its radiative recombination processes [44; 45]. Thereby, we combine the two worlds of quantum optics and semiconductor quantum dots. The charge-carrier distribution in semiconductor QD materials depends on temperature, and the mean carrier occupation number for each energy level is obtained by averaging over the whole inhomogeneous dot ensemble. The ingredients of the model are the two confined QD levels, namely the GS (ground state) and the ES (excited state), and the so-called wetting layer, which provides the joint interaction medium for all QDs.
Their appropriate interaction is accounted for by relaxation rates, carrier escape processes via thermally activated escape, tunneling, and Auger processes. Finally, all states interact with a bosonic phonon bath. The outcome is the carrier distribution or the population densities entering directly into the radiative photon emission rates. Figure 3: Schematic depiction of the occupation of the QD levels as a function of temperature T for the explanation of the experimental measurement of peak power emitted by the diode from [35]. Figure 2: Central second-order coherence correlation coefficient \(g^{(2)}(0)\) as a function of mean intensity \(\bar{I}\), for a saturation intensity \(I_{s}=5\) and for three nonlinear saturation parameters \(\kappa\)= 0.1 (red, solid), 0.35 (blue, dash-dotted), 0.56 (green, dashed). At room temperature, high-energy phonons induce a global thermal equilibrium of the whole QD ensemble through interaction with the surrounding wetting layer (see Fig. 3). This thermally excites some of the carriers into higher energetic states, leaving some of the lower states unoccupied. Accordingly, the occupation is described by an equilibrium Fermi-Dirac distribution with a global Fermi level for all electron levels. When the temperature is reduced, the carriers are still uniformly distributed among the individual dots. However, thermal excitations freeze out, the nonradiative losses decrease, and charge-carrier condensation into the globally lowest energy state occurs. This maximizes the occupation numbers of the GS and ES transitions. At even lower temperatures, this common occupation statistics or global equilibrium collapses. The exchange of carriers between the individual dots breaks down and inside each dot all the energetically lowest states have the same population, irrespective of their energy. This characterizes a so-called random population. The resulting distribution is a non-equilibrium distribution with a "virtual" excitation spectrum obtained by averaging over the whole ensemble, thus reflecting more the energetically inhomogeneous dot distribution. This leads again to a decrease in radiative recombination accompanied by a small increase in linewidth. In essence, at around 190 K, a maximum in the radiative recombination occurs due to the occupation condensation into the globally lowest-lying state that is still described by a Fermi-Dirac distribution. This redistribution of carriers modifies the optical gain properties of the QDSLD that we investigate through temperature-resolved spectral analysis. The spectral peak power extracted from the maximum value of the optical spectra represents an easily accessible indicator for the spectral gain. The relative development of the peak power is shown in Fig. 4. In the weakly coupled thermal regime at 190 K, we find an increase in peak power compared to room temperature due to the condensation of charge carriers. The local maximum in peak power indicates a larger amplification, which in turn affects the photon emission process. At room temperature, the QDSLD emits amplified spontaneous emission in a delicate balance, where spontaneously emitted photons are amplified moderately. At 190 K, the maximum in the spectral gain increases the probability of stimulated emission such that the initial spontaneous emission experiences a stronger amplification.
These quasi-stimulated processes reduce the second-order intensity correlation coefficient \(g^{(2)}(0)\) and suppress intensity fluctuations [50; 51], thus realizing the exciting hybrid coherent photon states. The consequence of this behavior is a hierarchy in the contributing QD levels with a peak behavior of the emitted intensity as a function of temperature as illustrated by Fig. 4, which shows the experimental findings of the emitted intensity of the diode as a function of temperature. We can phenomenologically model the emitted power as a temperature-dependent Gaussian function with an offset \(\delta I\) \[\bar{I}(T)=\bar{I}e^{-(T-T_{0})^{2}/\sigma^{2}}+\delta I. \tag{10}\] The experimental data can be fitted well for \(\bar{I}=1.51\pm 0.13\), \(T_{0}=(197.1\pm 0.9)\) K, \(\sigma=(13.1\pm 1.0)\) K and \(\delta I=0.15\pm 0.03\). These thermal fitting parameters can be used to construct the temperature-dependent behaviour of \(g^{(2)}(0,\bar{I}(T))\) which is plotted in Fig. 5. As can be seen in Fig. 5, all nonlinear saturation parameter combinations \(\kappa\) show a suppression of intensity fluctuations around 190 K. With the parameters set to \(I_{s}=5\) and \(\kappa=0.35\) (blue), we achieve good agreement with the experimental data [35]. Figure 4: Mean output intensity \(\bar{I}\) versus temperature \(T\). Experimental data (red, [49]) and model (blue, (10)) yield a maximum intensity at around 190 K, implying an increase in diode efficiency at this temperature. Figure 5: Central degree of second-order coherence \(g^{(2)}(0,\bar{I}(T))\) versus temperature \(T\) for \(I_{s}=5\) and varying saturation parameter values \(\kappa\). For \(\kappa=0.35\) (blue, dash-dotted), we are able to match the experimental data [35]. This agreement deteriorates for \(\kappa=0.1\) (red, solid). Within the limits of the model, the intensity noise suppression could even reach \(g^{(2)}(0)=1.07\) for \(\kappa=0.56\) (green, dashed). The calculations using the fitted data from Fig. 4 do not reach a plateau of \(g^{(2)}(0)=2.0\) for high and low temperatures. This is due to the finite offset \(\delta I=0.15\pm 0.03\) of \(\bar{I}(T)\). With the saturation parameters set to \(I_{s}=5\) and \(\kappa=0.56\) (green), we are able to produce an intensity noise suppression below the experimentally reported value of \(g^{(2)}(0)=1.33\) with a minimum of about \(g^{(2)}(0)=1.07\). Having developed a good description of the experimentally observed \(g^{(2)}(0)\) reduction of hybrid light, we are now able to search even towards more reduction reaching eventually the Poissonian correlation limit of \(g^{(2)}=1\), still keeping the spectral broadband character. Adjusting our model parameters, we are able to show reductions of \(g^{(2)}(0)\) nearly down to a value of \(1.07\) for a \(\kappa=0.56\), very close to "real" Poissonian statistics, but now not for a laser but still for a broadband hybrid ASE light source. However, we admit that it is experimentally and technologically quite challenging to find appropriate QD level systems and QDSLD designs preventing stimulated modal emission, thus maintaining low first-order coherence, and avoiding a collapse of the spectral linewidth [52]. In conclusion, we have developed a quantum optical model for a thermally-tuned photon statistics transformation of broad-band THz-wide ASE radiation emitted from a quantum dot superluminescent diode from the thermal Bose-Einstein statistics towards Poissonian statistics, thus producing "silent white light".
The two ingredients, nonlinear gain saturation and an increased recombination determined by the temperature dependence of the hierarchy of the quantum dot occupation, allowed us to account for the experimentally observed findings using real-world parameters. These results and their insight will promote further developments, thus paving the avenue for novel, compact and fully integrated light sources emitting new tailored quantum states of light with on-demand tailored fluctuation properties, opening a huge field of applications in sensing. The authors have no conflict of interest to disclose. The data that support the findings of this study are available from the corresponding author upon reasonable request. We gratefully acknowledge supporting discussions with Dr. Martin Blazek. RW and KH are supported by the DLR German Aerospace Center with funds provided by the Federal Ministry for Economic Affairs and Energy (BMWi) under Grant No. 50WM2250E.
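As a simple numerical illustration of the model (not part of the original analysis), Eqs. (9) and (10) can be combined in a few lines of code. The identity \(\Gamma(0,x)=E_{1}(x)\) makes \(u(\mathcal{I})=e^{\mathcal{I}}\Gamma(0,\mathcal{I})\) directly available through SciPy's exp1; the fitted parameters are the values quoted above.

```python
# Numerical illustration of Eqs. (9)-(10): g2(0) versus temperature.
# Parameters are the fitted values quoted in the text; exact agreement with
# Figs. 2 and 5 is not guaranteed by this sketch.
import numpy as np
from scipy.special import exp1   # E_1(x) = Gamma(0, x)

def g2_zero(I_mean, kappa, I_s=5.0):
    """Second-order correlation g2(0) of Eq. (9) for a mean intensity I_mean."""
    rel = I_s / I_mean                       # relative intensity  I = I_s / Ibar
    u = np.exp(rel) * exp1(rel)              # u(I) = e^I * Gamma(0, I)
    bracket = 1.0 + rel * (1.0 - 2.0 * u) - rel**2 * u
    return 2.0 - 8.0 * kappa * rel * bracket

def I_mean_of_T(T, I0=1.51, T0=197.1, sigma=13.1, dI=0.15):
    """Gaussian peak-power model of Eq. (10)."""
    return I0 * np.exp(-((T - T0) ** 2) / sigma**2) + dI

T = np.linspace(100.0, 300.0, 201)
for kappa in (0.1, 0.35, 0.56):
    g2 = g2_zero(I_mean_of_T(T), kappa)
    print(f"kappa = {kappa}: minimal g2(0) ~ {g2.min():.2f} near T = {T[np.argmin(g2)]:.0f} K")
```

Running this sketch reproduces the qualitative dip of \(g^{(2)}(0)\) around 190 K and its ordering with \(\kappa\), including the fact that the finite offset \(\delta I\) prevents a full plateau at \(g^{(2)}(0)=2.0\) away from the peak.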
2304.13418
Rational Function Simplification for Integration-by-Parts Reduction and Beyond
We present FUEL (Fractional Universal Evaluation Library), a C++ library for performing rational function arithmetic with a flexible choice of third-party computer algebra systems as simplifiers. FUEL is an outgrowth of a C++ interface to Fermat which was originally part of the FIRE code for integration-by-parts (IBP) reduction for Feynman integrals, now promoted to be a standalone library and with access to simplifiers other than Fermat. We compare the performance of various simplifiers for standalone benchmark problems as well as IBP reduction runs with FIRE. A speedup of more than 10 times is achieved for an example IBP problem related to off-shell three-particle form factors in $\mathcal N=4$ super-Yang-Mills theory.
Kirill Mokrov, Alexander Smirnov, Mao Zeng
2023-04-26T09:58:11Z
http://arxiv.org/abs/2304.13418v2
# Rational Function Simplification for Integration-by-Parts Reduction and Beyond ###### Abstract We present FUEL (Fractional Universal Evaluation Library), a C++ library for performing rational function arithmetic with a flexible choice of third-party computer algebra systems as simplifiers. FUEL is an outgrowth of a C++ interface to Fermat which was originally part of the FIRE code for integration-by-parts (IBP) reduction for Feynman integrals, now promoted to be a standalone library and with access to simplifiers other than Fermat. We compare the performance of various simplifiers for standalone benchmark problems as well as IBP reduction runs with FIRE. ###### Contents * 1 Introduction * 2 Problem statement * 3 Simplifiers, input data, and connecting to FIRE * 3.1 Overview * 3.2 Method for connecting with the simplifier * 3.3 Connected simplifiers * 3.3.1 CoCoA * 3.3.2 Fermat * 3.3.3 Form * 3.3.4 GiNaC * 3.3.5 Macaulay2 * 3.3.6 Maple * 3.3.7 Maxima * 3.3.8 Nemo (with a custom parser) * 3.3.9 PARI / GP * 3.3.10 Wolfram Mathematica * 4 Main benchmark tests * 4.1 Testing method * 4.2 Description of test data * 4.3 Results * 4.3.1 Running time * 4.3.2 Memory usage * 5 Additional tests with low parsing overhead * 6 Tests with FIRE * 7 Conclusions ## 1 Introduction Many problems of high energy physics and quantum field theory are difficult to solve without using a computer. An example of such a problem is the calculation of Feynman integrals in complicated scattering amplitudes and correlation functions. For cutting-edge problems involving a huge number of Feynman integrals, the standard calculation workflow consists of two stages: integration-by-parts (IBP) reduction [1, 2] to so-called master integrals and finding the values of these master integrals. The problem of IBP reduction with the _Laporta algorithm_ [2] can be viewed as a problem of solving a huge system of sparse linear equations with polynomial coefficients. The coefficients generally become rational functions, i.e. fractions of polynomials, when solving the linear system via (variants of) Gaussian elimination. Because of the complex nature of the coefficients, they need to be stored in a special form, and most importantly, the coefficients need to be periodically simplified when solving the linear system. The simplifications include e.g. collecting similar terms in polynomials, writing sums of fractions as a single fraction with a common denominator, and simplifying the numerator and denominator by computing the polynomial greatest common divisor (GCD). Without the simplifications, arithmetic operations on the coefficients will take more and more time, and their storage will require more and more memory, eventually making performance unacceptable. In this paper we consider programs (either standalone programs or libraries), called _simplifiers_, which are used to perform all necessary simplifying transformations of rational function coefficients. The list of simplifiers considered in this paper is: CoCoA [3, 4], Fermat [5], FORM [6], GiNaC [7], Macaulay2 [8], Maple [9], Maxima [10], Nemo [11], PARI / GP [12], and Wolfram Mathematica [13]. These programs are compared for three different sets of input data: a large set of rational functions in one variable, a large set of rational functions in three variables, and a set of a few dozen huge rational functions whose lengths range from tens of thousands to several hundred thousand characters when printed.
The main performance indicators for comparison are the time spent on simplification and the amount of memory needed. At the end of the 20th century, the task of IBP reduction was done manually. Later, computer programs appeared that automated and speeded up this process. Some of the publicly available general-purpose programs are: FIRE [14, 15, 16], AIR [17], Reduze [18], LiteRed [19], and Kira [20, 21, 22]. The public version of FIRE was first published in 2014 and has been used by the scientific community to perform cutting-edge calculations, e.g. in Refs. [23, 24, 25]. Initially, Fermat was the only simplifier used by the C++ version of FIRE with the use of the gateToFermat library by M. Tentukov. In this work, several more simplifiers are connected to FIRE for the first time, through the standalone C++ library FUEL which provides access to the simplifiers. FIRE can run both on desktop computers and on specialized nodes with 32 or more computing cores and more than 1.5 TB of RAM, on 64-bit versions of the Linux operating system. Program running time and the required amount of RAM depend on the complexity of the task, and the running time can be up to several months for real-world tasks, of which up to 95% can be spent exclusively on the simplification of rational function coefficients when solving linear systems. In this regard, it is important to find the best programs for simplifying the coefficients, which would allow us to optimize this part of FIRE's performance. There are many other performance considerations relevant for IBP reduction computations with the Laporta algorithm while keeping analytic dependence on kinematic and spacetime dimension variables. Such considerations include e.g. the ordering of integrals and ordering of equations [20, 26], selection of IBP identities and Lorentz-invariance identities [27, 14], the use of reduction rules with abstract propagator powers (see e.g. [19, 28]), the choice of master integral bases that avoid spurious singularities [29, 30], block triangular form [31, 32], syzygy equations [33, 34, 35, 36, 26] and the related numerical unitarity method [37, 38, 39]. In this work, however, we focus exclusively on the simplification of rational functions in the process of solving linear systems. The FUEL library from this work is available from the following git repository: [https://bitbucket.org/feynmanIntegrals/fuel/src/main/](https://bitbucket.org/feynmanIntegrals/fuel/src/main/) ## 2 Problem statement The purpose of this work is to select and test existing third-party programs for simplifying rational functions, and develop a C++ library FUEL for accessing the simplification functionality. The third-party programs under consideration should be able to simplify complicated expressions and be compatible with the Linux operating system. The programs must be tested for the correctness of simplification. In order to achieve the goal, it is necessary to solve the following tasks: * Find programs, or _simplifiers_, that meet the requirements, and write the FUEL library for accessing the simplifiers from C++. * Test and compare the simplifiers in terms of rational function simplification performance, and select the best ones. * Connect FIRE with these simplifiers via FUEL to perform IBP reduction computations, and check if they work correctly. Caveat: our results **should not be** interpreted as a performance comparison between computer algebra systems for polynomial-oriented tasks.
Rather, we test the computer algebra systems for their overall performance on tasks similar to IBP reduction when interfacing with FIRE. In particular, efficient bi-directional transfer, i.e. **parsing, printing, and transferring** of expressions in a text format, is often an important performance bottleneck, and certain programs can be uncompetitive even when their inherent simplification speed is excellent. ## 3 Simplifiers, input data, and connecting to FIRE ### Overview In general, when solving a linear system with polynomial coefficients, e.g. by Gaussian elimination, an intermediate coefficient to be simplified is a sum of fractions, whose numerator and denominator can also contain fractions which, in turn, contain polynomials in their numerators and denominators, as written schematically in Eq. (1). \[\sum_{k}\frac{\frac{\frac{\text{Poly}_{k,1}}{\text{Poly}_{k,2}}}{\frac{\text{Poly}_{k,3}}{\text{Poly}_{k,4}}}}{\frac{\text{Poly}_{k,5}}{\text{Poly}_{k,6}}},\ \text{Poly}_{k,j}=\sum_{i}q_{i}\cdot x_{1}^{n_{i1}}\cdot x_{2}^{n_{i2}}\cdot\ldots\cdot x_{m}^{n_{im}},\ q_{i}\in\mathbb{Q},\ x_{h}\in X,\ n_{ih}\in\mathbb{Z}_{0}^{+}\,. \tag{1}\] A simplification is considered successful if the result is a polynomial or a fraction where the numerator and denominator are polynomials without a nontrivial GCD. The polynomials must be in either the expanded form or some nested form such as the Horner form. We do not consider other forms, such as factorized forms or partial-fractioned forms for polynomials and rational functions, within the scope of this study. There are three additional characteristics of the problem arising from IBP reduction by FIRE. First, rational numbers can be as large as desired, that is, the simplifier must support arbitrary-precision integer and rational number arithmetic. Second, the set of possible variables is known in advance (which are kinematic variables and the spacetime dimension variable). Third, the maximum level of nested fractions does not exceed three. The second and third points are important for simplifiers that require these parameters to be passed in advance before actual calculations. ### Method for connecting with the simplifier In order to connect the simplifier, the _fork-exec_ technique, popular on Unix systems, is used by FUEL to first create a copy of the current process, then run a new executable file in the context of the newly created process. If the simplifier is an executable file or comes with source code from which an executable file can be built, there is no need to do anything apart from downloading or compiling the simplifier before calling _exec_. If the simplifier is a library, we write wrapper code to use this library and then compile the code into an executable. This technique is as universal as possible, that is, it is applicable to almost any program written in any programming language. Communication with the spawned process is done through two specially created _pipes_, the first of which is used by the parent program (e.g. FIRE) to write messages to the simplifier, and the second of which is used to send messages in the opposite direction, both in the text format. These techniques make it possible to create a single universal interface for the simplifiers and separate their code from the main program which creates the rational function coefficients (e.g. from creating linear systems of IBP equations). The final procedure for connecting the new simplifier consists of the following steps: 1.
Determine which command-line arguments should be passed when _exec_ is called. 2. Communicate the rational function to be simplified to the simplifier in an appropriate syntax understood by the simplifier. This triggers the simplifier to parse the expression into an internal representation, simplify it, and print out the result, which is then sent back as the response. 3. Write the code for parsing the simplifier response. 4. Determine which commands to pass to the simplifier process in order for it to terminate correctly. The second and third items are the most time-consuming to implement, because they are unique to each of the pluggable simplifiers, and it was necessary for us to study their documentation, examples, and sometimes advice on the Internet. ### Connected simplifiers This is the complete list of simplifiers (and the languages they are written in) which can be accessed by the current version of FUEL: CoCoA (C++), Fermat (C), FORM (C), GiNaC (C++), Macaulay2 (C/C++, Macaulay2), Maple (C, Java, Maple), Maxima (Common Lisp), Nemo (C, Julia), PARI (C), Wolfram Mathematica (C/C++, Java, Wolfram Language). This list contains both open-source and proprietary software solutions. The issue of software licensing is an important concern, because it may affect, for example, the license under which a derived program can be distributed, the right to modify the code, and the right to distribute the code to third parties. Though we initially searched exclusively among open-source programs and libraries, we could not omit widely used proprietary computer algebra systems such as Maple and Wolfram Mathematica. A brief introduction to each simplifier is given below. #### 3.3.1 CoCoA CoCoA [3] is a computer algebra system for computing in polynomial rings. The development of the system began in 1987 in Pascal, hence its Pascal-like syntax; it was later rewritten in C, and a little later a C++ library, CoCoALib [4], appeared, and the latter library was used in our work. CoCoA allows one to perform calculations in rings of polynomials of many variables with rational or integer coefficients, as well as over ideals of these rings. The user can redefine the polynomial ring used as well as various homomorphisms for converting elements from one ring to another. According to CoCoA's authors, the Gröbner basis is used as the key mechanism for efficient computations in commutative algebra. In order for CoCoA to handle an expression of the form Eq. (1) passed to it, we need to specify, in the C++ constructor, a field that it belongs to. To do this, we specify the appropriate _ring of integers_, _ring of fractions_, and _ring of polynomials_, combining and substituting one into the other to get the desired field. Then it is possible to supply a string representation of the rational function and get a simplified representation from it. In this paper, we use the version CoCoA 0.99715. #### 3.3.2 Fermat Fermat [5] is a computer algebra system developed by Robert Lewis, with the goal of being fast and memory efficient, covering "arithmetic of arbitrarily long integers and fractions, multivariate polynomials, symbolic calculations, matrices over polynomial rings, graphics, and other numerical calculations". Fermat has influenced research in fast rational function arithmetic in computer algebra [40]. Until recently, Fermat was the only simplifier used by the C++ version of FIRE. Fermat is also the main simplifier in two other IBP reduction programs, Kira and Reduze.
In this paper, we use the version Fermat 5.17. #### 3.3.3 Form FORM [6] is a computer algebra system, which the authors themselves prefer to call a system for formula conversions. It has been in development since 1984, and its original goal was to simplify calculations in quantum field theory. FORM is written in C and accepts input in a special programming language, which is then interpreted and executed. In other words, FORM is not interactive but operates as a batch-processing program. FORM's language has many features: an advanced preprocessor with more than sixty commands; several types of variables (symbols, vectors, indices, functions, sets); more than a hundred commands controlling the execution, output and properties of variables; and more than eighty functions. Besides the regular version of the program, there are also two parallelized versions: ParFORM, which runs on a system with independent nodes, each using its own processor, memory and disk, and TFORM, which uses POSIX threads to better expose multiprocessor capabilities on shared-memory machines. FORM is distributed under the GNU GPL license. For some special cases it may be necessary to override the standard settings that control how FORM works, such as the maximum size of the substitution tree, the maximum size of a term that does not require additional allocations, the size of the I/O buffers, and other settings. In order to simplify a large expression, we need to redefine several settings, otherwise the program would stop due to insufficient memory. In this paper, we use the version FORM 4.2.1. #### 3.3.4 GiNaC GiNaC [7] is a C++ library for computer algebra, initially designed for Feynman diagram calculations. Contrary to many other computer algebra systems which come with their own proprietary interactive languages, GiNaC emphasizes programmatic use, extensibility and interoperability with other programs within a statically-typed compiled language (C++). Besides features commonly found in most systems, such as big integers and polynomial simplification, GiNaC offers functionalities useful for Feynman diagram calculations, such as handling of expressions involving Lorentz, Dirac, and color indices. In high energy physics research, GiNaC is perhaps most well known for its support for numerical evaluations of special functions known as multiple polylogarithms. A fork of GiNaC, PyNaC [41], was used as a core component of SageMath [42], a flagship open-source computer algebra system. In this paper, we use the version GiNaC 1.8.2. #### 3.3.5 Macaulay2 Macaulay2 [8] is a system for computation in algebraic geometry and commutative algebra, covering functionalities such as Groebner bases, free resolutions of modules, Betti numbers, primary decomposition of ideals, etc. Macaulay2 has been used in research in applying computational algebraic geometry to IBP reduction [43]. In this paper, we use the Macaulay2 version 1.19.1+ds-6. #### 3.3.6 Maple Maple is a general-purpose computer algebra system. It is widely used in high energy physics. For example, the IBP reduction program AIR [17] is written in Maple. Compared with its competitor Wolfram Mathematica, Maple offers a more conventional ALGOL/C-like programming language. Maple has seen continuous and active developments in high-performance algorithms relevant for polynomials and rational functions (see e.g. [44, 40, 45]).
We use the _normal_ function in Maple to simplify rational functions, with the option _expanded_ to prevent polynomials from being kept in factorized forms. In this paper, we use the version Maple 2023 on Machine I and Maple 2019 on Machine II. The versions are different due to licensing reasons, and any effects on benchmark results will be commented on. #### 3.3.7 Maxima Maxima [10] is a computer algebra system, a descendant of Macsyma, which allows many different operations on symbolic and numeric expressions. Maxima is self-described as a "fairly complete computer algebra system" and can be used to e.g. differentiate, integrate, compute Laplace transforms, and construct graphs. Maxima, like its ancestor, is written in Lisp. It is distributed under the GNU GPL license. SageMath [42] uses Maxima internally for certain nontrivial computations such as symbolic integration and taking limits. Maxima has rich functionality for simplifications; it offers several functions for the user to choose from: _rat_, _ratsimp_, _fullratsimp_, _radcan_, and many flags that affect how the functions work. Some functions are mainly intended to simplify rational expressions, while others can simplify expressions containing logarithmic, exponential and power functions. Some perform simplification once over the expression, while others do it until the resulting expression stops changing. In addition, several flags have been included to make it easier to parse the rational functions simplified by Maxima: _display2d:false_ disables 2D output, _stardisp:true_ makes multiplication signs explicit in the output, and _nolabels:true_ removes unnecessary I/O labels for entered and resulting expressions. In this paper, we use the version Maxima 5.45.1. #### 3.3.8 Nemo (with a custom parser) Nemo [11] is a computer algebra system for the Julia programming language [46], and it aims to "provide highly performant commutative algebra, number theory, group theory and discrete geometry routines." It provides a fast Julia interface to C/C++ libraries such as FLINT [47]. FLINT provides efficient operations for polynomials over a variety of number fields such as rational numbers and prime fields. Benefiting from the EU-funded OpenDreamKit project for open-source computer algebra, FLINT gained fast code for multivariate polynomials. Meanwhile, the Julia code in Nemo provides, among other functionalities, operations for rational functions that build upon polynomial operations of FLINT. In the current version of FUEL, we always use Nemo's sparse multivariate polynomials and associated rational functions. In the univariate case, specialized routines from Nemo and FLINT can be faster but have not been used in our work due to a lack of implementation effort on our side. A previous internal version of FUEL calls Nemo from a Julia REPL session (i.e. an interactive user session), and the performance was poor due to the overhead of parsing. The parser of the Julia REPL is designed to process arbitrary syntax in the Julia language and is relatively slow for our special purpose of parsing rational function expressions. Fortunately, taking advantage of Julia's JIT compilation, we are able to write a fast parser for mathematical expressions based on a variation of the well-known shunting-yard algorithm. The parser is included in a Julia package, _RationalCalculator_, bundled with FUEL.
Additionally, the aforementioned package supports printing out calculation results in a format that is not human-readable but instead optimized for transfer of expressions between the simplifier and FUEL. Human readable output can be re-enabled by calling a routine in FUEL, e.g. when FIRE writes out IBP reduction tables, so that the final IBP reduction table from the FIRE run is unaffected. In this paper, we use the versions Nemo 0.33.1 and Julia 1.8.5. #### 3.3.9 Pari / GP PARI / GP is a computer algebra system focused on number theory, developed at the University of Bordeaux. It can compute "factorizations, algebraic number theory, elliptic curves, modular forms, L functions", etc. [12]. PARI is a C library, while GP is the front-end that allows interactive use. We use GP as the expression simplifier. In this paper, we use the version GP 2.13.3. #### 3.3.10 Wolfram Mathematica Wolfram Mathematica is the most widely used general-purpose computer algebra system as of today, at least in theoretical high energy physics research. It was initially developed by Stephen Wolfram, with influences from Maxima and Wolfram's earlier system, SMP. Mathematica offers a high-level language emphasizing functional programming and term rewriting (called _Replacement_ in Mathematica). As of today, Mathematica encompasses a huge range of functionalities in symbolic and numerical computing. Many software packages and research data in high energy physics are published in Mathematica's formats. FIRE, even when used in the C++ mode for the main computation, uses Mathematica for pre-processing user-supplied integral family information and various post-processing tasks such as loading reduction tables and finding symmetry rules relating master integrals. We use the _Together_ function in Mathematica to simplify rational functions. In this paper, we use the version Mathematica 13.1 on Machine I and Mathematica 13.0 on Machine II. The versions are different due to licensing reasons but there are no significant effects on benchmark results. ## 4 Main benchmark tests ### Testing method All the simplifiers under consideration are used as sequential programs, i.e. the simplifier process does not process more than one rational function expressions simultaneously. If parallel evaluation is desired, the user should spawn the required number of simplifier processes and organize parallel sending and receiving of rational function expressions. For example, FIRE has one main execution thread and several additional threads (named FLAME) that communicate with simplifiers. In order to test how fast the simplifiers work, a special benchmark program was written. It reads rational function expressions from a data file into memory, then with the help of a pre-selected simplifier, processes them individually: sending each expression to the simplifier by a _pipe_, waiting for a response, receiving and parsing the response, and finally printing out the total time taken to simplify all expressions from the file. The initialization of the simplifier (creating a process, loading its context and, for some simplifiers, passing some configuration parameters) is done once at the very beginning and takes at most several seconds. Then the process can in principle run for many hours, so the initialization cost of the simplifier is not included in the benchmark results. Information about memory usage by the simplifier process was collected during all test runs. The utility program in Ref. 
[48] was used to collect the information, by monitoring the memory usage of the simplifier process every half second. In order to keep the results as general as possible, testing in all configurations was done on two machines whose main characteristics are tabulated below. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline No. & \multicolumn{3}{c|}{CPU} & RAM, GB \\ \cline{2-5} & Name & Base Freq. GHz & Boost Freq. GHz & \\ \hline I & AMD Ryzen 7 PRO 4750U & 1.7 & 4.1 & 16 \\ \hline II & AMD Ryzen 7 3750H & 2.3 & 4.0 & 24 \\ \hline \end{tabular} \end{table} Table 1: Characteristics of CPUs and RAM of the machines used for testing. ### Description of test data Testing was performed on three sets of input data. The rational function expressions from the first set have one variable, and the rational function expressions from the second set have three variables. The number of variables is important as it can significantly affect the running time of some simplifiers. The third set consists of oversized expressions; the expression size is also an important parameter that directly affects running time. Table 2 shows the main quantitative characteristics of the sets used: the minimum, maximum, and average length of expressions, and the number of expressions in the file. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline No. & Min. expr. length & Max. expr. length & Avg. expr. length & Number of coefs. \\ \hline 1 & 1 & 5’341 & \(\sim\)29 & 692’584 \\ \hline 2 & 1 & 2’133 & \(\sim\)33 & 971’330 \\ \hline 3 & 232’971 & 465’943 & \(\sim\)310’628 & 12 \\ \hline \end{tabular} \end{table} Table 2: Parameters of the sets of rational function expressions on which testing was conducted. ### Results #### 4.3.1 Running time The results obtained for the running times of the simplifiers, _as accessed from FUEL_, on different test sets on the two machines are presented in Table 3. All values are in seconds and are rounded to one decimal place. We stress again that the running time includes not only the simplification itself, but also parsing, printing, and pipe communications, so the results **should not be** considered as indicators of the inherent quality of the tested simplifiers, especially when the simplifiers are used in workflows different from ours. There are two ways to look at Table 3: to compare the running times of the same simplifier on different machines within one test set of expressions, or to compare the running times of different simplifiers on a particular machine and a particular test set. Let us first consider the results obtained on different machines. For almost all pairs of _simplifier_\(\times\)_expression set_, the running time on the first system is about 1.2 times less than on the second. Since this ratio of running time almost coincides everywhere, the conclusions obtained from the results of the runs should be largely transferable to any other machine. (The extra tests in Sections 5 and 6 are only run on Machine II.) Within each set, the simplifiers are divided into three groups, from the ones with the best performance to the ones with the worst performance: the first group includes those whose running times differ from the minimum on a given set by no more than five times, the second group includes those whose times differ by no more than 10 times, and the third group includes the rest. For clarity, the cells of Table 3 are colored according to this division: the first group in green, the second in yellow, and the third in red.
\begin{table} \begin{tabular}{|c||c|c||c|c||c|c|} \hline Set No. & \multicolumn{2}{c||}{1} & \multicolumn{2}{c||}{2} & \multicolumn{2}{c|}{3} \\ \hline Simplifier / Comp. No. & I & II & I & II & I & II \\ \hline CoCoA & 56.4 & 69.8 & 105.0 & 122.1 & 75.3 & 82.7 \\ \hline Fermat & 7.7 & 8.8 & 13.8 & 14.6 & 1.3 & 1.7 \\ \hline FORM & 48.3 & 63.3 & 81.8 & 105.7 & 1969.9 & 1902.9 \\ \hline GiNaC & 18.8 & 21.4 & 34.5 & 44.4 & 4.6 & 5.9 \\ \hline Macaulay2 & 74.0 & 83.1 & 217.9 & 225.2 & - & - \\ \hline Maple & 125.3 & 163.0 & 231.7 & 286.6 & 53.0 & 61.9 \\ \hline Maxima & 84.3 & 101.4 & 146.0 & 172.4 & 9.4 & 11.0 \\ \hline Nemo & 14.0 & 17.8 & 26.7 & 33.5 & 2.4 & 2.7 \\ \hline PARI / GP & 8.0 & 8.5 & 15.1 & 18.5 & 606.5 & 721.8 \\ \hline Wolfram Mathematica & 319.4 & 461.9 & 528.0 & 625.2 & 39.3 & 26.7 \\ \hline \end{tabular} \end{table} Table 3: Simplifier running times (in seconds) for each of the machines for three sets of input data. Let us now consider each of the test sets in more detail: 1. The first set is characterized by the fact that there is not more than one variable in each of the expressions. Based on performance in simplifying expressions from this test set, the first group includes Fermat, GiNaC, Nemo, and PARI / GP, the second group includes CoCoA and FORM, and the third group includes all others: Macaulay2, Maple, Maxima, and Wolfram Mathematica. 2. The second set differs from the previous one in that the expressions can now contain up to three variables. As can be seen from the results, the distribution across groups has not changed relative to the distribution for the first set. 3. The third set of expressions is very different from the previous two sets, in that the average expression length has increased by a factor of about 10000, although there is a smaller total number of expressions to keep the total running time manageable. We comment on the performance of a few simplifiers: CoCoA has moved from the second group to the third, showing a slight deterioration. FORM and PARI / GP have the poorest performance (as usual, with the caveat that this applies only to the workflow considered here). While in the previous sets FORM was about 6 times worse than the best simplifier, and PARI / GP was comparable to the best one, now FORM is worse than the best-performing simplifier by about 1515 times, and PARI / GP by about 466 times. It turns out that for these simplifiers, the expression length is a defining characteristic, and with its growth their speed rapidly degrades. We have not been able to test Macaulay2 on this set due to technical problems with pipe communications. For the third set, the division into groups is as follows: the first includes Fermat, GiNaC, and Nemo, the second includes Maxima, and the third includes CoCoA, FORM, Macaulay2, Maple, PARI / GP, and Wolfram Mathematica. In order to draw a conclusion based on these time measurements, we apply the following heuristic: if a simplifier is in the first group (i.e. the fastest group) for all the three test sets, then we will call it "best". If it is in the third group, that is, in the group with the slowest ones, then we will call it "bad". If neither is the case, we call it "good". It is important to stipulate that these labels should be understood only in conjunction with the phrase "for this class of tasks". According to the results of testing, the best simplifiers are Fermat, GiNaC, and Nemo, the group of good ones is empty, and the bad ones are CoCoA, FORM, Macaulay2, Maple, Maxima, PARI / GP, and Wolfram Mathematica.
When choosing a program or library for simplifying polynomial expressions, e.g. for IBP reduction computations with FIRE, you should first choose from the "best" programs; if for some reason none of them suits you, only then consider the others. It is also noted that two proprietary programs, Maple and Wolfram Mathematica, are grouped with the slowest simplifiers for all the three test sets. This suggests that they are unlikely to be good candidates for use as simplifiers in FIRE runs. However, it would be misleading to conclude that these two programs are inherently poor in simplifying rational functions in general, as they perform well in a different benchmark problem in Section 5 which mainly measures inherent simplification time with less overhead in other tasks like parsing text. #### 4.3.2 Memory usage During all test runs, we collected information on memory usage and compiled the information into various statistics, but only one statistic is important: the maximum memory usage, which determines whether or not the _out-of-memory killer_ of the operating system will terminate the simplifier. The results for maximum memory usage by each simplifier for each of the three sets of expressions are given in Table 4. There is no additional breakdown in the table for the first and second machines, since the results are the almost the same for them. All values are rounded to integers. \begin{table} \begin{tabular}{|c||c|c|c|} \hline Stat. & \multicolumn{3}{c|}{Max} \\ \hline SimplifierSet & 1 & 2 & 3 \\ \hline CoCoA & 2 & 2 & 6 \\ \hline Fermat & 21 & 21 & 22 \\ \hline FORM & 3 & 3 & 164 \\ \hline GiNaC & 4 & 4 & 12 \\ \hline Macaulay2 & 296 & 402 & - \\ \hline Maple & 20 & 20 & 34 \\ \hline Maxima & 598 & 599 & 599 \\ \hline Nemo & 457 & 442 & 393 \\ \hline PARI / GP & 6 & 6 & 56 \\ \hline Wolfram & 130 & 130 & 138 \\ Mathematica & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} Table 4: The maximum amount of used RAM by simplifiers when accessed via FUEL, in megabytes, for three sets of input expressions. For almost all simplifiers, the memory usage hardly depends on which set of expressions is being simplified. The exceptions are FORM and PARI / GP, whose memory consumption for the third set are significantly higher than for previous sets, though starting from a low base point. Based on the results from the table, you can see that different simplifiers can require from 2MB to 600MB of memory. Although these two figures differ by a factor of 300, the latter is not necessarily too large for practical use, because even standard laptops now have at least 8GB of RAM and server systems can have several hundred GB of memory. If expressions are to be simplified in parallel in your program, then several worker processes should be spawned. In this case, to estimate the amount of memory needed, the values in the table should be multiplied by the number of processes used. ## 5 Additional tests with low parsing overhead The CPU time consumed by a simplifier process consists of three parts: 1. Parsing the mathematical expression in a text format passed from an external program such as FIRE. 2. Simplifying the expression. 3. Printing out the expression. The "inherent performance" of the simplifier is the measured by the time spend on part (2) above, but this may not be the performance bottleneck depending on usage pattern. 
For example, part (1), the parsing process, can often be a bottleneck, considering that an IBP reduction run with FIRE can involve hundreds of thousands of expressions sent to the simplifier, some of which could be rather small and not inherently difficult to simplify. We will not carry out a full investigation of this issue. However, to shed some light on the impact of parsing performance, we present results from another set of test data, where the task is simplifying a single expression in 6 variables, \[\frac{(a+b+c+d+f+g)^{14}+3}{(2a+b+c+d+f+g)^{14}+4}-\frac{(3a+b+c+d+f+g)^{14}+5}{(4a+b+c+d+f+g)^{14}+6}\,. \tag{2}\] The expression is given in the test data file as the following line: ((a+b+c+d+f+g)^14+3)/((2*a+b+c+d+f+g)^14+4)-((3*a+b+c+d+f+g)^14+5)/((4*a+b+c+d+f+g)^14+6) The time taken to parse this short expression is negligible, but simplification of this expression, involving e.g. expanding polynomials and finding polynomial GCDs, is computationally demanding due to high powers of sub-expressions.1 The times taken by the benchmark runs on Machine II are in Table 5. \begin{table} \begin{tabular}{|c|c|} \hline Simplifier & Time (seconds) \\ \hline Nemo & 6.6 \\ \hline Maple (2019) & 60 \\ \hline Fermat & 93 \\ \hline Maxima & 110 \\ \hline Wolfram Mathematica & 165 \\ \hline CoCoA & 384 \\ \hline \end{tabular} \end{table} Table 5: Time taken, in seconds, by various simplifiers to run the test of this section. The numbers are rounded to the nearest integer, or to one decimal place if the time is less than 10 seconds. Only 6 simplifiers are shown in the table; the remaining ones, when accessed from FUEL, are not able to finish the test within 600 seconds. This test is drastically different from the tests in Section 4 since it artificially involves negligible parsing overhead, while the simplification itself is very demanding. (The test still includes the time taken for the simplifiers to print out the results, but printing usually has a smaller CPU footprint than parsing when large expressions are involved.) The results are also very different from those in Section 4. For example, Maple (version 2019 as used on Machine II) is now among the most performant programs in this test, either because it suffered from significant parsing overhead in previous tests or because Maple may have a relative advantage in simplifying very large rational expressions. Footnote 1: Note that such expressions do not arise from FIRE: even though high-degree expressions can be generated when solving IBP linear systems, the intermediate expressions are always simplified so that polynomials are in an expanded form or a nested form, and therefore there will be no explicit appearances of a single expression raised to a high power. ## 6 Tests with FIRE We run the double box IBP reduction example in FIRE6. This IBP problem is very simple by current standards and should be considered as a preliminary test, as the main focus of this work is presenting FUEL and standalone benchmark tests. The double box diagram is shown in Fig. 1. Figure 1: The double box diagram to be tested for IBP reduction in FIRE6 calling various different simplifiers. We reduce a rank-2 tensor integral with numerator \[(k_{2}+p_{1})^{2}(k_{1}-p_{3})^{2}\] using only one worker thread. The test is only run on Machine II. The statistics printed out at the end of FIRE runs are in Table 6. In addition to the "total time", we have separately shown the "substitution time", which can be the dominant part in some more complicated IBP reduction runs. In this test, the performance of the simplifiers relative to each other is very similar to the situation in test sets 1 and 2 in Section 4. Based on the data for test 3 in Section 4, it is likely that the situation can change dramatically for highly demanding IBP reduction problems. ## 7 Conclusions We have presented a new C++ library FUEL for simplifying rational function expressions, in light of ongoing efforts to improve the performance of integration-by-parts reduction for complicated Feynman integral calculations. As a standalone library, FUEL can also potentially find applications in other areas. Under a universal interface, FUEL allows a flexible choice of _simplifiers_, i.e.
existing computer algebra programs or libraries, as the underlying computation engine. FUEL grew out of FIRE's original interface to Fermat, the latter being a computer algebra system written by Robert Lewis. FUEL is based on inter-process communication over Linux pipes, sending text expressions to, and receiving them from, a third-party simplifier program. We have not yet explored alternative setups such as directly linking with third-party libraries without spawning child processes. Although not without performance costs, the current setup allows for maximum flexibility in connecting with any simplifier written in any programming language. Parallel computation is achieved by running multiple processes of the same simplifier (or even different simplifiers catering to different types of expressions, experimentally supported by FUEL). This setup has already been time-tested with the success of the FIRE program in Feynman integral computations. Good performance under this setup requires the simplifier to be fast in both the key task of simplifying mathematical expressions and overhead tasks such as parsing text inputs, which makes certain simplifiers (such as recent versions of Maple) uncompetitive for our purpose even when they have a reputation for fast manipulations of polynomials and rational functions. In the current version of FUEL, we have implemented connections with 10 different simplifiers, including CoCoA, Fermat, FORM, GiNaC, Macaulay2, Maple, Maxima, Nemo, PARI / GP, and Wolfram Mathematica. For Nemo, we have also written a dedicated Julia package (distributed with FUEL) to enable fast parsing of mathematical expressions. Artificial benchmark tests, which somewhat mimic the workload under FIRE runs, are presented in Section 4. Fermat is the fastest simplifier for all the three test sets, while Nemo and GiNaC are also consistently among the fastest simplifiers. PARI / GP is very fast for the first two sets of test data involving shorter expressions (which likely mimic less demanding FIRE runs), but performs very poorly in the third data set involving large expressions (which may mimic FIRE runs for very complicated integral families, e.g. high-loop non-planar ones). By running each test set on two different machines, we have found the relative performance between different simplifiers to be largely independent of the machine used. An additional special-purpose test is presented in Section 5. Compared with the main tests discussed above, this test minimizes the overhead of text parsing but is extremely computationally intensive in the simplification itself. Here Fermat has dropped to the third place in the ranking of the fast programs, led by Nemo and Maple. Both PARI / GP and GiNaC (among others) have failed to complete the test before the 10-minute timeout.
We plan to explore using more than one simplifier in a single C++ program, e.g. FIRE, given their different performance characteristics for different problems. \begin{table} \begin{tabular}{|c|c|c|} \hline Simplifier & Total time & Substitution time \\ \hline Fermat & 22.5 & 1.61 \\ \hline PARI / GP & 24.8 & 3.00 \\ \hline Nemo & 31.7 & 2.24 \\ \hline GiNaC & 43.8 & 5.8 \\ \hline FORM & 83.5 & 6.8 \\ \hline CoCoA & 92.3 & 5.8 \\ \hline Maxima & 130.0 & 6.3 \\ \hline Macaulay2 & 136.3 & 10.6 \\ \hline Maple (2019) & 197.2 & 8.0 \\ \hline Wolfram Mathematica & 605.0 & 22.2 \\ \hline \end{tabular} \end{table} Table 6: Performance of various simplifiers when used by FIRE to reduce a rank-2 tensor integral for the massless two-loop double box; times are in seconds. A private experimental version of FIRE has been linked with FUEL to perform IBP reduction of Feynman integrals. We have demonstrated a very simple IBP reduction example for the two-loop double box, which can be completed by FIRE with any of the 10 connected simplifiers. The time required by the run for each simplifier has been tabulated, and the results are broadly consistent with those from the simpler test sets in the artificial benchmarks of Section 4. We leave further improvements and applications to physics calculations to future work. ## Acknowledgments We thank the authors of FLINT, Nemo and FORM for help with our questions about the software in mailing lists and/or private communications. The work of Alexander Smirnov was supported by the Russian Science Foundation under the agreement No. 21-71-30003 (the possibility to use different simplifiers in the FIRE program) and by the Ministry of Education and Science of the Russian Federation as part of the program of the Moscow Center for Fundamental and Applied Mathematics under Agreement No. 075-15-2019-1621 (development of the FUEL library). M.Z.'s work is supported in part by the U.K. Royal Society through Grant URF\(\backslash\)R1\(\backslash\)20109. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
2308.13915
Break-Point Date Estimation for Nonstationary Autoregressive and Predictive Regression Models
In this article, we study the statistical and asymptotic properties of break-point estimators in nonstationary autoregressive and predictive regression models for testing the presence of a single structural break at an unknown location in the full sample. Moreover, we investigate aspects such as how the persistence properties of covariates and the location of the break-point affects the limiting distribution of the proposed break-point estimators.
Christis Katsouris
2023-08-26T16:31:37Z
http://arxiv.org/abs/2308.13915v1
# Break-Point Date Estimation for Nonstationary Autoregressive and Predictive Regression Models ###### Abstract In this article, we study the statistical properties and asymptotic behaviour of break-point estimators in nonstationary autoregressive and predictive regression models for testing the presence of a single structural break at an unknown location in the full sample. Moreover, we investigate aspects such as how the persistence properties of covariates and the location of the break-point affect the limiting distribution of the proposed break-point estimators. **Keywords:** nonstationary processes, persistence, local-to-unit root, structural break, break-point estimator, break magnitude, single break, multiple breaks, convergence rates, asymptotic distribution. JEL Classification: C12, C22 ## 1 Introduction Structural break inference is an important task whenever the robustness of econometric estimation is of concern. Although various existing methodologies in the time series econometrics literature focus on testing for the presence of parameter instability in regression coefficients, the estimation of the exact break-points and their statistical properties is more challenging, especially under the assumption of regressor nonstationarity. We develop point estimation and asymptotic theory for break-point estimators in nonstationary autoregressive and predictive regression models with nonstationary regressors. We first present the structural break test for the model coefficient of the predictive regression model based on the usual least squares estimate of the coefficient of the first-order autoregressive model as well as of the predictive regression model. Second, using the IVX instrumentation, which is found to be robust to the unknown degree of persistence (see, Kostakis et al. (2015)), we construct IVX-based structural break tests1 (see, Katsouris (2023a,c,d)) and the corresponding break-point estimators. Overall, the accurate dating of a structural break is of paramount importance, especially since predictive regression models are commonly used to detect the so-called "pockets of predictability". Thus, correctly identifying periods of predictability regardless of the presence of a structural break is crucial for asset pricing and risk management purposes. Footnote 1: Relevant asymptotic theory analysis for the Wald-type statistics we employ in this article is presented by Katsouris (2023a,d). Determining the asymptotic distribution of the break-point estimator in nonstationary regression models is a challenging task. Specifically, for the case of a shift with fixed magnitude it can be shown that the limiting distribution of the change-point estimator depends on the underlying distribution of the innovation in a complicated manner (see, Hinkley (1970)). The least squares estimation of the change-point in mean and variance in a linear time series regression model is obtained by Pitarakis (2004), although the nonstationary properties of regressors are not considered. Furthermore, Pitarakis (2012, 2014) develops suitable econometric frameworks for jointly testing the null hypothesis of no structural change in nonstationary time series regression models. More recently, Dalla et al. (2020) and Stark and Otto (2022) consider relevant aspects of structural break testing and dating in regression models under dependence.
The commonly used assumption is that the break occurs at an unknown time location within the full sample, such that \(t=\lfloor\tau T\rfloor\), where \(\tau\in(0,1)\), and these limits are also used when obtaining moment functionals. Specifically, predictive regressions with regressors generated via the local-to-unity parametrization have recently seen growing attention in the literature (see, Phillips and Magdalinos (2009), Gonzalo and Pitarakis (2012), Phillips (2014), Kostakis et al. (2015), Kasparis et al. (2015), Demetrescu et al. (2020) and Duffy and Kasparis (2021)). The main feature of these time series regression models is that an \(I(0)\) dependent variable (such as stock returns) is regressed against a persistent predictor (such as the dividend-price ratio), which allows the construction of predictability tests (see, also Zhu et al. (2014)). In statistical terms, the process \(\left(Y_{t},\mathbf{X}_{t-1}\right)_{t\in\mathbb{Z}}\), with a martingale difference sequence \(\xi_{t}=\left(u_{t},v_{t}\right)^{\prime}\) such that \(\left(Y_{t},\mathbf{X}_{t-1},\xi_{t}\right)_{t\in\mathbb{Z}}\) is generated by a predictive regression model in which \(\mathbf{X}_{t}\) is an autoregressive process expressed using the local-to-unity parametrization, allows us to examine the persistence properties of regressors across various regimes. On the other hand, these frameworks operate under the assumption of parameter constancy in the full sample. In this paper, we study the break-point estimators for structural break tests in predictive regression models with possibly nonstationary regressors. The stability of the autoregressive processes is determined by the local-to-unity parametrization. Specifically, we focus on autoregressive processes which are close to the unit boundary but have different orders of convergence, namely highly persistent regressors which are \(o_{P}\left(n^{-1/2}\right)\) and mildly integrated regressors which are \(o_{P}\left(n^{-\gamma/2}\right)\) for some \(\gamma\in(0,1)\) and a positive persistence coefficient \(c>0\). These features do complicate the asymptotic theory analysis to some extent, but their properties are useful for break-point estimation and dating in the aforementioned settings. For instance, conventional structural break tests for the parameters of linear regression models employ the widely used sup-Wald test proposed by Andrews (1993). However, the distributional theory of Andrews's test crucially depends on the strict stationarity assumption of regressors. In contrast to the literature that focuses on structural break testing in linear regressions, the predictive regression model is usually fitted to economic datasets which contain time series that are highly persistent. Therefore, within such an econometric environment the standard assumptions underpinning the traditional law of large numbers and central limit theorem for linear regression models can be invalidated, which affects the large sample approximations. As a result, distorted inference can occur when testing for parameter instability in predictive regressions when these features are not accommodated in the asymptotic theory of the tests. Our first objective is to theoretically demonstrate the impact of the presence of the nuisance parameter of persistence on the limiting distribution of test statistics and break-point estimators.
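For concreteness, a schematic single-regressor statement of the data generating process described above, in the notation used throughout the paper, is \[y_{t}=\alpha+\beta x_{t-1}+u_{t},\qquad x_{t}=\left(1-\frac{c}{n^{\gamma}}\right)x_{t-1}+v_{t},\qquad c>0,\] where \(\gamma=1\) corresponds to the nearly integrated (local-to-unity) specification, \(\gamma\in(0,1)\) to the mildly integrated one, and \(\xi_{t}=(u_{t},v_{t})^{\prime}\) is the martingale difference sequence introduced above.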
Specifically, the limit result obtained by Andrews (1993) implies the use of the supremum functional on the Brownian bridge process defined as \(\sup_{s\in[0,1]}\big[W_{n}(s)-sW_{n}(1)\big]\). Under weak convergence, we have process convergence to a Brownian bridge constructed from a standard Brownian motion, that is, a pivotal process, hence enabling the practitioner to use standard tabulated critical values. Conveniently, Katsouris (2023a,d) show that the OLS-based sup-Wald statistic in the presence of nearly integrated regressors weakly converges to the standard NBB limit; however, in the case of regressors with high persistence the same limit is no longer valid. As a result, this complicates the asymptotic theory analysis of break-point estimators, especially since these nonstationary features are usually not a priori known. However, before proceeding to the limit behaviour of break-point estimators, we discuss an alternative estimation procedure which is found to be robust to the unknown persistence properties and can be applied to structural break testing while producing similar asymptotic behaviour. In this direction, one can employ the IVX instrumental variable estimation approach proposed by Phillips and Magdalinos (2009) and extensively examined by Kostakis et al. (2015) in the context of predictability tests and by Gonzalo and Pitarakis (2012) in the context of predictability tests for threshold predictive regression models2. Footnote 2: More specifically, the limit theory of Kostakis et al. (2015) provides a unified framework for robust inference and testing regardless of the persistence properties of regressors. A simple example is the application of the IVX-Wald test. Thus, we study the statistical inference problem of structural break testing at an unknown break-point; therefore, the supremum functional is implemented and two test statistics are considered based on two different parameter estimation methods. The first estimation method considers the OLS estimator, while the second method considers an instrumental variable based estimator, namely the IVX estimator proposed by Phillips and Magdalinos (2009). These two estimators of the predictive regression model have different finite-sample and asymptotic properties, which allows us to compare the limiting distributions of the proposed tests for the two different types of persistence of the regressors. For both test statistics we assume that the regressors included in the model are permitted to be of only one of the two persistence types, which simplifies the asymptotic theory of the tests; however, the presence of nuisance parameters under the null of parameter constancy, that is, the unknown break-fraction and the coefficient of persistence, requires careful examination of the asymptotic theory. An additional caveat is the inclusion of an intercept in the predictive regression model, which induces different limiting distributions when it is assumed to be stable vis-a-vis the case in which it is permitted to shift. In summary, we are interested in examining the consistency and convergence rates of the break-point estimators that correspond to these two estimation methodologies for the coefficients of the predictive regression model. The asymptotic theory of the present paper holds due to the invariance principle for the partial sum process of \(x_{t}\), where \(x_{t}=\left(1-\frac{c}{n^{\gamma}}\right)x_{t-1}+v_{t}\), as proposed by Phillips (1987), which is considered to be the building block for related limit results when considering time series models.
We denote with \(\hat{\mathcal{U}}_{n}(s):=\frac{1}{\sqrt{n}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t}\), for some \(s\in[0,1]\), and with \(\hat{\mathcal{U}}_{n^{\gamma}}(s):=\frac{1}{n^{\gamma/2}}\sum_{t=1}^{\lfloor n^{\gamma}s\rfloor}x_{t}\), for some \(s\in[0,1]\) and \(0<\gamma<1\), the partial sum processes of \(x_{t}\) associated with the relevant invariance principles; the latter corresponds to the case of mildly integrated processes, with the limit proposed by Phillips and Magdalinos (2005). Motivated by the aforementioned seminal work, as well as the framework of Phillips and Magdalinos (2009), in this paper we similarly use the invariance principles of the partial sum processes that correspond to the instrumental variable, IVX, proposed by Phillips and Magdalinos (2009), specifically within a structural break testing framework. These results allow us to formally obtain the limiting distributions of the proposed tests with respect to the nuisance parameter of persistence along with the unknown break-point location, and to observe in which cases we obtain nuisance-free inference that can significantly simplify the hypothesis testing procedure. All random elements are defined on a probability space, denoted by \((\Omega,\mathcal{F},\mathbb{P})\). All limits are taken as \(n\rightarrow\infty\), where \(n\) is the sample size. The symbol "\(\Rightarrow\)" is used to denote the weak convergence of the associated probability measures as \(n\rightarrow\infty\). The symbol \(\overset{d}{\rightarrow}\) denotes convergence in distribution and \(\overset{\text{plim}}{\rightarrow}\) denotes convergence in probability, within the probability space. Let \(\left\{Y_{t},\boldsymbol{X}_{t}\right\}_{t=1}^{n}\), where \(\boldsymbol{X}_{t}=\left(X_{1t},...,X_{pt}\right)\), denote the corresponding random variables of the underlying joint distribution function. The rest of the paper is organized as follows. Section 2 presents the asymptotic theory for break-point detection in AR(1) models. Section 3 considers the asymptotic theory for break-point detection in predictive regressions. ### Contributions to the literature Our study contributes to the time series econometrics literature in several ways. Firstly, we propose a set of Wald type statistics for detecting parameter instability in predictive regression models which are robust to the persistence properties of regressors. We derive analytical forms of the limit distributions of these test statistics and show under which conditions these limit results correspond to the conventional NBB result, thereby providing a clear estimation and inference strategy to practitioners interested in implementing a structural break test in predictive regression models with possibly nonstationary regressors. Therefore, the significance of the proposed framework in the broader econometric literature includes the provision of detailed asymptotic results for these structural break tests as well as the necessary data transformations and functional forms which can be implemented so that statistical inference can be simplified. Moreover, the construction of structural break tests in predictive regression models would not have been possible without first considering the asymptotic behaviour of estimators of the predictive regression model. Secondly, in a similar spirit as in Saikkonen et al. (2006), we aim to investigate the properties of estimators of the time period where a shift has taken place.
In particular, under the assumption of a possible single structural break, we point identify the shift in the model coefficients of the first-order autoregressive and predictive regression models. Moreover, two alternative estimators for the break date are considered, and their asymptotic properties are derived under various assumptions regarding the local alternatives, under which the size of the shift is considered. These results have various further applications, as they can then be used to explore the implications of inference in predictive regression models after the estimation of breaks. Lastly, we aim to perform a more detailed and more insightful investigation of the small-sample properties of the break date estimators and the resulting structural break tests by extending the simulation design and empirical findings of Katsouris (2023a). Notice that the break-point date estimator, denoted with \(k=\uptau n\), where \(\uptau\in(0,1)\), needs to be estimated using a criterion function. However, the estimation approach of the econometric model will also consequently affect the statistical properties of the corresponding break-point estimator. Rearranging the break-point estimators gives the following expression \[\hat{\uptau}=\frac{\hat{k}}{n},\quad\text{where}\quad(\hat{\uptau}-\uptau)=o_{p}(1).\] A relevant asymptotic theory question of interest is under which conditions the above convergence in probability holds. Roughly speaking, the main idea of the framework here is that asymptotically the break date can then be located at the true break date or within a neighbourhood of the true break date. Therefore, one will need to carefully consider the required assumptions that provide identification conditions for the break date. In addition, we will need to consider possible aspects of the structural break testing environment under regressor nonstationarity that can potentially make the break date estimator \(\hat{\uptau}\) inconsistent, resulting in an incorrect estimation of the break date for the modelling environment under consideration. In terms of the stochastic integral approximations we consider in this paper, the relevant literature includes Phillips (1987a,b, 1988) as well as Kurtz and Protter (1991) and Hansen (1992), who present various examples of weak convergence of stochastic integrals and stochastic differential equations upon which the limit theory of this paper is based. The related limit theory proves that the continuous time OU diffusion process given by \[dJ(t)=cJ(t)dt+\sigma dW(t),\quad J(0)=b,\;t>0 \tag{1.1}\] where \(c\) and \(\sigma>0\) are unknown parameters and \(W_{t}\) is the standard Wiener process, has a unique solution \(\{J(t)\}_{t\in[0,1]}\), given by \(J(t)=\mathsf{exp}\left(ct\right)b+\sigma\int_{0}^{t}\mathsf{exp}\left\{c(t-s)\right\}dW(s)\equiv\exp\left(ct\right)b+\sigma J_{c}\left(t\right)\) (see, Perron (1991)). This representation provides a way of determining the asymptotic convergence of each of the components into which the expression of the Wald-type statistic can be decomposed. ### Related Literature When the break-point is known one can apply a Chow-type test statistic. In particular, Sun and Wang (2022) investigate the asymptotic distribution of a Chow-based test in the presence of heteroscedasticity and autocorrelation. However, when the break-point is unknown, the estimation procedure for break-point date estimation is usually based on the optimization of a criterion function.
Moreover, the presence of a possible single break versus multiple breaks will require modifications to the formulation of the relevant hypotheses, the criterion function, as well as the asymptotic theory analysis. Specifically, in the literature of structural break testing for linear regression models, Bai (1997) proposes a statistical procedure for detecting multiple breaks sequentially (one-by-one) rather than using a simultaneous estimation approach for multiple breaks. In summary, while Bai (1997) proposed a sample-splitting method to estimate the breaks one at a time by minimizing the residual sum of squares, Bai and Perron (1998) proposed to estimate the breaks simultaneously by minimizing the residual sum of squares. The advantages of the former method lie in its computational savings and its robustness to misspecification in the number of breaks. A number of issues arise in the presence of multiple breaks; more precisely, the determination of the number of breaks, the estimation of the break points given this number, as well as the statistical analysis of the resulting estimators. Simultaneous and sequential methods are fundamentally different methodologies that yield different break-point estimators. However, one of the drawbacks of sequential break-point algorithms is the complexity in deriving convergence rates of the estimated break-point. In particular, Bai (1997) demonstrates that sequentially obtained break-point estimates are \(n-\)consistent, which corresponds to the same rate as in the case of simultaneous estimation3. However, these features have not been investigated in the case of predictive regression models with possibly nonstationary regressors or regressors with mixed integration orders. In this article, we aim to provide some insights into some of the relevant cases (not all of the nonstationary regimes). Within our framework, the first stage of the procedure implies testing for the possible presence of a structural break under high persistence, that is, the regressors of the autoregressive and the predictive regression models are parametrized using the local-to-unity parameter, which induces a nearly-integrated process. Furthermore, the accuracy of the estimator depends on whether the break-point estimator is bounded. Thus, the robustness of the break-point estimator can be verified by its ability to be as close as possible to the true break-point within the full sample. From the unit root and structural break literature perspective, relevant testing methodologies include Saikkonen et al. (2006) as well as the single-break homoscedasticity-based persistence change tests proposed by Harvey et al. (2006). Testing for multiple break-points is discussed in the studies of Lumsdaine and Papell (1997) and Carrion-i Silvestre et al. (2009). Moreover, Kejriwal and Perron (2008) study estimation and inference in cointegrated regression models with multiple structural changes, allowing both stationary and integrated regressors, and derive the consistency, rate of convergence and limit distribution of the estimated break fractions. If the coefficients of the integrated regressors are allowed to change, the estimated break fractions are asymptotically dependent, so that confidence intervals need to be constructed jointly. Recently, Kejriwal et al. (2020) propose bootstrap procedures for detecting multiple persistence shifts in heteroscedastic time series such as cointegrating regressions. The bootstrap procedure proposed by Kejriwal et al.
(2020) for detecting multiple breaks as an example here is as below: 1. If the null hypothesis is not rejected at the desired level of significance, stop the procedure and conclude there is no evidence of instability. Otherwise, obtain the break date estimate \(\hat{\lambda}\) by minimizing the sum of squared residuals and proceed to the following step. 2. Conduct an F-test using chi-squared critical values for the equality of the coefficients across regimes on the subset of coefficients of interest allowing the others to change at the estimated breakpoint. Upon a rejection, conclude in favor of a structural change in the subvector of interest, otherwise the stability cannot be rejected. The asymptotic validity of the two-step procedure follows from _(i)_ the test in the first step is asymptotically pivotal under the null and consistent against alternatives involving a change in at least one parameter and _(ii)_ the break fraction is consistently estimated as long as any of the parameters are subject to a break. In particular, the second fact ensures that the F-test in the second step converges to a chi-square distribution under the null hypothesis of no structural change in the subvector of interest. More precisely, this result follows since the estimate of the break fraction is fast enough to ensure that the limiting distribution of the parameter estimate is the same that would prevail if the break date was known. In summary, the particular framework reveals that existing partial break sup-Wald tests diverge with \(n\) when the coefficients are not being tested are subject to change. Thus, Kejriwal et al. (2020) propose a simple two-step procedure which first tests for joint parameter stability and subsequently conducts a standard chi-squared stability test on the coefficients of interest allowing the other coefficients to change at the breakpoints estimated by minimizing the sum of squared residuals in the pure structural change model. The procedure proposed by Kejriwal et al. (2020) estimates the number of breaks using a sequential test of the null hypothesis of (\(\ell\geq 1\)) breaks against the alternative of \((\ell+1)\) breaks. The particular approach is useful especially in cases when the null of no break could be rejected against the alternative hypothesis of at least one break. Lastly, Casini and Perron (2022) propose a generalized laplace inference in multiple change-points models framework, although not suitable for detecting multiple-breaks in cointegrating and predictive regression models. Generally speaking, estimation and inference procedures of econometric models that do not account for such data-driven selection of nuisance parameters such as an unknown structural break or an unknown threshold variable can perform poorly when used for empirical studies (Andrews et al., 2021) (see, also Gonzalo and Pitarakis (2012, 2017)). In this direction, Zhu et al. (2022) propose a framework which considers the possible presence of both a structural break and a threshold effect in predictive regression models, robust to the degree of persistence. In the self-normalization literature two relevant studies include Choi and Shin (2022) and Zhang and Lavitas (2018). Then, to correct for serial dependence, it often requires a consistent estimator of the long-run variance, namely, the spectral density at zero frequency. The particular quantity involves autocovariances of all orders, and a data-driven bandwidth is usually needed for its estimator to be adaptive to the underlying dependence. 
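For reference, the long-run variance referred to here is the standard quantity (a textbook definition rather than anything specific to the papers just cited): for a covariance-stationary series with autocovariance function \(\gamma(j)\), \[\omega^{2}=\sum_{j=-\infty}^{\infty}\gamma(j)=2\pi f(0),\qquad\hat{\omega}^{2}=\hat{\gamma}(0)+2\sum_{j=1}^{n-1}k\!\left(\frac{j}{b_{n}}\right)\hat{\gamma}(j),\] where \(f(0)\) is the spectral density at frequency zero, \(k(\cdot)\) is a kernel (e.g. Bartlett) and \(b_{n}\) is the data-driven bandwidth mentioned above. Self-normalization replaces \(\hat{\omega}^{2}\) by a functional of recursive estimators, thereby avoiding the bandwidth choice altogether.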
Specifically, Shao and Zhang (2010) use the self-normalization approach for single change-point testing in time series. Note that the self-normalization approach implies that, instead of resorting to a consistent estimator of the long-run variance, one relies on a sequence of recursive estimators to form the normalizer and in turn pivotalize the asymptotic distribution of the test statistic. A framework for single structural break testing based on the self-normalization approach is given by Ling (2007). Lastly, Andrews et al. (2021) study methodologies for inference after estimation of structural breaks (see, also Fiteni (2002) and Busetti and Taylor (2003)). ### Illustrative Examples We present some illustrative examples relevant to the modelling environment under consideration. **Example 1**.: Consider the following predictive regression model \[y_{t}=(\alpha_{1}+\boldsymbol{\beta}_{1}x_{t-1})\mathbf{1}\left\{t\leq k\right\}+(\alpha_{2}+\boldsymbol{\beta}_{2}x_{t-1})\mathbf{1}\left\{t>k\right\}+u_{t} \tag{1.2}\] Consider testing the joint null hypothesis4 \(H_{0}:\boldsymbol{\beta}_{1}=\boldsymbol{\beta}_{2}=\mathbf{0}\). Under the null hypothesis, the predictive regression model reduces to a change-in-mean model as below (see, Katsouris (2022)): Footnote 4: Note that we can also test predictability in the pre-break and post-break subsamples, using \(H_{0}:\boldsymbol{\beta}_{1}=0\) and \(H_{0}:\boldsymbol{\beta}_{2}=0\), respectively. \[y_{t}=\alpha_{1}\mathbf{1}\left\{t\leq k\right\}+\alpha_{2}\mathbf{1}\left\{t>k\right\}+u_{t} \tag{1.3}\] **Assumption 1**.: The following conditions hold: 1. The magnitude of the level shift can be expressed as \(|\alpha_{2}-\alpha_{1}|=n^{-1/2}\delta_{n}\) for a sequence \(\delta_{n}\), which is a function of the sample size \(n\), such that \(\delta_{n}=\mathcal{O}(n^{s})\), for some \(s\in(0,1/2]\). 2. Suppose that \(k=\lfloor\uptau n\rfloor\), where \(\uptau\in\Pi:=(\pi_{0},1-\pi_{0})\) for some \(\pi_{0}\in(0,1/4)\). Under the above (identification) assumption, the magnitude of the level shift is either independent of the sample size or shrinks to zero at a rate slower than \(n^{-1/2}\). The break-fraction can be consistently estimated regardless of whether \(x_{t}\) is stationary or nearly integrated under the null hypothesis (see, also Kejriwal and Perron (2008)). **Example 2** (see, Casini and Perron (2022)).: Some relevant results include the following. For \(r\in[0,1]\), it holds that \[\frac{1}{\sqrt{n_{b}^{0}}}\sum_{t=1}^{\lfloor rn_{b}^{0}\rfloor}z_{t}u_{t}\Rightarrow\mathcal{G}_{1}(r),\quad\frac{1}{\sqrt{(n-n_{b}^{0})}}\sum_{t=n_{b}^{0}+1}^{n_{b}^{0}+\lfloor r\left(n-n_{b}^{0}\right)\rfloor}z_{t}u_{t}\Rightarrow\mathcal{G}_{2}(r) \tag{1.4}\] where \(\mathcal{G}_{i}(\cdot)\) is a multivariate Gaussian process on \([0,1]\) with zero mean and covariance such that \(\mathbb{E}\big[\mathcal{G}_{i}(u)\mathcal{G}_{i}(s)\big]=\min\left\{u,s\right\}\times\Sigma_{i}\), where \(i\in\{1,2\}\).
\[\Sigma_{1}=\lim_{n\to\infty}\ \mathbb{E}\left[\frac{1}{\sqrt{n_{b}^{0}}}\sum_{t=1}^{\lfloor rn_{b}^{0}\rfloor}z_{t}u_{t}\right]^{2},\qquad\Sigma_{2}=\lim_{n\to\infty}\ \mathbb{E}\left[\frac{1}{\sqrt{(n-n_{b}^{0})}}\sum_{t=n_{b}^{0}+1}^{n_{b}^{0}+\lfloor r\left(n-n_{b}^{0}\right)\rfloor}z_{t}u_{t}\right]^{2} \tag{1.5}\] Moreover, for any \(0<r_{0}<1\): if \(r_{0}<\uptau_{0}\), then \(\frac{1}{n}\sum_{t=\lfloor r_{0}n\rfloor+1}^{\lfloor\uptau_{0}n\rfloor}z_{t}z_{t}^{\prime}\stackrel{p}{\to}(\uptau_{0}-r_{0})V_{1}\), and if \(\uptau_{0}<r_{0}\), then \(\frac{1}{n}\sum_{t=\lfloor\uptau_{0}n\rfloor+1}^{\lfloor r_{0}n\rfloor}z_{t}z_{t}^{\prime}\stackrel{p}{\to}(r_{0}-\uptau_{0})V_{2}\). A suitable stopping rule used for change-point detection relies either on thresholding or on the optimization of a model selection criterion. Various methods for multiple change-point detection exist in the literature; these include, for example, dynamic programming methods for detecting multiple change points in the exponential family of distributions. According to Casini and Perron (2022), for multiple break-points, say \(m\) change-points, the inference framework can be constructed by indexing the regimes with \(j\in\{1,...,m+1\}\), where by convention \(n_{0}^{0}=0\) and \(n_{m+1}^{0}=n\). In the multiple break-point setting, with break dates \(\left(n_{1}^{0},...,n_{m}^{0}\right)\), there are \((m+1)\) regimes, each corresponding to a distinct parameter value \(\delta_{j}^{0}\), which needs to be estimated. Therefore, when the full sample has \(n\) available observations, the aim is to simultaneously estimate the unknown regression coefficients together with the break points. Moreover, the asymptotic theory analysis for the multiple break-points case follows directly from the single break case. However, especially for nonstationary time series regressions, the existence of multiple break points might be problematic due to possible changes in the persistence properties of regressors from one regime to the other. Nevertheless, the procedure and estimation criterion proposed by Casini and Perron (2022) can be employed to identify these multiple structural breaks in predictive regression models as well (see, Barigozzi et al. (2018)). Therefore, the class of estimators for inference in multiple change-points regressions relies on a certain criterion function (see, Casini and Perron (2022)) \[Q_{n}\big(\delta(\tau_{b}),\tau_{b}\big)=\sum_{i=1}^{m+1}\sum_{t=n_{i-1}+1}^{n_{i}}\big(y_{t}-f(x_{t-1})\big)^{2} \tag{1.6}\] As a result, in order to establish the large-sample properties of break-point estimators and test statistics, in a similar spirit as in Casini and Perron (2022), we can consider the shrinkage theoretical framework of Bai and Perron (1998) and Qu and Perron (2007) (see, also Lavielle and Moulines (2000) and Nkurunziza (2020)). On the other hand, necessary modifications of the asymptotic theory analysis are required in order to incorporate the features of the local-to-unity parametrization as well as the use of the proposed sup-Wald type statistics based on the OLS and the IVX estimation. ## 2 Structural Change in First Order Autoregressive Models Break point estimation is an important component in change-point detection problems.
In order to obtain some useful insights, we begin by considering the statistical properties that correspond to structural break tests and break-point estimators for an AR(1) autoregressive model, where the local-to-unity parametrization is omitted, thereby rendering a stationary AR(1) model under suitable parameter space restrictions. We follow the study of Pang et al. (2021). **Assumption 2**.: We assume that it holds that \[\mathbb{E}\left(u_{t}|\mathcal{F}_{t-1}\right)=0,\quad\mathbb{E}\left(u_{t}^{2}|\mathcal{F}_{t-1}\right)=1,\text{ {almost surely}}. \tag{2.1}\] **Remark 1**.: Notice that the martingale difference assumption does not allow for conditional heteroscedasticity, while it is weaker than assuming that the error sequence \(\varepsilon_{t}\) is independent. Moreover, we are interested in deriving the convergence rates of the estimates under different econometric conditions. For a linear regression model, a relevant framework is given by Shimizu (2023) (see, also the study of Self and Liang (1987), who investigate the asymptotic properties of MLE estimators and likelihood ratio tests under nonstandard conditions). **Corollary 1**.: Given the LUR parametrization of the autoregressive equation in a predictive regression model, the following joint convergence result holds: \[\left(\frac{1}{n}\sum_{t=1}^{\lfloor nr\rfloor}x_{t-1}u_{t},\frac{1}{n^{2}}\sum_{t=1}^{\lfloor nr\rfloor}x_{t-1}^{2}\right)\Rightarrow\left(\int_{0}^{r}J_{c}(s)dB_{u}(s),\int_{0}^{r}J_{c}(s)^{2}ds\right). \tag{2.2}\] **Remark 2**.: Notice that Corollary 1 gives the joint weak convergence result for the partial sum processes based on a local-to-unity parametrization. Under the maintained hypothesis that the shift exists, that is, \(\delta\neq 0\), we will need to derive corresponding moment functionals. **Motivation**.: Katsouris (2023a,d) established the asymptotic behaviour of structural break tests in predictive regression models for the sup-Wald OLS and sup-Wald IVX statistics. Specifically: 1. the limits for both the sup-Wald OLS and sup-Wald IVX statistics under MI predictors for a predictive regression model with an intercept that shifts converge to the standard NBB; 2. the limits for both statistics under LUR predictors for a predictive regression model with an intercept that shifts (after demeaning) converge to non-standard distributions given by Theorem 1 and Theorem 2 in Katsouris (2023a), respectively. Corresponding results are established for (a) the case in which the model includes no intercept (i.e., testing only the stability of the slopes) and (b) the case in which the model includes a stable intercept. We also proved that, under the assumption of a known break-point, the limiting distributions converge to a nuisance-parameter-free \(\chi^{2}\) distribution regardless of the persistence properties. However, the studies of Katsouris (2023a,d) did not consider the asymptotic theory of break-point date estimation, which we aim to present in this article. ### Consistency and Limiting Distributions of model coefficients **Example 3**.: Consider the following AR(1) model without a model intercept, with a possible shift in the AR parameter at an unknown break-point location in the full sample \(k_{0}=\lfloor\tau_{0}n\rfloor\), \[y_{t}=\beta_{1}y_{t-1}\mathbf{1}\left\{t\leq k_{0}\right\}+\beta_{2}y_{t-1}\mathbf{1}\left\{t>k_{0}\right\}+\epsilon_{t},\ \ t=1,2,...,n, \tag{2.3}\] where \(\mathbf{1}\left\{.\right\}\) denotes the indicator function, and \(\left\{\epsilon_{t},t\geq 1\right\}\) is a sequence of _i.i.d_ random variables.
For a given \(\tau\), the ordinary least squares estimators of the parameters \(\beta_{1}\) and \(\beta_{2}\) are \[\hat{\beta}_{1}(\tau)=\frac{\sum_{t=1}^{[n\tau]}y_{t}y_{t-1}}{\sum_{t=1}^{[n\tau]}y_{t-1}^{2}}\ \ \text{and}\ \ \hat{\beta}_{2}(\tau)=\frac{\sum_{t=[n\tau]+1}^{n}y_{t}y_{t-1}}{\sum_{t=[n\tau]+1}^{n}y_{t-1}^{2}} \tag{2.4}\] respectively. Then the change-point estimator satisfies \(\mathbf{\hat{\tau}}_{n}=\underset{\tau\in(0,1)}{\arg\min}\ \text{RSS}_{n}(\tau)\), where \[\text{RSS}_{n}(\tau)=\sum_{t=1}^{[n\tau]}\left(y_{t}-\hat{\beta}_{1}(\tau)y_{t-1}\right)^{2}+\sum_{t=[n\tau]+1}^{n}\left(y_{t}-\hat{\beta}_{2}(\tau)y_{t-1}\right)^{2}. \tag{2.5}\] This expression gives the estimation procedure of the model parameters under the presence of a possible structural break in the full sample (see, Chong (2001) and Pang et al. (2021)). The estimation method corresponds to the least squares method proposed by Bai (1993) and Bai (1994). Therefore, we are interested in estimating the structural parameters \(\beta_{1}\) and \(\beta_{2}\) and the time (or location) of change \(\uptau_{0}\). Furthermore, we estimate the following model: \[\hat{y}_{t}=\hat{\beta}_{1}y_{t-1}\mathbf{1}\left\{t\leq[n\hat{\uptau}]\right\}+\hat{\beta}_{2}y_{t-1}\mathbf{1}\left\{t>[n\hat{\uptau}]\right\},\quad t\in\left\{1,...,n\right\}. \tag{2.6}\] We could also denote with \(\mathcal{S}_{n}=\mathcal{S}_{1n}(\beta_{1},\uptau)+\mathcal{S}_{2n}(\beta_{2},\uptau)\), where \(\mathcal{S}_{n}\equiv\text{RSS}_{n}(\uptau)\), such that \[\mathcal{S}_{1n}(\beta_{1},\uptau) =\sum_{t=1}^{[n\uptau]}\biggl{(}y_{t}-\hat{\beta}_{1}(\uptau)y_{t-1}\biggr{)}^{2} \tag{2.7}\] \[\mathcal{S}_{2n}(\beta_{2},\uptau) =\sum_{t=[n\uptau]+1}^{n}\biggl{(}y_{t}-\hat{\beta}_{2}(\uptau)y_{t-1}\biggr{)}^{2}. \tag{2.8}\] Therefore, in practice the criterion function employed to obtain the break-point date estimator requires an iterative procedure. In other words, for each \(\tau\in\Pi\), we obtain the regression parameter estimators \(\text{pre}-\lfloor n\tau\rfloor\) and \(\text{post}-\lfloor n\tau\rfloor\) such that \[\hat{\beta}_{jn}(\tau)=\underset{\beta\in\Theta}{\text{arg min}}\ \mathcal{S}_{jn}\left(\beta,\tau\right)\] for \(j\in\left\{1,2\right\}\) respectively. Furthermore, in practice the shift point will be estimated as the sample partition that minimizes the objective function concentrated in \(\tau\) such that \[\hat{\uptau}_{n}=\underset{\uptau\in\Pi}{\text{arg min}}\Bigl{\{}\mathcal{S}_{1n}\left(\hat{\beta}_{1n}(\uptau),\uptau\right)+\mathcal{S}_{2n}\left(\hat{\beta}_{2n}(\uptau),\uptau\right)\Bigr{\}}. \tag{2.9}\] Moreover, we have that \(\hat{\beta}_{n}:=\left(\hat{\beta}_{1n},\hat{\beta}_{2n},\hat{\uptau}_{n}\right)^{\prime}\), where \(\hat{\beta}_{jn}=\hat{\beta}_{jn}\left(\hat{\uptau}_{n}\right)\) are the coefficient estimators for \(j\in\left\{1,2\right\}\). Furthermore, the size of the jump will be estimated and denoted with \(\hat{\delta}_{n}=\left(\hat{\beta}_{1n}-\hat{\beta}_{2n}\right)\). 
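The least-squares search described by (2.4)-(2.5) and (2.9) can be sketched as a simple grid search over candidate break dates; the trimming of the search grid, the function name and the parameter values in the usage example below are illustrative assumptions rather than part of the formal estimation theory.

```python
import numpy as np

def estimate_break_ar1(y, trim=0.15):
    """Grid search over tau minimising RSS_n(tau) as in (2.5); y[0] is the initial condition y_0."""
    n = len(y) - 1
    grid = range(int(np.floor(trim * n)), int(np.floor((1 - trim) * n)) + 1)
    best_rss, best = np.inf, None
    for k in grid:                            # candidate break date k = floor(n * tau)
        y_pre, x_pre = y[1:k + 1], y[:k]      # observations t = 1, ..., k
        y_post, x_post = y[k + 1:], y[k:-1]   # observations t = k+1, ..., n
        b1 = (x_pre @ y_pre) / (x_pre @ x_pre)
        b2 = (x_post @ y_post) / (x_post @ x_post)
        rss = np.sum((y_pre - b1 * x_pre) ** 2) + np.sum((y_post - b2 * x_post) ** 2)
        if rss < best_rss:
            best_rss, best = rss, (k / n, b1, b2)
    return best                               # (tau_hat, beta1_hat, beta2_hat)

# Hypothetical usage: a path with beta1 = 0.4, beta2 = 0.9 and a break at mid-sample
rng = np.random.default_rng(0)
n, k0 = 500, 250
y = np.zeros(n + 1)
for t in range(1, n + 1):
    y[t] = (0.4 if t <= k0 else 0.9) * y[t - 1] + rng.standard_normal()
tau_hat, b1_hat, b2_hat = estimate_break_ar1(y)
```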
Notice that \(\hat{\beta}_{1}\) has a scaled Dickey-Fuller distribution. The term \(\hat{\beta}_{2}\) can be described as being asymptotically normally distributed with a random variance, which occurs due to the asymptotic limit given by the following expression: \[\sqrt{n}\left(\hat{\beta}_{2}-\beta_{2}\right)=\frac{\frac{1}{\sqrt{n}}\sum_{t=\lfloor\uptau_{0}n\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\frac{1}{n}\sum_{t=\lfloor\uptau_{0}n\rfloor+1}^{n}y_{t-1}^{2}}+o_{p}(1). \tag{2.10}\] The numerator follows a CLT and the denominator converges to a random variable. Moreover, the maintained hypothesis is that the shift exists, which implies that \(\delta\neq 0\). Consider for example the set of these indicator functions such that \(\mathbf{1}\left\{t\leq\uptau n\right\}\) and \(\mathbf{1}\left\{t\leq\hat{\uptau}n\right\}\) where \(\hat{\uptau}\) is an estimator of the unknown break fraction \(\uptau\). Then, showing that for the OLS estimator of the predictive regression model it holds that \(n\left(\hat{\uptau}-\uptau\right)=\mathcal{O}_{p}(1)\) implies also that \(\lfloor\hat{\uptau}n\rfloor=\lfloor(\hat{\uptau}-\uptau)n+\uptau n\rfloor=\lfloor O_{p}(1)+\uptau n\rfloor=\lfloor\uptau n\rfloor+\mathcal{O}_{p}(1)\). **Theorem 1** (Chong (2001), Pang et al. (2021)).: In model (2.3), if \(|\beta_{1}|<1\), \(\beta_{2}=\beta_{2n}=1-c/n\), where \(c\) is a fixed constant, and assumptions C1-C3 are satisfied, then the estimators \(\hat{\uptau}_{n}\), \(\hat{\beta}_{1}(\hat{\uptau}_{n})\) and \(\hat{\beta}_{2}(\hat{\uptau}_{n})\) are all consistent, and \[\begin{cases}|\hat{\uptau}_{n}-\uptau_{0}|=\mathcal{O}_{p}(1/n),\\ \\ \sqrt{n}\left(\hat{\beta}_{1}(\hat{\uptau}_{n})-\beta_{1}\right)\Rightarrow\mathcal{N}\left(0,(1-\beta_{1}^{2})/\uptau_{0}\right),\\ \\ n\left(\hat{\beta}_{2}(\hat{\uptau}_{n})-\beta_{2}\right)\Rightarrow\dfrac{\frac{1}{2}F^{2}(W,c,\uptau_{0},1)+c\displaystyle\int_{\uptau_{0}}^{1}e^{2c(1-t)}F^{2}(W,c,\uptau_{0},t)dt-\frac{1}{2}(1-\uptau_{0})}{\displaystyle\int_{\uptau_{0}}^{1}e^{2c(1-t)}F^{2}(W,c,\uptau_{0},t)dt}.\end{cases}\] **Remark 3**.: When \(\underline{\uptau}_{0}\leq\uptau\leq\bar{\uptau}_{0}\), the estimator \(\hat{\beta}_{1}(\uptau)\) converges uniformly to \(\beta_{1}\). The particular result is not surprising because the estimator \(\hat{\beta}_{1}(\uptau)\) is obtained based on the data generating process that corresponds to \(y_{t}=\beta_{1}y_{t-1}+u_{t}\). Moreover, the estimator \(\hat{\beta}_{2}(\uptau)\) converges uniformly to a weighted average of \(\beta_{1}\) and \(\beta_{2}\). The weight depends on the true change point, the true pre-shift and post-shift parameters as well as the location of \(\uptau\). Furthermore, if both \(\beta_{1}\) and \(\beta_{2}\) are within the unit boundary then the process is said to be stationary. Thus, it is not difficult to show that all the OLS estimators are consistent in this case. For instance, Bai (1993, 1994) shows that in the conventional stationary case, the change-point estimator is \(n-\)consistent. This convergence rate is fast enough to make the limiting distributions of \(\hat{\beta}_{1}\) and \(\hat{\beta}_{2}\) behave as if the true change point \(\uptau_{0}\) is known. Theorem 2 below establishes the asymptotic normality of \(\hat{\beta}_{1}\) and \(\hat{\beta}_{2}\). A relevant framework which considers structural change type estimation and inference for a first-order autoregressive model is proposed by Kurozumi (2023). 
The particular framework, however, corresponds to a fluctuation type monitoring test5 for detecting the presence of explosive behaviour in time series data (see, Arvanitis and Magdalinos (2018) and Skrobotov (2023)). On the other hand, in this article we consider the statistical properties of break-point estimators in predictive regression models within the full sample, so these two aspects are considered to be the main contributions of our study. Furthermore, our work can be useful in relevant applications from the financial economics literature, since it provides the exact distributional properties of the break-point estimators when the econometrician employs a predictive regression model either for detecting slope instabilities or for testing for predictability robust against parameter instability. Footnote 5: Notice that the monitoring testing approach (see, Chu et al. (1996), Leisch et al. (2000), Aue et al. (2009) and Horváth et al. (2020) among others) corresponds to a different implementation and estimation procedure of structural change in time series regressions. One of the main differences is the use of a historical and a monitoring period during which model estimates and residuals are constructed with the purpose of detecting structural breaks. We leave these considerations as future research. Another aspect worth emphasizing is that in this article the sup-Wald type statistics correspond to an iterative estimation step in order to construct a sequence of test statistics based on fitting the regression model within the full sample and comparing the model estimates across two subsamples that correspond to the pre-break and post-break parts. **Theorem 2** (Chong (2001), Pang et al. (2021)).: Under regularity conditions, if \(|\beta_{1}|<1\) and \(|\beta_{2}|<1\), the OLS estimators \(\hat{\uptau}_{n}\), \(\hat{\beta}_{1}(\hat{\uptau}_{n})\) and \(\hat{\beta}_{2}(\hat{\uptau}_{n})\) are consistent estimators and it holds that \[|\hat{\uptau}_{n}-\uptau_{0}| =\mathcal{O}_{p}\left(\frac{1}{n}\right), \tag{2.11}\] \[\sqrt{n}\left(\hat{\beta}_{1}(\hat{\uptau}_{n})-\beta_{1}\right) \rightarrow^{d}\mathcal{N}\left(0,\frac{1-\beta_{1}^{2}}{\uptau_{0}}\right) \tag{2.12}\] \[\sqrt{n}\left(\hat{\beta}_{2}(\hat{\uptau}_{n})-\beta_{2}\right) \rightarrow^{d}\mathcal{N}\left(0,\frac{1-\beta_{2}^{2}}{1-\uptau_{0}}\right) \tag{2.13}\] Therefore, we can see from the limit results given in Theorem 2 that \(\hat{\beta}_{1}(\hat{\uptau}_{n})\) and \(\hat{\beta}_{2}(\hat{\uptau}_{n})\) are both asymptotically normally distributed with variance depending on \(\beta_{1}\), \(\beta_{2}\) and \(\uptau_{0}\). ### The Asymptotic Criterion Function Notice that in the model the change point \(\uptau_{0}\) is unknown and has to be estimated by \(\hat{\uptau}_{n}\). Thus, to show the consistency of \(\hat{\uptau}_{n}\), the common practice in the structural break literature is to show that \((1/n)RSS_{n}(\uptau)\) converges uniformly to a nonstochastic function that has a unique minimum at \(\uptau=\uptau_{0}\). Therefore, we focus on the asymptotic behaviour of the quantity \((1/n)RSS_{n}(\uptau)\). The following Lemma is useful in deriving the limiting behaviour of the criterion function \((1/n)RSS_{n}(\uptau)\) and in proving Theorem 1, which follows. **Lemma 1** (Pang et al. (2021)).: Let \(\left\{y_{t}\right\}_{t=1}^{n}\) be generated according to model (2.3), with \(|\beta_{1}|<1\) and \(|\beta_{2}|<1\). 
We have the following asymptotic results \[\underset{0\leq\uptau_{1}\leq\uptau_{2}\leq 1}{\sup}\ \frac{1}{n}\left|\sum_{t=\lfloor n\uptau_{1}\rfloor+1}^{\lfloor n\uptau_{2}\rfloor}y_{t-1}\epsilon_{t}\right|=o_{p}(1), \tag{2.14}\] \[\frac{1}{n}\sum_{t=1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}^{2}\overset{p}{\rightarrow}\frac{\uptau_{0}\sigma^{2}}{1-\beta_{1}^{2}}, \tag{2.15}\] \[\frac{1}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}\overset{p}{\rightarrow}\frac{(1-\uptau_{0})\sigma^{2}}{1-\beta_{2}^{2}}, \tag{2.16}\] \[\underset{0\leq\uptau_{1}\leq\uptau\leq\uptau_{0}}{\sup}\ \left|\frac{1}{n}\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}^{2}-\frac{(\uptau_{0}-\uptau)\sigma^{2}}{1-\beta_{1}^{2}}\right|=o_{p}(1), \tag{2.17}\] \[\underset{0\leq\uptau_{1}\leq\uptau\leq\uptau_{0}}{\sup}\ \left|\hat{\beta}_{1}(\uptau)-\beta_{1}\right|=o_{p}(1), \tag{2.18}\] \[\underset{0\leq\uptau_{1}\leq\uptau\leq\uptau_{0}}{\sup}\ \left|\hat{\beta}_{2}(\uptau)-\frac{(\uptau_{0}-\uptau)(1-\beta_{2}^{2})\beta_{1}+(1-\uptau_{0})(1-\beta_{1}^{2})\beta_{2}}{(\uptau_{0}-\uptau)(1-\beta_{2}^{2})+(1-\uptau_{0})(1-\beta_{1}^{2})}\right|=o_{p}(1). \tag{2.19}\] The criterion function \(\left(1/n\right)RSS_{n}(\uptau)\) behaves very differently when \(\beta_{2}=1\). Then, under Assumptions (A1)-(A3) we have that \[\left|\frac{1}{n}\sum_{t=1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}\epsilon_{t}\right|=o_{p}(1), \tag{2.20}\] \[\frac{1}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}\epsilon_{t}\Rightarrow\frac{(1-\uptau_{0})\sigma^{2}}{2}\left(B^{2}(1)-1\right)=\mathcal{O}_{p}(1). \tag{2.21}\] Following the econometric framework of Chong (2001) and Pang et al. (2021), further limit results which will need to be established, in the case one replaces the assumption of a stationary autoregressive coefficient with the local-to-unity parametrization, include: \[\sup_{\uptau_{1}\leq\uptau\leq\uptau_{0}}\left|\hat{\beta}_{1}(\uptau)-\beta_{1}\right|=o_{p}(1),\quad\sup_{\uptau_{1}\leq\uptau\leq\uptau_{0}}\left|\hat{\beta}_{2}(\uptau)-1\right|=\mathcal{O}_{p}\left(\frac{1}{n}\right),\quad\sup_{\uptau_{1}\leq\uptau\leq\uptau_{0}}\left|\hat{\beta}_{2}(\uptau)-\beta_{1}\right|=\mathcal{O}_{p}(1),\] \[\sup_{\uptau_{1}\leq\uptau\leq\uptau_{0}}\frac{1}{n}\left|\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}\epsilon_{t}\right|=\mathcal{O}_{p}(1),\quad\sup_{\uptau_{1}\leq\uptau\leq\uptau_{0}}\frac{1}{n}\left|\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}\epsilon_{t}\right|=\mathcal{O}_{p}(1),\] \[\left|\hat{\beta}_{1}(\uptau)-1\right|=\mathcal{O}_{p}\left(\frac{1}{\sqrt{n}}\right),\quad\left|\hat{\beta}_{2}(\uptau)-1\right|=\mathcal{O}_{p}\left(\frac{1}{n}\right).\] Notice that there is an asymptotic gap between \((1/n)RSS_{n}(\uptau_{0})\) and \((1/n)RSS_{n}(\uptau)\). Thus, to examine the consistency of \(\hat{\beta}_{1}\), we have to investigate the transitional behaviour of \((1/n)RSS_{n}(\uptau)\). 
Note that for any constant \(c>0\), \[\hat{\beta}_{1}\left(\uptau_{0}+cn^{\alpha-1}\right)=\theta_{n}\left(\alpha,c\right)\left(\beta_{1}+\frac{\sum_{t=1}^{k_{0}}y_{t-1}\epsilon_{t}}{\sum_{t=1}^{k_{0}}y_{t-1}^{2}}\right)+\left(1-\theta_{n}\left(\alpha,c\right)\right)\left(1+\frac{\sum_{t=k_{0}+1}^{k_{0}+\lfloor cn^{\alpha}\rfloor}y_{t-1}\epsilon_{t}}{\sum_{t=k_{0}+1}^{k_{0}+\lfloor cn^{\alpha}\rfloor}y_{t-1}^{2}}\right), \tag{2.22}\] where \[\theta_{n}\left(\alpha,c\right)=\left(\frac{\sum_{t=1}^{k_{0}}y_{t-1}^{2}}{\sum_{t=1}^{k_{0}}y_{t-1}^{2}+\sum_{t=k_{0}+1}^{k_{0}+\lfloor cn^{\alpha}\rfloor}y_{t-1}^{2}}\right). \tag{2.23}\] When \(\alpha<1/2\), it holds that \(\theta_{n}\left(\alpha,c\right)\overset{p}{\rightarrow}1,\quad\hat{\beta}_{1}\left(\uptau_{0}+cn^{\alpha-1}\right)\overset{p}{\rightarrow}\beta_{1}\), and \(\frac{1}{n}RSS_{n}\left(\uptau_{0}+cn^{\alpha-1}\right)\overset{p}{\rightarrow}\sigma^{2}\). Moreover, for \(\frac{1}{2}<\alpha<1\) we have that \[\theta_{n}\left(\alpha,c\right)\overset{p}{\rightarrow}0,\quad\hat{\beta}_{1}\left(\uptau_{0}+cn^{\alpha-1}\right)\overset{p}{\rightarrow}1, \tag{2.24}\] \[\frac{1}{n}RSS_{n}\left(\uptau_{0}+cn^{\alpha-1}\right)\overset{p}{\rightarrow}\sigma^{2}+\frac{(1-\beta_{1})\uptau_{0}\sigma^{2}}{1+\beta_{1}}. \tag{2.25}\] The above results simply imply that if the convergence rate of \(\hat{\uptau}_{n}\) is faster than \(n^{1/2}\) then \(\hat{\beta}_{1}\) will be consistent, otherwise it will be inconsistent. Next, we derive the asymptotic behaviour of the criterion function \((1/n)RSS_{n}(\uptau)\). Another useful quantity is as below: \[\hat{\beta}_{1}\left(\uptau_{0}+\frac{c}{\sqrt{n}}\right)=\theta_{n}\left(\frac{1}{2},c\right)\left(\beta_{1}+\frac{\sum\limits_{t=1}^{k_{0}}y_{t-1}\epsilon_{t}}{\sum\limits_{t=1}^{k_{0}}y_{t-1}^{2}}\right)+\left[1-\theta_{n}\left(\frac{1}{2},c\right)\right]\left(1+\frac{\sum\limits_{t=k_{0}+1}^{\left\lfloor k_{0}+c\sqrt{n}\right\rfloor}y_{t-1}\epsilon_{t}}{\sum\limits_{t=k_{0}+1}^{\left\lfloor k_{0}+c\sqrt{n}\right\rfloor}y_{t-1}^{2}}\right), \tag{2.26}\] \[\theta_{n}\left(\frac{1}{2},c\right)=\left(\frac{\sum\limits_{t=1}^{k_{0}}y_{t-1}^{2}}{\sum\limits_{t=1}^{k_{0}}y_{t-1}^{2}+\sum\limits_{t=k_{0}+1}^{\left\lfloor k_{0}+c\sqrt{n}\right\rfloor}y_{t-1}^{2}}\right).\] Clearly, the challenging task here is that when we impose the assumption that the autoregressive coefficient of the first-order autoregressive model is expressed via the local-to-unity parametrization, we expect that the break-point estimators will depend on the nuisance parameter of persistence. In other words, the fact that the asymptotic distribution of these break-point estimators will not be nuisance parameter-free can be challenging when critical values are needed (e.g., case of the monitoring scheme). We leave these considerations for future research. 
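The transitional behaviour in (2.22)-(2.25) can be illustrated with a small Monte Carlo sketch, assuming a stationary pre-break regime and a unit root after the break; the sample size, the constant \(c\), the exponents \(\alpha\) and the number of replications are hypothetical choices made only for illustration.

```python
import numpy as np

def beta1_hat_at(y, k):
    """OLS slope computed from observations t = 1, ..., k (y[0] is the initial value)."""
    x, z = y[:k], y[1:k + 1]
    return (x @ z) / (x @ x)

def transition_experiment(n=2000, tau0=0.5, beta1=0.5, c=2.0, alphas=(0.3, 0.8), reps=200, seed=0):
    """Average beta1_hat(tau0 + c*n**(alpha - 1)) when beta2 = 1 after the break."""
    rng = np.random.default_rng(seed)
    k0 = int(tau0 * n)
    out = {a: 0.0 for a in alphas}
    for _ in range(reps):
        y = np.zeros(n + 1)
        for t in range(1, n + 1):
            b = beta1 if t <= k0 else 1.0          # unit root after the break
            y[t] = b * y[t - 1] + rng.standard_normal()
        for a in alphas:
            k = min(k0 + int(c * n ** a), n)        # perturbed break date k0 + floor(c*n**alpha)
            out[a] += beta1_hat_at(y, k) / reps
    return out

print(transition_experiment())
```

For \(\alpha<1/2\) the averaged estimate stays close to \(\beta_{1}\), while for \(\alpha>1/2\) it drifts towards unity, in line with the limits above.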
Following the existing literature to find the limiting distribution of \(\hat{\beta}_{1}(\hat{\uptau}_{n})\), in the case of the stationary AR(1) model, notice that \((\hat{\uptau}_{n}-\uptau_{0})=\mathcal{O}_{p}\left(n^{-1}\right)\) and it holds that \[\sqrt{n}\left(\hat{\beta}_{1}(\hat{\uptau}_{n})-\hat{\beta}_{1}(\uptau_{0}) \right)=\sqrt{n}\left(\frac{\sum\limits_{t=1}^{\left\lfloor n\uptau_{n} \right\rfloor}y_{t}y_{t-1}}{\sum\limits_{t=1}^{\left\lfloor n\uptau_{n} \right\rfloor}y_{t-1}^{2}}-\frac{\sum\limits_{t=1}^{\left\lfloor n\uptau_{0} \right\rfloor}y_{t}y_{t-1}}{\sum\limits_{t=1}^{\left\lfloor n\uptau_{0} \right\rfloor}y_{t-1}^{2}}\right) \tag{2.27}\] From the above derivations we can see that \(\hat{\beta}_{1}(\hat{\uptau}_{n})\) and \(\hat{\beta}_{1}(\uptau_{0})\) have the same asymptotic distribution, since \(\{y_{t-1}\epsilon_{t},\mathcal{F}_{t}\}_{t=1}^{\left\lfloor n\uptau_{0}\right\rfloor}\) is a martingale difference sequence, with \(\mathbb{E}\left[y_{t-1}\epsilon_{t}|\mathcal{F}_{t-1}\right]=0\) and \(\sum_{t=1}^{\left\lfloor n\uptau_{0}\right\rfloor}\mathbb{E}\left[\left(y_{t- 1}\epsilon_{t}\right)^{2}|\mathcal{F}_{t-1}\right]\overset{p}{\rightarrow}\frac {\sigma^{4}}{1-\beta_{1}^{2}}<\infty\). Applying the central limit theorem for martingale difference sequences and the fact that \[\frac{1}{n}\sum\limits_{t=1}^{\left\lfloor n\uptau_{0}\right\rfloor}y_{t-1}^{ 2}\overset{p}{\rightarrow}\frac{\uptau_{0}\sigma^{2}}{1-\beta_{1}^{2}}, \tag{2.28}\] \[\sqrt{n}\left(\hat{\beta}_{1}(\hat{\uptau}_{n})-\beta_{1}\right)\overset{d}{= }\sqrt{n}\left(\hat{\beta}_{1}(\hat{\uptau}_{0})-\beta_{1}\right)=\frac{\frac{ 1}{\sqrt{n}}\sum\limits_{t=1}^{\left\lfloor n\uptau_{0}\right\rfloor}y_{t-1} \epsilon_{t}}{\frac{1}{n}\sum\limits_{t=1}^{\left\lfloor n\uptau_{0} \right\rfloor}y_{t-1}^{2}}\overset{d}{\rightarrow}\mathcal{N}\left(0,\frac{1 -\beta_{1}^{2}}{\uptau_{0}}\right). 
\tag{2.29}\] Next, to find the limiting distribution of \(\hat{\beta}_{2}(\hat{\uptau}_{n})\), notice that \((\hat{\uptau}_{n}-\uptau_{0})=\mathcal{O}_{p}(\frac{1}{n})\) and \[n\left(\hat{\beta}_{2}(\hat{\uptau}_{n})-\hat{\beta}_{2}(\uptau_{0})\right)=n\left(\frac{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{n}y_{t}y_{t-1}}{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{n}y_{t-1}^{2}}-\frac{\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t}y_{t-1}}{\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}}\right)\] \[=\mathbf{1}\left\{\hat{\uptau}_{n}\leq\uptau_{0}\right\}n\left(-\frac{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}^{2}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{n}y_{t-1}^{2}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}}+\frac{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}\epsilon_{t}}{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{n}y_{t-1}^{2}}\right)+\mathbf{1}\left\{\hat{\uptau}_{n}\leq\uptau_{0}\right\}n\left(\beta_{1}-\beta_{2}\right)\frac{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}^{2}}{\sum_{t=\lfloor n\hat{\uptau}_{n}\rfloor+1}^{n}y_{t-1}^{2}}.\] It can be proved that both \(\hat{\beta}_{2}(\hat{\uptau}_{n})\) and \(\hat{\beta}_{2}(\uptau_{0})\) have the same asymptotic distribution, which implies \[n\left(\hat{\beta}_{2}(\hat{\uptau}_{n})-1\right)\overset{d}{=}n\left(\hat{\beta}_{2}(\uptau_{0})-\beta_{2}\right)=\frac{\frac{1}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\frac{1}{n^{2}}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}}\Rightarrow\frac{B^{2}(1)-1}{2(1-\uptau_{0})\int_{0}^{1}B^{2}(s)ds}. \tag{2.30}\] By the central limit theorem for martingale difference sequences and by independence of the two martingale differences given previously, we have that \[\frac{1}{\sqrt{n}}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}\epsilon_{t}\overset{d}{\rightarrow}\mathcal{N}\left(0,\frac{\sigma^{4}}{1-\beta_{2}^{2}}\right).\] Moreover, for \(\uptau>\uptau_{0}\) we can write \[\frac{1}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}y_{t-1}^{2}=\left(\frac{y_{k_{0}}}{\sqrt{n}}\right)^{2}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}\beta_{2}^{2(t-k_{0}-1)}+2\frac{y_{k_{0}}}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}\left(\beta_{2}^{t-k_{0}-1}\sum_{i=k_{0}+1}^{t-1}\beta_{2}^{t-i-1}\epsilon_{i}\right)+\frac{1}{n}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}\left(\sum_{i=k_{0}+1}^{t-1}\beta_{2}^{t-i-1}\epsilon_{i}\right)^{2}.\] The first term above weakly converges to \(\left[\sigma^{2}B^{2}(\uptau_{0})\right]/\left(1-\beta_{2}^{2}\right)\). 
Furthermore, because \(\left|y_{k_{0}}/\sqrt{n}\right|=\mathcal{O}_{p}(1)\) and \((1/\sqrt{n})\sup_{t>k_{0}}\left|\epsilon_{t}\right|=o_{p}(1)\), then we can show that the second term is bounded by \[\leq 2\left|\frac{y_{k_{0}}}{\sqrt{n}}\right|\sup_{\tau\in\Pi}\left| \sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}\beta_{2}^{t-k _{0}-1}\frac{1-\beta_{2}^{t-k_{0}-1}}{1-\beta_{2}}\right|\frac{1}{\sqrt{n}} \underset{t>k_{0}}{\sup}\left|\epsilon_{t}\right|\] \[\leq 2\left|\frac{y_{k_{0}}}{\sqrt{n}}\right|\left(\sum_{t=k_{0}+1}^ {\infty}\left|\frac{\beta_{2}^{t-k_{0}-1}}{1-\beta_{2}}\right|+\sum_{t=k_{0}+1 }^{\infty}\left|\frac{\beta_{2}^{2(t-k_{0}-1)}}{1-\beta_{2}}\right|\right) \frac{1}{\sqrt{n}}\underset{t>k_{0}}{\sup}\left|\epsilon_{t}\right|\] \[\leq 2\left|\frac{y_{k_{0}}}{\sqrt{n}}\right|\left(\frac{1}{(1- \left|\beta_{2}\right|)^{2}}+\frac{1}{(1-\left|\beta_{2}\right|)(1-\beta_{2}^{ 2})}\right)\frac{1}{\sqrt{n}}\underset{t>k_{0}}{\sup}\left|\epsilon_{t}\right|=o _{P}(1).\] ## 3 Structural Change in the Predictive Regression Model Although in this paper we assume that the innovation sequence of the model has a linear process representation, other studies in the literature imposes a NED condition when developing structural break tests in stationary time series regression models (see, Ling (2007), Kim and Shin (2020) and Lee (2014)). Notice that the IVX estimator has the property that decorrelates the system and therefore conventional invariance principles and weak convergence results hold without requiring to consider a topological convergence in a different space. Furthermore, another important feature of the predictive regression model is that one can incorporate serial correlation in the error term and therefore using the IVX instrumentation mixed Gaussianity distributional converges still holds. **Example 4**.: Consider again the predictive regression model given below: \[y_{t}=(\alpha_{1}+\mathbf{\beta}_{1}x_{t-1})\mathbf{1}\left\{t\leq k\right\}+( \alpha_{2}+\mathbf{\beta}_{2}x_{t-1})\mathbf{1}\left\{t>k\right\}+u_{t} \tag{3.1}\] where the autoregressive coefficient has an autoregressive structure based on the local-to-unity parametrization given by the expression below: \[\mathbf{x}_{t}=\left(\mathbf{I}-\frac{\mathbf{C}_{p}}{n}\right)\mathbf{x}_{t-1}+\mathbf{v}_{t} \tag{3.2}\] At this point, we begin our asymptotic theory analysis by investigating the corresponding expressions of the criterion function when the functional form of the linear predictive regression model with a conditional mean function is employed. 
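Before writing out the criterion function, a minimal simulation sketch of the DGP in (3.1)-(3.2) may be helpful; a scalar predictor, Gaussian innovations and the particular parameter values (localising coefficient, slopes, correlation between the innovation sequences) are hypothetical choices used only for illustration.

```python
import numpy as np

def simulate_predictive_regression(n=500, tau0=0.5, c=5.0, alpha=0.1,
                                   beta=(0.05, 0.5), rho_uv=-0.5, seed=3):
    """Simulate y_t = alpha + beta_j * x_{t-1} + u_t with a single slope break at
    k0 = floor(tau0 * n), where x_t = (1 - c/n) * x_{t-1} + v_t is local-to-unity and
    corr(u_t, v_t) = rho_uv captures endogeneity; the intercept is held fixed across
    regimes, mirroring the treatment of the intercept discussed in the text."""
    rng = np.random.default_rng(seed)
    k0 = int(np.floor(tau0 * n))
    rho_n = 1.0 - c / n
    innov = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_uv], [rho_uv, 1.0]], size=n)
    x = np.zeros(n + 1)              # x[0] = 0 is the initial condition
    y = np.zeros(n)
    for t in range(1, n + 1):
        x[t] = rho_n * x[t - 1] + innov[t - 1, 1]
        slope = beta[0] if t <= k0 else beta[1]
        y[t - 1] = alpha + slope * x[t - 1] + innov[t - 1, 0]
    return y, x, k0

y, x, k0 = simulate_predictive_regression()
```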
We write the residual sum of squares as below: \[\mathcal{S}_{n}(\uptau)=\sum_{t=1}^{\lfloor n\uptau\rfloor}\left(u_{t}-\left(\hat{\beta}_{1}(\uptau)-\beta_{1}\right)x_{t-1}\right)^{2}+\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}\left(u_{t}-\left(\hat{\beta}_{2}(\uptau)-\beta_{1}\right)x_{t-1}\right)^{2}+\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}\left(u_{t}-\left(\hat{\beta}_{2}(\uptau)-\beta_{2}\right)x_{t-1}\right)^{2} \tag{3.3}\] which, after expanding out, can be written as below: \[\mathcal{S}_{n}(\uptau)=\sum_{t=1}^{n}u_{t}^{2}-2\left(\hat{\beta}_{1}(\uptau)-\beta_{1}\right)\sum_{t=1}^{\lfloor n\uptau\rfloor}x_{t-1}u_{t}+\left(\hat{\beta}_{1}(\uptau)-\beta_{1}\right)^{2}\sum_{t=1}^{\lfloor n\uptau\rfloor}x_{t-1}^{2}\] \[-2\left(\hat{\beta}_{2}(\uptau)-\beta_{1}\right)\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}x_{t-1}u_{t}+\left(\hat{\beta}_{2}(\uptau)-\beta_{1}\right)^{2}\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}x_{t-1}^{2}\] \[-2\left(\hat{\beta}_{2}(\uptau)-\beta_{2}\right)\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}x_{t-1}u_{t}+\left(\hat{\beta}_{2}(\uptau)-\beta_{2}\right)^{2}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}x_{t-1}^{2}. \tag{3.4}\] Furthermore, we need to determine the validity of the following expansion in the case of the predictive regression model: \[RSS_{n}(\uptau)=\sum_{t=1}^{n}\epsilon_{t}^{2}-\frac{\left(\sum_{t=1}^{\lfloor n\uptau\rfloor}y_{t-1}\epsilon_{t}\right)^{2}}{\sum_{t=1}^{\lfloor n\uptau\rfloor}y_{t-1}^{2}}-2\left(\hat{\beta}_{2}(\uptau)-\beta_{1}\right)\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}\epsilon_{t}\] \[+\left(\hat{\beta}_{2}(\uptau)-\beta_{1}\right)^{2}\sum_{t=\lfloor n\uptau\rfloor+1}^{\lfloor n\uptau_{0}\rfloor}y_{t-1}^{2}-2\left(\hat{\beta}_{2}(\uptau)-\beta_{2}\right)\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}\epsilon_{t} \tag{3.5}\] \[+\left(\hat{\beta}_{2}(\uptau)-\beta_{2}\right)^{2}\sum_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}.\] Furthermore, we consider the asymptotic behaviour of the break-point estimators when testing the stability of the model parameters of the predictive regression model based on the IVX estimator. In particular, within our framework we implement the IVX filter proposed by Phillips and Magdalinos (2009), which implies the use of a mildly integrated instrumental variable that has the following form \[Z_{tn}=\sum_{j=0}^{t-1}\left(1-\frac{c_{z}}{n^{\delta}}\right)^{j}\left(x_{t-j}-x_{t-j-1}\right), \tag{3.6}\] for some \(c_{z}>0\) and \(0<\delta<1\), where \(\delta\) is the exponent rate of the persistence coefficient which corresponds to the instrumental variable. Notice that the above filtering method transforms the autoregressive process \(x_{t}\), which can be either stable or unstable, into a mildly integrated process which is less persistent than a nearly integrated array such as \(x_{t}\). **Remark 4**.: Under the alternative hypothesis, we assume the possible presence of a single structural break in the slope coefficients of the predictive regression model, which implies two regimes of slope coefficients. An important aspect to emphasize in our framework is that we treat the model intercept differently with respect to the other parameters when testing for structural breaks. 
The reason is that keeping the model intercept unchanged provides an initial condition to the predictive regression model for both regimes while focusing on testing for a possible parameter instability in the remaining parameters, which also simplifies some of the derivations for the development of the asymptotic theory of the test statistics. Nevertheless, once the null hypothesis \(\mathbb{H}_{0}\) is rejected, a practitioner needs to locate the break point or the break point fraction \(\hat{\uptau}\) relative to the sample size, and this is the importance of the proposed framework6. **Remark 5**.: Suppose that the alternative hypothesis denoted with \(H_{1}\) is true. Furthermore, it is well-known in mathematical statistics that the optimal test (NP test or likelihood-ratio test) is described by the Neyman-Pearson lemma and the critical region of this test is defined as follows: \[C_{NP}(\alpha)=\left\{x:\frac{\mu_{U}(x)}{\nu(x)}\leq\lambda_{\alpha}\right\}, \tag{3.7}\] where \(\alpha\in(0,1)\) is the significance level and the constant \(\lambda_{\alpha}\) is chosen in such a way that \(\mu_{U}\Big{(}C_{NP}(\alpha)\Big{)}=\alpha\). Notice also that when we consider the asymptotic behaviour of the tests for large \(n\) any effects occurring due to non-randomized tests will be negligible. ### Estimating Breaks in Predictive Regressions **Example 5**.: Consider the following predictive regression model \[y_{t}=\begin{cases}\beta_{1}x_{t-1}+\epsilon_{t},&1\leq t\leq k_{1}^{0},\\ \beta_{2}x_{t-1}+\epsilon_{t},&k_{1}^{0}+1\leq t\leq k_{2}^{0},\\ \beta_{3}x_{t-1}+\epsilon_{t},&k_{2}^{0}+1\leq t\leq n\end{cases}\quad x_{t}=\begin{cases}\rho_{1}x_{t-1}+u_{t},&1\leq t\leq k_{1}^{0},\\ \rho_{2}x_{t-1}+u_{t},&k_{1}^{0}+1\leq t\leq k_{2}^{0},\\ \rho_{3}x_{t-1}+u_{t},&k_{2}^{0}+1\leq t\leq n\end{cases} \tag{3.8}\] with \(\rho_{1}=\left(1-\frac{c_{1}}{n}\right)\), \(\rho_{2}=\left(1+\frac{c_{2}}{n}\right)\) and \(\rho_{3}=\left(1-\frac{c_{3}}{n}\right)\) where \(c_{1},c_{2},c_{3}>0\). To develop an estimation procedure for the model given by (3.8), we first compute the difference of the residual sums of squares at the break-points \(k_{1}^{0}\) and \(k_{2}^{0}\). Let \(\text{RSS}(\tau)\) be the residual sum of squares evaluated at the date \([\tau n]\); then the following theorem can be shown to hold (see, Pang et al. (2021)). **Theorem 3** (Pang et al. 
(2021)).: For model (4.9)-(3.26), we have that \[\text{RSS}(\tau_{1}^{0})-\text{RSS}(\tau_{2}^{0})=\begin{cases}\eta_{1}&=2 \left(\frac{\sum_{t=1}^{k_{1}^{0}}x_{t-1}\epsilon_{t}}{\sum_{t=1}^{k_{1}^{0}}x _{t-1}^{2}-\frac{\sum_{t=1}^{k_{1}^{0}}x_{t-1}\epsilon_{t}}{\sum_{t=1}^{k_{2} ^{0}}x_{t-1}^{2}}}\right)\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2},\\ \eta_{2}&=2\left(\frac{\sum_{t=k_{1}^{0}+1}^{k_{2}^{0}}x_{t-1}^{2}\sum_{t=k_{2} ^{0}+1}^{n}x_{t-1}\epsilon_{t}-\sum_{t=k_{1}^{0}+1}^{k_{2}^{0}}x_{t-1}\epsilon _{t}\sum_{t=k_{2}^{0}+1}^{n}x_{t-1}^{2}}{\sum_{t=k_{1}^{0}+1}^{n}x_{t-1}^{2}} \right),\\ \eta_{3}&=-\frac{\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}\sum_{t=k_{1}^{0}+1}^{k_{2} ^{0}}x_{t-1}^{2}}{\sum_{t=1}^{k_{2}^{0}}x_{t-1}^{2}}\\ \eta_{4}&=-\frac{\sum_{t=k_{1}^{0}+1}^{k_{2}^{0}}x_{t-1}^{2}\sum_{t=k_{2}^{0} +1}^{n}x_{t-1}^{2}}{\sum_{t=k_{1}^{0}+1}^{n}x_{t-1}^{2}}\\ \Omega_{n}=\frac{\left(\sum_{t=1}^{k_{2}^{0}}x_{t-1}\epsilon_{t}\right)^{2}}{ \sum_{t=1}^{k_{2}^{0}}x_{t-1}^{2}}+\frac{\left(\sum_{t=k_{2}^{0}+1}^{n}x_{t-1 }\epsilon_{t}\right)^{2}}{\sum_{t=k_{2}^{0}+1}^{n}x_{t-1}^{2}}-\frac{\left(\sum _{t=1}^{k_{1}^{0}}x_{t-1}\epsilon_{t}\right)^{2}}{\sum_{t=1}^{k_{1}^{0}}x_{t- 1}^{2}}-\frac{\left(\sum_{t=k_{1}^{0}+1}^{n}x_{t-1}\epsilon_{t}\right)^{2}}{ \sum_{t=k_{1}^{0}+1}^{n}x_{t-1}^{2}}.\end{cases} \tag{3.9}\] Proof.: (3.10) \[\text{RSS}_{1,n}(\uptau^{0}_{1})=\sum_{t=1}^{k_{1}^{0}}\bigg{(}y_{t}-\hat{\beta}_{ 1}(\uptau^{0}_{1})x_{t-1}\bigg{)}^{2}+\sum_{t=k_{1}^{0}+1}^{k_{2}^{0}}\bigg{(}y_ {t}-\hat{\beta}_{2}(\uptau^{0}_{1})x_{t-1}\bigg{)}^{2}.\] \[\sum_{t=1}^{k_{1}^{0}}\bigg{(}y_{t}-\hat{\beta}_{1}(\uptau^{0}_{1})x_{t-1} \bigg{)}^{2}=\sum_{t=1}^{k_{1}^{0}}y_{t}^{2}-2\sum_{t=1}^{k_{1}^{0}}y_{t}\hat{ \beta}_{1}(\uptau^{0}_{1})x_{t-1}+\sum_{t=1}^{k_{1}^{0}}\hat{\beta}_{1}^{2}( \uptau^{0}_{1})x_{t-1}^{2}\] Therefore, we have that \[\sum_{t=1}^{k_{1}^{0}}\bigg{(}...\bigg{)}^{2} =\sum_{t=1}^{k_{1}^{0}}\Bigg{\{}(\beta_{1}x_{t-1}+\epsilon_{t})^{ 2}-2(\beta_{1}x_{t-1}+\epsilon_{t})\frac{\sum_{t=1}^{k_{1}^{0}}(\beta_{1}x_{t -1}+\epsilon_{t})x_{t-1}}{\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}}x_{t-1}+\left[\frac {\sum_{t=1}^{k_{1}^{0}}(\beta_{1}x_{t-1}+\epsilon_{t})x_{t-1}}{\sum_{t=1}^{k_ {1}^{0}}x_{t-1}^{2}}\right]^{2}x_{t-1}^{2}\Bigg{\}}\] \[=\sum_{t=1}^{k_{1}^{0}}\left\{\beta_{1}^{2}x_{t-1}^{2}+2\beta_{1} x_{t-1}\epsilon_{t}+\epsilon_{t}^{2}\right\}-2\frac{\left[\sum_{t=1}^{k_{1}^{0}}( \beta_{1}x_{t-1}+\epsilon_{t})x_{t-1}\right]^{2}}{\sum_{t=1}^{k_{1}^{0}}x_{t-1 }^{2}}+\frac{\left[\sum_{t=1}^{k_{1}^{0}}(\beta_{1}x_{t-1}+\epsilon_{t})x_{t -1}\right]^{2}}{\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}}\] \[=\beta_{1}^{2}\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}+2\beta_{1}\sum_{ t=1}^{k_{1}^{0}}x_{t-1}\epsilon_{t}+\sum_{t=1}^{k_{1}^{0}}\epsilon_{t}^{2}-\beta_{1}^ {2}\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}-\beta_{1}^{2}\frac{\left(\sum_{t=1}^{k_{1 }^{0}}x_{t-1}\epsilon_{t}\right)^{2}}{\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}}\] \[=2\beta_{1}\sum_{t=1}^{k_{1}^{0}}x_{t-1}\epsilon_{t}+\sum_{t=1}^{ k_{1}^{0}}\epsilon_{t}^{2}-\beta_{1}^{2}\frac{\left(\sum_{t=1}^{k_{1}^{0}}x_{t-1 }\epsilon_{t}\right)^{2}}{\sum_{t=1}^{k_{1}^{0}}x_{t-1}^{2}}\] ### Break Estimation Procedure The case study we illustrate in the previous section is based on the specific approach proposed by Pang et al. (2021). 
In this article, since the estimation methodology of the model coefficients is taken into consideration, we need to establish the asymptotic theory separately for the break-point estimator based on the OLS optimization and for the break-point estimator based on the IVX optimization, in order to evaluate the consistency and statistical properties of the estimated break-point. Our objective is to evaluate the properties of \(\hat{k}_{1}\), which is an estimator of the location of the break-point \(k_{1}^{0}\) in the slope parameters and intercept of the predictive regression model (see, Kostakis et al. (2015)). In particular, the estimator of the break-point is obtained by minimizing the concentrated sum of squared errors function as \[S_{1n}(k)=\sum_{t=1}^{k}\left(y_{t}-x_{t}^{\prime}\hat{\beta}_{1}(k)\right)^{2}+\sum_{t=k+1}^{n}\left(y_{t}-x_{t}^{\prime}\hat{\beta}_{2}(k)\right)^{2} \tag{3.11}\] where \(\hat{\beta}_{1}(k)\) and \(\hat{\beta}_{2}(k)\) denote the least squares estimators of the slope parameters within each regime for given \(k\). Alternatively, we can also reformulate \(\hat{k}_{1}\) as below \[\hat{k}_{1}=\mathsf{arg}\ \mathsf{max}_{k}\ G_{1n}(k)\quad\text{where}\ \ G_{1n}(k)=S_{n}-S_{1n}(k)\ \ \text{and}\ \ S_{n}=\sum_{t=1}^{n}\left(y_{t}-x_{t}^{\prime}\hat{\beta}\right)^{2} \tag{3.12}\] where \(S_{n}\) denotes the full-sample sum of squared errors. This formulation allows us to establish the weak consistency of the break fraction \(\hat{\uptau}=\hat{k}_{1}/n\) under a certain set of assumptions. Thus, to have more meaningful power comparisons we study in more detail the asymptotic behaviour of the break estimators under the alternative hypothesis based on the two estimators. To do this, we follow the methodology proposed within the framework of Pang et al. (2021), but in our setting we focus on the case of a single unknown break-point. We consider separately the break-point estimator based on the OLS versus the IVX estimators to evaluate the consistency of the estimated break-point and obtain corresponding convergence rates7. Footnote 7: The finite sample properties of \(\hat{\uptau}_{1}\) are important not only because of the direct economic implications that the accurate dating of a structural break in the mean may entail but also for the subsequent analysis which could involve the search for further breaks in the variance, typically based on the residual sequence (see, Pitarakis (2004)). #### 3.2.1 OLS-based Break-Point Date Estimator for a Single Structural Break **Step 1:** For any given \(0<\tau<1\), denote with \[\hat{\beta}_{j}(\uptau)=\frac{\sum_{t=1}^{[\uptau n]}y_{t}x_{t-1}}{\sum_{t=1}^{[\uptau n]}x_{t-1}^{2}}\ \ \text{and}\ \ \hat{\beta}_{3}(\uptau)=\frac{\sum_{t=[\uptau n]+1}^{n}y_{t}x_{t-1}}{\sum_{t=[\uptau n]+1}^{n}x_{t-1}^{2}} \tag{3.13}\] Then the change-point estimator of \(\uptau_{2}^{0}\) is defined as below \[\hat{\uptau}_{2,n}=\underset{\uptau\in[0,1]}{\text{argmin}}\ \text{RSS}_{2,n}(\uptau), \tag{3.14}\] where \[\text{RSS}_{2,n}(\uptau)=\sum_{t=1}^{[\uptau n]}\left(y_{t}-\hat{\beta}_{j}(\uptau)x_{t-1}\right)^{2}+\sum_{t=[\uptau n]+1}^{n}\left(y_{t}-\hat{\beta}_{3}(\uptau)x_{t-1}\right)^{2}. \tag{3.15}\] Once we obtain \(\hat{\uptau}_{2,n}\), the least squares estimator of \(\beta_{3}\) is represented by \(\hat{\beta}_{3}^{OLS}\left(\hat{\uptau}_{2,n}\right)\) and the OLS estimator of \(k_{2}^{0}\) is denoted by \(\hat{k}_{2}=\left[\hat{\uptau}_{2,n}n\right]\). 
**Step 2:** For any given \(0<\tau<\hat{\tau}_{2,n}\), the OLS estimators of the parameters \(\beta_{1}\) and \(\beta_{2}\) are given by \[\hat{\beta}_{1}(\uptau)=\frac{\sum_{l=1}^{\left[\uptau\right]}y_{l}x_{t-1}}{ \sum_{t=1}^{\left[\uptau\right]}x_{t-1}^{2}}\ \ \text{and}\ \ \hat{\beta}_{2}(\uptau)=\frac{\sum_{t= \left[\uptau\right]+1}^{\hat{k}_{2}}y_{t}x_{t-1}}{\sum_{t=\left[\uptau\right]+1 }^{\hat{k}_{2}}x_{t-1}^{2}} \tag{3.16}\] respectively. Then the change-point estimator of \(\uptau_{1}^{0}\) is defined as below \[\hat{\uptau}_{1,n}=\underset{\tau\in[0,\hat{\uptau}_{2,n}]}{\text{argmin}} \ \text{RSS}_{1,n}(\uptau), \tag{3.17}\] where \[\text{RSS}_{1,n}(\uptau)=\sum_{t=1}^{\left[\uptau\right]}\left(y_{t}-\hat{ \beta}_{1}(\uptau)x_{t-1}\right)^{2}+\sum_{t=\left[\uptau\right]+1}^{\hat{k}_ {2}}\left(y_{t}-\hat{\beta}_{2}(\uptau)x_{t-1}\right)^{2}. \tag{3.18}\] Once we obtain \(\hat{\uptau}_{1,n}\), the least squares estimator of \(\beta_{1}\) and \(\beta_{2}\) are represented by \(\hat{\beta}_{1}^{OLS}\left(\hat{\uptau}_{1,n}\right)\) and \(\hat{\beta}_{2}^{OLS}\left(\hat{\uptau}_{1,n}\right)\) respectively, and the OLS of \(k_{1}^{0}\) is denoted by \(\hat{k}_{1}=\left[\hat{\uptau}_{1,n}n\right]\). **Remark 6**.: The break estimators that correspond to the OLS and IVX estimation, can be helpful to investigate which break-point will be identified first under abstract degree of persistence in predictive regression models. Therefore, after showing that the estimators \(\hat{\uptau}_{1,n}^{OLS}\) and \(\hat{\uptau}_{1,n}^{IVX}\) are consistent estimators of the true break-point \(\pi_{0}\), we can determine their convergence rates and obtain useful performance statistics, under the alternative hypothesis, such as the average run length or the power loss function. Thus, is important to examine also the convergence rates and asymptotic behaviour of the two estimators when under the alternative hypothesis of parameter instability, we obtain an estimator of the break point fraction and also modified parameter estimates which are functions of the estimated break point fraction instead of the initial sample size. #### 3.2.2 IVX-based Break-Point Date Estimator for a Single Structural Break **Step 1:** For any given \(0<\tau<1\), denote with \[\hat{\beta}_{j}^{IVX}(\uptau)=\frac{\sum_{t=1}^{\left[\uptau\right]}y_{t} \tilde{z}_{t-1}}{\sum_{t=1}^{\left[\uptau\right]}\tilde{z}_{t-1}^{2}}\ \ \text{and}\ \ \hat{\beta}_{3}^{IVX}(\tau)=\frac{\sum_{t= \left[\uptau\right]+1}^{n}y_{t}\tilde{z}_{t-1}}{\sum_{t=\left[\uptau\right]+1 }^{n}\tilde{z}_{t-1}^{2}} \tag{3.19}\] Then the change-point estimator of \(\mathfrak{T}_{2}^{0}\) is defined as below \[\hat{\mathfrak{T}}_{2,n}=\underset{\tau\in[0,1]}{\text{argmin}}\ \text{RSS}_{2,n}( \mathfrak{\tau}), \tag{3.20}\] where \[\text{RSS}_{2,n}(\mathfrak{\tau})=\sum_{t=1}^{[\mathfrak{\tau}n]}\bigg{(}y_{t}- \hat{\beta}_{j}^{IVX}(\mathfrak{\tau})\tilde{z}_{t-1}\bigg{)}^{2}+\sum_{t=[ \mathfrak{\tau}n]+1}^{n}\bigg{(}y_{t}-\hat{\beta}_{3}^{IVX}(\mathfrak{\tau}) \tilde{z}_{t-1}\bigg{)}^{2}. \tag{3.21}\] Once we obtain \(\hat{\mathfrak{T}}_{2,n}\), the IVX estimator of \(\beta_{3}\) is represented by \(\hat{\beta}_{3}^{IVX}\left(\hat{\mathfrak{T}}_{2,n}\right)\) and the IVX of \(k_{2}^{0}\) is denoted by \(\hat{k}_{2}^{IVX}=[\hat{\mathfrak{T}}_{2,n}n]\). 
**Step 2:** For any given \(0<\tau<\hat{\tau}_{2,n}\), the IVX estimators of the parameters \(\beta_{1}\) and \(\beta_{2}\) are given by \[\hat{\beta}_{1}^{IVX}(\mathfrak{\tau})=\frac{\sum_{t=1}^{[\mathfrak{\tau}n]}y _{t}\tilde{z}_{t-1}}{\sum_{t=1}^{[\mathfrak{\tau}n]}\tilde{z}_{t-1}^{2}}\ \ \text{and}\ \ \hat{\beta}_{2}^{IVX}(\mathfrak{\tau})=\frac{\sum_{t=[ \mathfrak{\tau}n]+1}^{\hat{k}_{2}^{IVX}}y_{t}\tilde{z}_{t-1}}{\sum_{t=[ \mathfrak{\tau}n]+1}^{\hat{k}_{2}^{IVX}}\tilde{z}_{t-1}^{2}} \tag{3.22}\] respectively. Then the change-point estimator of \(\mathfrak{T}_{1}^{0}\) is defined as below \[\hat{\mathfrak{T}}_{1,n}=\underset{\tau\in[0,\hat{\mathfrak{T}}_{2,n}]}{\text {argmin}}\ \text{RSS}_{1,n}(\mathfrak{\tau}), \tag{3.23}\] \[\text{RSS}_{1,n}(\mathfrak{\tau})=\sum_{t=1}^{[\mathfrak{\tau}n]}\bigg{(}y_{t }-\hat{\beta}_{1}^{IVX}(\mathfrak{\tau})\tilde{z}_{t-1}\bigg{)}^{2}+\sum_{t=[ \mathfrak{\tau}n]+1}^{\hat{k}_{2}}\bigg{(}y_{t}-\hat{\beta}_{2}^{IVX}( \mathfrak{\tau})\tilde{z}_{t-1}\bigg{)}^{2}. \tag{3.24}\] Once we obtain \(\hat{\mathfrak{T}}_{1,n}\), IVX estimator of \(\beta_{1}\) and \(\beta_{2}\) are represented by \(\hat{\beta}_{1}^{IVX}\left(\hat{\mathfrak{T}}_{1,n}\right)\) and \(\hat{\beta}_{2}^{IVX}\left(\hat{\mathfrak{T}}_{1,n}\right)\) respectively, and the IVX of \(k_{1}^{0}\) is denoted by \(\hat{k}_{1}^{IVX}=[\hat{\mathfrak{T}}_{1,n}n]\). **Remark 7**.: Therefore, based on the break estimators above we want to investigate which break-point will be identified first under abstract degree of persistence using the two estimators. The break estimators that correspond to the OLS and IVX estimation, can be helpful to investigate which break-point will be identified first under abstract degree of persistence in predictive regression models. After showing that the estimators \(\hat{\mathfrak{T}}_{1,n}^{OLS}\) and \(\hat{\mathfrak{T}}_{1,n}^{IVX}\) are consistent estimators of the true break-point \(\pi_{0}\), we can determine their convergence rates and obtain useful performance statistics, under the alternative hypothesis, such as the average run length or the power loss function. ### Main Aspects on Predictive Regression Model Therefore, notice that each of the covariates of the predictive regression are modelled using the autoregressive process \(x_{t}=\rho_{n}x_{t-1}+v_{t}\), \(x_{0}=0\). Specifically, when \(\rho_{n}=\rho\) such that \(|\rho|<1\) then, \(x_{t}\) is a stationary weakly dependent process. However, the present study focuses on cases where \(x_{t}\) is nonstationary. In particular, in these cases \(\rho_{n}=\left(1+\frac{c}{n}\right)\). More precisely, if the autoregressive parameter is fixed with \(\rho_{n}=\rho\) and \(|\rho|<1\), then \(x_{t}\) is asymptotically stationary and weakly dependent. Therefore, this framework helps to unify structural break testing in predictive regression models in cases where the properties of the predictor is not known thus offering robustness to integration order (Duffy and Kasparis, 2021). \[y_{t} =\mu+\mathbf{\beta}^{\prime}\mathbf{x}_{t-1}+u_{t},\quad\text{ for }\ 1 \leq t\leq n, \tag{3.26}\] \[\mathbf{x}_{t} =\left(\mathbf{I}_{p}-\frac{\mathbf{C}_{p}}{n^{\gamma}}\right)\mathbf{x}_{t- 1}+\mathbf{v}_{t}, \tag{3.25}\] where \(Y_{t}\in\mathbb{R}\) is an \(1-\)dimensional vector and \(\mathbf{x}_{t}\in\mathbb{R}^{p\times n}\) is a \(p-\)dimensional vector of local unit root regressors, with an initial condition \(\mathbf{x}_{0}=0\). 
Moreover, \(\mathbf{C}=\mathsf{diag}\{c_{1},...,c_{p}\}\) is a \(p\times p\) diagonal matrix which determines the degree of persistence of the regressors through the unknown persistence coefficients \(c_{i}\)'s, which are assumed to be positive constants. Define with \(\mathbf{\eta}_{t}=\left(u_{t},\mathbf{v}_{t}^{\prime}\right)^{\prime}\). Then, the partial sum process constructed from \(\mathbf{\eta}_{t}\) satisfies a multivariate invariance principle. That is, for \(r\in[0,1]\) and as \(n\to\infty\) we have (where \(\Rightarrow\) denotes weak convergence in distribution), \[X_{n}(r):=n^{-1/2}\sum_{t=1}^{\lfloor nr\rfloor}\mathbf{\eta}_{t}\Rightarrow\mathbf{B}(r) \tag{3.27}\] where \(\mathbf{B}(r)\) is a \((1+p)-\)dimensional Brownian motion with covariance matrix \[\mathbf{\Omega}=\underset{n\to\infty}{\text{lim}}\frac{1}{n}\mathbb{E}\left[\left(\sum_{t=1}^{n}\mathbf{\eta}_{t}\right)\left(\sum_{t=1}^{n}\mathbf{\eta}_{t}^{\prime}\right)\right]\equiv\underset{n\to\infty}{\text{lim}}\frac{1}{n}\sum_{t=1}^{n}\sum_{j=1}^{n}\mathbb{E}\Big{[}\mathbf{\eta}_{t}\mathbf{\eta}_{j}^{\prime}\Big{]} \tag{3.28}\] Now \(\mathbf{\Omega}\) and \(\mathbf{B}(r)\) are partitioned as below \[\mathbf{\Omega}:=\begin{bmatrix}\omega_{uu}&\omega_{vu}^{\prime}\\ \omega_{uv}&\mathbf{\Omega}_{vv}\end{bmatrix}=\mathbf{\Sigma}+\mathbf{\Lambda}+\mathbf{\Lambda}^{\prime},\quad\mathbf{B}(r):=\begin{bmatrix}B_{u}(r)\\ B_{v}(r)\end{bmatrix},\quad\Omega_{\varepsilon}=\frac{1}{n}\sum_{t=1}^{n}\hat{\varepsilon}_{t}\hat{\varepsilon}_{t}^{\prime}+\frac{2}{n}\sum_{j=1}^{n-1}w\left(\frac{j}{M}\right)\sum_{t=j+1}^{n}\hat{\varepsilon}_{t}\hat{\varepsilon}_{t-j}^{\prime}, \tag{3.29}\] with components given by \[\Sigma_{\epsilon}=\frac{1}{n}\sum_{t=1}^{n}\hat{\varepsilon}_{t}\hat{\varepsilon}_{t}^{\prime} \tag{3.30}\] \[\Lambda_{\epsilon}=\frac{1}{n}\sum_{j=1}^{n-1}w\left(\frac{j}{M}\right)\sum_{t=j+1}^{n}\hat{\varepsilon}_{t}\hat{\varepsilon}_{t-j}^{\prime} \tag{3.31}\] **Remark 8**.: Notice that due to the LUR specification of the autocorrelation matrix \(\mathbf{R}_{n}\), the process \(\mathbf{x}_{t}\) represents a restricted VAR model. When \(\mathbf{x}_{t}\) is an unrestricted VAR model and the regressors of the predictive regression model have the same lag as the regressand, then the system corresponds to the cointegrating predictive regression and thus different VAR representation theory is needed to handle the possible near-nonstationary and near-explosive components. We assume that the innovation sequence is a stationary vector and therefore we can examine the nonstationary properties of the predictors. However, it is important to emphasize that we refer to steady states which imply the existence of such cointegrating relationships. * In other words, although the limiting distributions are nonstandard and nonpivotal, bootstrap-based simulation methodologies can be employed to obtain critical values or p-values. In particular, these limiting distributions are determined by sup functionals of Gaussian processes or functionals which depend on the localizing coefficient of persistence. * Furthermore, comparing the performance of the OLS with the IVX estimators as the value of the localizing coefficient increases, that is, as we move away from the unit boundary, for low degrees of endogeneity (low values of the correlation between the innovation sequences) both have broadly similar empirical size. 
However, as we move closer to the unit boundary, that is, for nearly-integrated regressors with high endogeneity, we can clearly see that for the IVX estimator we obtain empirical size closer to the nominal size, while for the OLS estimator the empirical size is almost double or even three times the nominal size. In other words, the IVX estimator corrects the endogeneity bias under high persistence which we would obtain if we relied on inference based on the OLS estimator. * Note that in the case that \(\omega_{uv}\neq 0\) the regressors are endogenous, and in addition to regressor endogeneity the setting also allows for relatively unrestricted forms of serial correlation of the errors \(\mathbf{\eta}_{t}\). These two aspects in general necessitate some form of modified least squares estimation in conjunction with HAC arguments to allow for the development of standard asymptotic inference. * It is well known that in cointegrating regressions the OLS estimator is consistent despite the fact that the regressors are allowed to be endogenous and the errors are allowed to be serially correlated. However, the limiting distribution of the OLS estimator is contaminated by second order bias terms, reflecting the correlation structure between the regressors and the errors (see the paper: Tuning parameter free inference in Cointegrating Regressions). * The literature provides several estimators which overcome this difficulty at the cost of tuning parameter choices such as: the number of leads and lags for the Dynamic OLS (D-OLS) estimator, kernel and bandwidth choices for the fully modified OLS estimator and the canonical cointegrating regression estimator. Such tuning parameters are often difficult to choose in practice and the finite sample performance of the estimators and tests based upon them often reacts sensitively to their choices. * In contrast to the aforementioned approaches, the integrated modified OLS (IM-OLS) estimator avoids the choice of tuning parameters. However, standard asymptotic inference based on the IM-OLS estimator does require the estimation of a long-run variance parameter. Therefore, this is typically achieved by non-parametric kernel estimators, which necessitate kernel and bandwidth choices. In particular, to capture their effects in finite samples, Vogelsang and Wagner (2014) propose fixed-b theory for obtaining critical values. However, their simulation results reveal that when endogeneity and/or error serial correlation is strong, a large sample size is needed for the procedure to yield reasonable sizes. 
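For completeness, a minimal sketch of a kernel-based long-run covariance estimator in the spirit of (3.29)-(3.31) is given below; the Bartlett kernel, the rule-of-thumb bandwidth, the demeaning step and the function name are illustrative assumptions rather than the prescription of the present framework.

```python
import numpy as np

def long_run_cov(eps, M=None):
    """Kernel long-run covariance Omega = Sigma + Lambda + Lambda' built from an
    (n x p) array of residuals eps, in the spirit of (3.29)-(3.31)."""
    eps = np.asarray(eps, dtype=float)
    eps = eps - eps.mean(axis=0)                       # demean the residuals (a common convention)
    n, p = eps.shape
    if M is None:
        M = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))  # a rule-of-thumb bandwidth choice
    sigma = eps.T @ eps / n                            # Sigma: contemporaneous covariance
    lam = np.zeros((p, p))
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1.0)                        # Bartlett weights w(j/M)
        lam += w * (eps[j:].T @ eps[:-j]) / n          # Lambda: weighted autocovariances
    return sigma + lam + lam.T, sigma, lam

# Hypothetical usage with simulated bivariate residuals
rng = np.random.default_rng(7)
resid = rng.standard_normal((500, 2))
omega_hat, sigma_hat, lambda_hat = long_run_cov(resid)
```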
## 4 Background Main Results **Lemma 2**.: Consider the following predictive regression model \[y_{t}=\begin{cases}\beta_{1}y_{t-1}+\epsilon_{t},&1\leq t\leq k_{1}^{0},\\ \beta_{2}y_{t-1}+\epsilon_{t},&k_{1}^{0}+1\leq t\leq k_{2}^{0},\\ \beta_{3}y_{t-1}+\epsilon_{t},&k_{2}^{0}+1\leq t\leq n\end{cases} \tag{4.1}\] Under Assumptions C1-C4, the following results hold jointly \[\frac{1}{n}\sum_{t=1}^{k_{1}^{0}}y_{t-1}u_{t}\Rightarrow\frac{\sigma^{2}}{2}\left(W^{2}(\tau_{1}^{0})-\tau_{1}^{0}\right),\quad\frac{1}{n^{2}}\sum_{t=1}^{k_{1}^{0}}y_{t-1}^{2}\Rightarrow\sigma^{2}\int_{0}^{\tau_{1}^{0}}W^{2}(s)ds,\quad\frac{y_{k_{1}^{0}}}{\sqrt{n}}\Rightarrow\sigma W\left(\tau_{1}^{0}\right) \tag{4.2}\] Proof.: To prove part (a), note that \[y_{t}=y_{0}+\sum_{j=1}^{t}u_{j}=\frac{tc}{n^{\eta}}+y_{0}+\sum_{j=1}^{t}\epsilon_{j},\quad 0\leq t\leq k_{1}^{0}. \tag{4.3}\] Therefore, we obtain that \[\frac{1}{n}\sum_{t=1}^{k_{1}^{0}}y_{t-1}u_{t}=\frac{1}{n}\sum_{t=1}^{k_{1}^{0}}\left(\frac{(t-1)c}{n^{\eta}}+y_{0}+\sum_{j=1}^{t-1}\epsilon_{j}\right)\left(\frac{c}{n^{\eta}}+\epsilon_{t}\right)=\frac{1}{n}\sum_{t=1}^{k_{1}^{0}}\left(\sum_{i=1}^{t-1}\epsilon_{i}\right)\epsilon_{t}+o_{p}(1). \tag{4.4}\] Since \(\eta>1/2\), by applying standard results in the unit root literature, we have that \[\frac{1}{n}\sum_{t=1}^{k_{1}^{0}}y_{t-1}u_{t}\Rightarrow\sigma^{2}\int_{0}^{\tau_{1}^{0}}W(s)dW(s)=\frac{\sigma^{2}}{2}\left(W^{2}(\tau_{1}^{0})-\tau_{1}^{0}\right) \tag{4.5}\] Furthermore, note that it can be proved that \[\frac{\sum_{t=1}^{\hat{k}_{1}}y_{t-1}^{2}}{\sum_{t=1}^{k_{1}^{0}}y_{t-1}^{2}}\overset{p}{\to}1,\quad\frac{\sum_{t=\hat{k}_{1}+1}^{\hat{k}_{2}}y_{t-1}^{2}}{\sum_{t=k_{1}^{0}+1}^{k_{2}^{0}}y_{t-1}^{2}}\overset{p}{\to}1,\quad\frac{\sum_{t=\hat{k}_{2}+1}^{n}y_{t-1}^{2}}{\sum_{t=k_{2}^{0}+1}^{n}y_{t-1}^{2}}\overset{p}{\to}1\quad\text{and}\quad\frac{\hat{\sigma}^{2}}{\sigma^{2}}\overset{p}{\to}1. \tag{4.6}\] Another important aspect is to consider the case of weakly dependent errors, which amounts to removing the independence assumption of the error sequence \(\left\{\epsilon_{t}\right\}_{t=1}^{n}\). Specifically, the case of weakly dependent errors assumes the following linear process representation \[\epsilon_{t}=\sum\limits_{j=0}^{\infty}a_{j}e_{t-j},\ \ \text{for}\ \ t\geq 1. \tag{4.7}\] Furthermore, to ensure that \(\epsilon_{t}\)'s are weakly dependent error terms we assume that \(a(1):=\sum_{j=0}^{\infty}a_{j}\neq 0\), \(\sum_{j=0}^{\infty}j^{3/2}|a_{j}|<\infty\), and \(\left\{e_{t}\right\}_{t=1}^{n}\) is a sequence of _i.i.d_ random variables with mean zero and variance \(0<\sigma_{e}^{2}<\infty\). Moreover, the consistency of three estimators that correspond to the three regimes, when the estimated break is incorporated, needs to be established such that (Theorem 4.1 in Pang et al. 
(2021)) \[\begin{cases}\tau_{1}^{0}n\left(\hat{\beta}_{1}\left(\hat{\tau}_{1,n}\right)- \beta_{1}\right)\Rightarrow\frac{W^{2}(1)-a(2)/a^{2}(1)}{2\int_{0}^{1}W^{2}( s)ds},\\ \\ \sqrt{\frac{\tau_{1}^{0}n^{1+\alpha_{1}}}{2c_{1}}}\beta_{2}^{k_{2}^{0}-k_{1}^ {0}}\left(\hat{\beta}_{2}\left(\hat{\tau}_{1,n}\right)-\beta_{2}\right) \Rightarrow\xi,\\ \\ \sqrt{\frac{\tau_{1}^{0}n^{1+\alpha_{2}}}{2c_{2}}}\beta_{2}^{k_{2}^{0}-k_{1}^ {0}}\left(\hat{\beta}_{3}\left(\hat{\tau}_{2,n}\right)-\beta_{3}\right) \Rightarrow\zeta,\end{cases} \tag{4.8}\] where \(a(2)=\sum_{j=0}^{\infty}a_{j}^{2}\), and \(\xi\) and \(\zeta\) are two independent standard Cauchy variates. **Remark 9**.: In terms of the identification of the break point this is determined by the stochastic orders of \(P_{1}\) and \(P_{2}\). If \(P_{2}\) has a higher stochastic order of magnitude that \(P_{1}\), then \(RSS(\tau_{1}^{0})-RSS(\tau_{2}^{0})\) will diverge to \(\infty\) in probability, and \(k_{2}^{0}\) will be identified first asymptotically. Instead, if \(P_{1}\) has a higher stochastic order of magnitude than \(P_{2}\), \(RSS(\tau_{1}^{0})-RSS(\tau_{2}^{0})\) will go to \(-\infty\) in probability, and \(k_{1}^{0}\) will be identified first asymptotically. However, when \(P_{1}\) and \(P_{2}\) have the same stochastic order of magnitude, which break will be identified first depends on the magnitude and the duration of the break, which are unobservable in reality. Therefore, it is difficult to determine which break will be uncovered first, and we need to test and estimate the second break from all subsamples split by the first estimated break point (Pang et al., 2021). Local to Unity Consider the following predictive regression model \[y_{t}=\begin{cases}\beta_{1}y_{t-1}+u_{t},&1\leq t\leq k_{1}^{0},\\ \beta_{2}y_{t-1}+u_{t},&k_{1}^{0}+1\leq t\leq k_{2}^{0},\\ \beta_{3}y_{t-1}+u_{t},&k_{2}^{0}+1\leq t\leq n\end{cases} \tag{4.9}\] where \(\beta_{1}=\left(1-\frac{\gamma}{n}\right)\) with \(\gamma\in\mathbb{R}\), \(\beta_{2}=\beta_{2n}=\left(1+\frac{c_{1}}{k_{n}}\right)\) with \(c_{1}>0\), \(\beta_{3}=\beta_{3n}=\left(1-\frac{c_{2}}{k_{n}}\right)\) with \(c_{2}>0\) and \(u_{t}=cn^{-\eta}+\epsilon_{t}\) with \(c\in\mathbb{R}\) and \(\eta>1/2\) when \(t\leq k_{1}^{0}\), and \(u_{t}=\epsilon_{t}\) when \(t>k_{1}^{0}\). Then \(\hat{\beta}_{1}\left(\hat{\tau}_{1,n}\right)\), \(\hat{\beta}_{2}\left(\hat{\tau}_{1,n}\right)\) and \(\hat{\beta}_{3}\left(\hat{\tau}_{2,n}\right)\) are all consistent, and their asymptotic distributions are respectively given by \[\begin{cases}n\left(\hat{\beta}_{1}\left(\hat{\tau}_{1,n}\right)-\beta_{1} \right)\Rightarrow\frac{\int_{0}^{\tau_{1}^{0}}\int_{0}^{\tau}e^{\gamma(r-s)} dW(s)dW(r)}{\int_{0}^{\tau_{1}^{0}}\left(\int_{0}^{\tau}e^{\gamma(r-s)}dW(s) \right)^{2}dr},\\ \\ \sqrt{\frac{nk_{n}}{2c_{1}}}\beta_{2}^{k_{2}^{0}-k_{1}^{0}}\left(\hat{\beta}_{ 2}\left(\hat{\tau}_{1,n}\right)-\beta_{2}\right)\Rightarrow\frac{\sqrt{2c_{1} }\mathcal{X}}{\int_{0}^{\tau}e^{\gamma(\tau_{1}^{0}-s)}dW(s)},\\ \\ \sqrt{\frac{nh_{n}}{2c_{2}}}\beta_{2}^{k_{2}^{0}-k_{1}^{0}}\left(\hat{\beta}_{ 3}\left(\hat{\tau}_{2,n}\right)-\beta_{3}\right)\Rightarrow\frac{\sqrt{2c_{2} }\mathcal{Z}}{\int_{0}^{\tau_{1}^{n}}e^{\gamma(\tau_{1}^{0}-s)}dW(s)},\end{cases} \tag{4.10}\] where \(\mathcal{X}\) and \(\mathcal{Z}\) are two random variables obeying \(\mathcal{N}\left(0,\frac{1}{2c_{1}}\right)\) and \(\mathcal{N}\left(0,\frac{1}{2c_{2}}\right)\) respectively. 
Moreover, the random variables \(\mathcal{X}\), \(\mathcal{Z}\) and \(\left\{W(s),0\leq s\leq\tau_{1}^{0}\right\}\) are mutually independent. Additional Hypothesis Testing as in Pang et al. (2021): We discuss a hypothesis testing problem which is important for testing for the equality of the persistence in the two shifting regimes. Suppose that \(k_{n}=n^{\alpha_{1}}\) with \(0<\alpha_{1}<1\) and \(h_{n}=n^{\alpha_{2}}\) with \(0<\alpha_{2}<1\). Therefore, the null hypothesis is given by \(H_{0}:c_{1}=c_{2}=0\). Under the null, this implies that the underlying stochastic process has an autoregressive unit root throughout the sample. The alternative hypothesis can be formulated as below \(H_{1}:c_{1}\neq 0\) and \(c_{2}\neq 0\). Therefore, under the alternative hypothesis the underlying process is generated by the AR(1) model with \(k_{n}=n^{\alpha_{1}}\) and \(h_{n}=n^{\alpha_{2}}\). Denote with \(\hat{\beta}=\frac{\sum_{t=1}^{n}y_{t}y_{t-1}}{\sum_{t=1}^{n}y_{t-1}^{2}}\), and the associated test statistic is given by \[t_{n}=\sqrt{\sum_{t=1}^{n}y_{t-1}^{2}}\left(\hat{\beta}-1\right)=\frac{\sum_{t=1}^{n}y_{t-1}\left(y_{t}-y_{t-1}\right)}{\sqrt{\sum_{t=1}^{n}y_{t-1}^{2}}}. \tag{4.11}\] Furthermore, it is well known that under \(H_{0}\), we have that \[t_{n}\Rightarrow\frac{W^{2}(1)-1}{2\sqrt{\int_{0}^{1}W^{2}(s)ds}} \tag{4.12}\] which implies that the t-ratio given by the random variable \(t_{n}\) is bounded in probability under \(H_{0}\). On the other hand, since we can prove that \(|t_{n}|\) will go to infinity in probability under \(H_{1}\), it implies that the t-statistic has the ability to discriminate between the data generated from a unit root model vis-a-vis the data generated from the model specification above. **Example 6** (Structural Breaks both in Mean and Variance).: Consider the following DGP \[y_{t}=\mathbf{x}_{t-1}^{\prime}\mathbf{\beta}_{t}+u_{t},\qquad\text{for}\ \ t=1,...,n, \tag{4.13}\] \[\mathbf{x}_{t}=\left(\mathbf{I}_{p}-\frac{\mathbf{C}_{p}}{n^{\gamma}}\right)\mathbf{x}_{t-1}+\mathbf{v}_{t}. \tag{4.14}\] We consider a time-varying coefficient vector written as \[\mathbf{\beta}_{t}=\mathbf{\beta}_{1}\mathbf{1}\big{(}t\leq k_{1}^{0}\big{)}+\mathbf{\beta}_{2}\mathbf{1}\big{(}t>k_{1}^{0}\big{)} \tag{4.15}\] which implies a structural break setting in the coefficients of the predictive regression model. Notice that, for the moment, we assume that the model of interest does not have an intercept. In other words, we do not consider a transformation of the \(y_{t}\) random variable with a model intercept. Furthermore, we introduce conditional heteroscedasticity for the innovation term of the predictive regression model such that \(u_{t}=\sigma_{t}\epsilon_{t}\), and we consider the existence of a possible break-point in the conditional variance at an unknown location which can be at a different sample fraction from the break-point that corresponds to the model parameters. Therefore, it follows that \[\sigma_{t}=\sigma_{1}\mathbf{1}\big{(}t\leq k_{2}^{0}\big{)}+\sigma_{2}\mathbf{1}\big{(}t>k_{2}^{0}\big{)} \tag{4.16}\] where \(\sigma_{1}>0\) and \(\sigma_{2}>0\), with the structural break in the variance occurring at some unknown location \(k_{2}^{0}\). In other words, within the particular testing environment we consider the case of a break in the conditional mean parameters of the predictive regression model and in the error variance, occurring at break-point locations \(k_{1}^{0}\) and \(k_{2}^{0}\) respectively. 
For simplicity we will refer to the breaks in the model coefficients, that is, in the parameter vector \(\mathbf{\beta}_{i}\) and in \(\sigma_{i}\) for \(i\in\{1,2\}\), as the break in mean and the break in variance respectively. ## 5 Conclusion Our main research objective is to investigate the statistical properties and asymptotic behaviour of break-point estimators when a single change-point occurs in predictive regression models. In this article we consider testing for a structural break in univariate time series regression models; an active research literature considers statistical methodologies for estimation and inference in change-point models for multivariate time series (see, Preuss et al. (2015) among others). We leave such considerations and extensions of our framework as future research. Other extensions include considering a sequence of break-point estimators in a high-dimensional predictive regression model, using shrinkage-type estimators to obtain the break-point date estimators (see, the excellent framework proposed recently by Tu and Xie (2023)8; see also the discussion on the shrinkage inference approach in Katsouris (2023) and the shrinkage methodologies presented in Chen and Nkurunziza (2017), Nkurunziza and Ahmed (2010) as well as Nkurunziza (2021)). Footnote 8: Although the term "sporadic predictability", roughly speaking, reflects predictability adapted to the changing environment generated by the myriad global and local macroeconomic shocks, the first word of that paper title is arguably not appropriate for an economics paper. * Predictive regression models are employed for statistical inference purposes when the lagged value of a financial variable is used as a predictor for next-period stock returns. Therefore, using the proposed Wald-type statistics for detecting parameter instability, we examine whether stock returns are predictable even when we account for the presence of a single structural break in the relation between the regressand and the persistent predictors. Although the predictive regression errors are uncorrelated with any predetermined regressor, they are allowed to be contemporaneously correlated with the innovations of the local unit root processes. * The asymptotic theory is established using weak convergence arguments for sample moment functionals into OU functionals (see, Uhlenbeck and Ornstein (1930)). In order to correct for the finite sample effects of estimating the model intercept, which are most pronounced for highly persistent regressors that are strongly correlated with the predictive model's innovations, Kostakis et al. (2015) recommend the use of a finite-sample correction. On the other hand, the inclusion of this correction factor does not alter any of the large sample results. * In other words, when we consider the joint predictability and parameter instability tests, a rejection by these statistics can only be interpreted as signaling instability in at least one of the model parameters. Therefore, if the objective is not only to test for overall model stability but also to determine which particular subset of parameters is unstable, a sup-Wald test of the joint stability of all parameters in the predictive regression model is needed. ## 6 Appendix ### Auxiliary Results **Lemma 3**.: Assume that \(x_{t}\) is generated by \[x_{t}=\left(1-\frac{c}{n^{\gamma}}\right)x_{t-1}+u_{t},\ \ x_{0}=0,c>0,\gamma\in(0,1). \tag{6.1}\] for \(t\in\{1,...,n\}\), where \(u_{t}\sim_{i.i.d}\mathcal{N}(0,\sigma_{u}^{2})\).
Then the following results hold: \[(i) \frac{1}{n^{3/2}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t-1}\overset{d}{\rightarrow}\int_{0}^{s}J_{c}(r)dr \tag{6.2}\] \[(ii) \frac{1}{n^{\frac{1}{2}+\gamma}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t-1}\overset{p}{\rightarrow}J_{c}(s) \tag{6.3}\] **Example 7**.: Consider the integrated process: \((1-L)x_{t}=u_{t}\). Then, by recursive substitution we obtain that \(x_{t}=\sum_{j=1}^{t}u_{j}+x_{0}\), for \(1\leq t\leq n\). Define \[X_{n}(r)=\frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor nr\rfloor}u_{j}=\frac{1}{\sqrt{n}}S_{\lfloor nr\rfloor}\in\mathcal{D}(0,1). \tag{6.4}\] \[\sum_{t=1}^{n}x_{t} =\sum_{t=1}^{n}\left[S_{t-1}+u_{t}+x_{0}\right]=\sqrt{n}\sum_{t=1}^{n}\left[\frac{1}{\sqrt{n}}S_{t-1}\right]+\sum_{t=1}^{n}u_{t}+nx_{0}\] \[=n\sqrt{n}\sum_{t=1}^{n}\left[\int_{(t-1)/n}^{t/n}X_{n}(r)dr\right]+\sum_{t=1}^{n}u_{t}+nx_{0}\] \[=n^{3/2}\int_{0}^{1}X_{n}(r)dr+\sum_{t=1}^{n}u_{t}+nx_{0}\] Hence, \[\frac{1}{n^{3/2}}\sum_{t=1}^{n}x_{t} =\int_{0}^{1}X_{n}(r)dr+\frac{1}{n^{3/2}}\left(\sum_{t=1}^{n}u_{t}+nx_{0}\right)=\int_{0}^{1}X_{n}(r)dr+o_{P}(1)\] \[\Rightarrow\int_{0}^{1}B(r)dr\ \ \ \text{where}\ \ B(r)\equiv BM(\omega^{2}).\] Therefore, it holds that \[\frac{1}{n^{3/2}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t}\Rightarrow\int_{0}^{s}B(r)dr. \tag{6.5}\] **Example 8**.: Consider the LUR autoregression \[x_{t}=\rho_{n}x_{t-1}+u_{t},\ \ x_{0}=0,c>0,\gamma\in(0,1), \tag{6.6}\] where \(\rho_{n}:=\left(1-\frac{c}{n^{\gamma}}\right)\). Then, we obtain that \[x_{t-1}=\rho_{n}^{t-1}\sum_{j=1}^{t-1}\rho_{n}^{-j}u_{j}\equiv\left(1-\frac{c}{n^{\gamma}}\right)^{t-1}\sum_{j=1}^{t-1}\left(1-\frac{c}{n^{\gamma}}\right)^{-j}u_{j}, \tag{6.7}\] where \(c>0\) and \(\gamma\in(0,1)\). Thus, we need to show that \[\frac{1}{n^{3/2}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t-1}\overset{d}{\to}\int_{0}^{s}J_{c}(r)dr \tag{6.8}\] Proof.: We have that \[\sum_{t=1}^{n}x_{t-1} =\sum_{t=1}^{n}\left\{\left(1-\frac{c}{n^{\gamma}}\right)^{t-1}\sum_{j=1}^{t-1}\left(1-\frac{c}{n^{\gamma}}\right)^{-j}u_{j}\right\}=\sum_{t=1}^{n}\left\{\left(1-\frac{c}{n^{\gamma}}\right)^{-1}\sum_{j=1}^{t-1}\left(1-\frac{c}{n^{\gamma}}\right)^{t-j}u_{j}\right\}\] \[=\left(1-\frac{c}{n^{\gamma}}\right)^{-1}\sum_{t=1}^{n}\left\{\sum_{j=1}^{t-1}\left(1-\frac{c}{n^{\gamma}}\right)^{t-j}u_{j}\right\}\] Next, consider the boundaries on the above expression such that \(t\in\{1,...,\lfloor ns\rfloor\}\), which gives \[\frac{1}{n}\left(1-\frac{c}{n^{\gamma}}\right)^{-1}\frac{1}{\sqrt{n}}\sum_{t=1}^{\lfloor ns\rfloor}\left\{\sum_{j=1}^{t-1}\left(1-\frac{c}{n^{\gamma}}\right)^{t-j}u_{j}\right\},\ \ \text{for}\ s\in[0,1], \tag{6.9}\] Therefore, as \(n\to\infty\) we obtain the limit: \[\frac{1}{n^{3/2}}\sum_{t=1}^{\lfloor ns\rfloor}x_{t-1}\overset{d}{\to}\int_{0}^{s}J_{c}(r)dr \tag{6.10}\] **Example 9** (see, Kejriwal et al.
(2020)).: Consider testing the null hypothesis, \(H_{0}:\alpha_{i}=1\), for all \(i\), then regression model can be written as \[\Delta y_{t}=c_{i}+(\alpha_{i}-1)y_{t-1}+\sum_{j=1}^{p-1}\Upsilon_{j}\Delta y _{t-j}+e_{t}^{*},\ \ \ c_{i}=(1-\alpha_{i})\big{[}u_{n_{i-1}}^{0}+\mu_{i}\big{]}.\] ### Appendix of the paper #### 6.2.1 Proof of Lemma 1 Using the weak law of large numbers in Andrews (1988, Theorem 2) we have that \[\sup_{\pi_{1}\leq\tau\leq\pi_{0}}\left|\hat{\beta}_{1}(\uptau)-\beta_{1}\right|= \sup_{\pi_{1}\leq\tau\leq\pi_{0}}\left|\frac{\sum_{t=1}^{\lfloor n \uptau\rfloor}y_{t-1}\epsilon_{t}}{\sum_{t=1}^{\lfloor n \uptau\rfloor}y_{t-1}^{2}}\right|\leq\frac{n}{\sum_{t=1}^{\lfloor n \uptau_{1}\rfloor}y_{t-1}^{2}}\sup_{\pi_{1}\leq\tau\leq\tau_{0}}\left|\frac{ \sum_{t=1}^{\lfloor n\uptau_{1}\rfloor}y_{t-1}\epsilon_{t}}{n}\right|=\mathcal{ O}_{p}(1)o_{p}(1)=o_{p}(1).\] Moreover, we have that \[\hat{\beta}_{2}(\uptau)-\beta_{1}=\frac{\sum_{t=\lfloor n\uptau \rfloor+1}^{n}y_{t-1}^{2}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}} \left(\beta_{2}-\beta_{1}\right)+\frac{\sum_{t=\lfloor n \uptau\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\sum_{t=\lfloor n \uptau\rfloor+1}^{n}y_{t-1}^{2}}, \tag{6.11}\] Proof.: We have that \[\hat{\beta}_{2}(\uptau)=\frac{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t}y_{t- 1}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}\equiv\frac{\sum_{t= \lfloor n\uptau\rfloor+1}^{n}\left\{\beta_{1}y_{t-1}I\left\{t\leq k_{0}\right\} +\beta_{2}y_{t-1}I\left\{t>k_{0}\right\}+\epsilon_{t}\right\}y_{t-1}}{\sum_{t =\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}} \tag{6.12}\] \[\hat{\beta}_{2}(\uptau)=\beta_{1}\frac{\sum_{t=\lfloor n\uptau \rfloor+1}^{n}y_{t-1}^{2}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}+ \frac{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}\beta_{2}y_{t-1}I\left\{t>k_{0} \right\}y_{t-1}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}+\frac{\sum_ {t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\sum_{t=\lfloor n\uptau \rfloor+1}^{n}y_{t-1}^{2}}, \tag{6.13}\] Therefore, \[\hat{\beta}_{2}(\uptau)=\beta_{1}\frac{\sum_{t=\lfloor n\uptau \rfloor+1}^{n}y_{t-1}^{2}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}+ \frac{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}{\sum_{t=\lfloor n\uptau \rfloor+1}^{n}y_{t-1}^{2}}\beta_{2}+\frac{\sum_{t=\lfloor n\uptau\rfloor+1}^{ n}y_{t-1}\epsilon_{t}}{\sum_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}, \tag{6.14}\] and \[\hat{\beta}_{2}(\uptau)-\beta_{1} =\beta_{1}\frac{\sum\limits_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t- 1}^{2}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}+\frac{\sum \limits_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}}{\sum \limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}\beta_{2}+\frac{\sum \limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\sum \limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}-\beta_{1},\] \[=\beta_{1}\left(\frac{\sum\limits_{t=\lfloor n\uptau_{0}\rfloor+1 }^{n}y_{t-1}^{2}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}-1 \right)+\frac{\sum\limits_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{2}}{ \sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}\beta_{2}+\frac{\sum \limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{\sum \limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}} \tag{6.15}\] \[=\frac{\sum\limits_{t=\lfloor n\uptau_{0}\rfloor+1}^{n}y_{t-1}^{ 2}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}\big{(}\beta_{2}- \beta_{1}\big{)}+\frac{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1} 
\epsilon_{t}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}\] Note that similarly, we also have that \[\hat{\beta}_{1}(\uptau)-\beta_{1}=\frac{\sum\limits_{t=\lfloor n \uptau_{0}\rfloor+1}^{\lfloor n\uptau\rfloor}y_{t-1}^{2}}{\sum \limits_{t=1}^{\lfloor n\uptau\rfloor}y_{t-1}^{2}}\big{(}\beta_{2}-\beta_{1} \big{)}+\frac{\sum\limits_{t=1}^{\lfloor n\uptau\rfloor}y_{t-1}\epsilon_{t}}{ \sum\limits_{t=1}^{\lfloor n\uptau\rfloor}y_{t-1}^{2}}, \tag{6.16}\] Next consider adding and subtracting the follow term on the expressions above \[\left(\beta_{2}-\beta_{1}\right)\frac{n\frac{(1-\uptau_{0}) \sigma^{2}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}}{ \sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}, \tag{6.17}\] We obtain \[\left[\hat{\beta}_{2}(\uptau)-\beta_{1}\right]+\left(\beta_{2}- \beta_{1}\right)\frac{n\frac{(1-\uptau_{0})\sigma^{2}}{\frac{1- \beta_{2}^{2}}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}}}-( \beta_{2}-\beta_{1})}{\sum\limits_{t=\lfloor n\uptau\rfloor+1}^{n}y_{t-1}^{2}} \tag{6.18}\] and \[RHS\equiv\frac{\sum_{t=\lfloor n\tau_{0}\rfloor+1}^{n}y_{t-1}^{2}}{ \sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}\Big{(}\beta_{2}- \beta_{1}\Big{)}+\frac{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t- 1}^{2}}{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}+(\beta_{2}- \beta_{1})\,\frac{n\frac{(1-\tau_{0})\sigma^{2}}{1-\beta_{2}^{2}}}{\sum_{t= \lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}-(\beta_{2}-\beta_{1})\,\frac{n\frac{(1 -\tau_{0})\sigma^{2}}{1-\beta_{2}^{2}}}{\sum_{t=\lfloor n \tau\rfloor+1}^{n}y_{t-1}^{2}}\] \[=\frac{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}\epsilon_{t}}{ \sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}+\Big{(}\beta_{2}- \beta_{1}\Big{)}\left(\frac{\sum_{t=\lfloor n\tau_{0}\rfloor+1}^{ n}y_{t-1}^{2}}{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}-\frac{n\frac{(1- \tau_{0})\sigma^{2}}{1-\beta_{2}^{2}}}{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{ t-1}^{2}}\right)+(\beta_{2}-\beta_{1})\,\frac{n(1-\tau_{0})\sigma^{2}}{ 1-\beta_{2}^{2}}\] #### 6.2.2 Derivations of Equations 6 and 7 The asymptotic behaviour of the residual sum of squares is as following \[\sup_{\tau_{1}\leq\tau\leq\tau_{0}}\left|\frac{1}{n}\right.\,RSS_{n}(\pi)- \sigma^{2}-\frac{(\tau_{0}-\tau)(1-\tau_{0})(\beta_{2}-\beta_{1})^{2}\sigma^ {2}}{(\tau_{0}-\tau)(1-\beta_{2}^{2})+(1-\tau_{0})(1-\beta_{1}^{2})}\right|=o _{p}(1), \tag{6.19}\] \[\sup_{\tau_{0}\leq\tau\leq\tau_{2}}\left|\frac{1}{n}\right.\,RSS_{n}(\pi)- \sigma^{2}-\frac{\tau_{0}(\tau-\tau_{0})(\beta_{2}-\beta_{1})^{2}\sigma^{2}} {\tau_{0}(1-\beta_{2}^{2})+(\tau-\tau_{0})(1-\beta_{1}^{2})}\right|=o_{p}(1), \tag{6.20}\] Proof of Expression (6.19):For \(\tau_{1}\leq\tau\leq\tau_{0}\), we can write the residual sum of squares as: \[RSS_{n}(\tau) =\sum_{t=1}^{\lfloor n\tau\rfloor}\left(\epsilon_{t}-\left(\hat{ \beta}_{1}(\tau)-\beta_{1}\right)y_{t-1}\right)^{2}+\sum_{t=\lfloor n\tau \rfloor+1}^{\lfloor n\tau_{0}\rfloor}\left(\epsilon_{t}-\left(\hat{\beta}_{2}( \tau)-\beta_{1}\right)y_{t-1}\right)^{2}\] \[+\sum_{t=\lfloor n\tau_{0}\rfloor+1}^{n}\left(\epsilon_{t}-\left( \hat{\beta}_{2}(\tau)-\beta_{2}\right)y_{t-1}\right)^{2}\] \[=\sum_{t=1}^{n}\epsilon_{t}^{2}-\frac{\left(\sum_{t=1}^{\lfloor n \tau\rfloor}y_{t-1}\epsilon_{t}\right)^{2}}{\sum_{t=1}^{\lfloor n\tau \rfloor}y_{t-1}^{2}}-2\left(\hat{\beta}_{2}(\tau)-\beta_{1}\right)\sum_{t= \lfloor n\tau\rfloor+1}^{\lfloor n\tau_{0}\rfloor}y_{t-1}\epsilon_{t}\] \[+\left(\hat{\beta}_{2}(\tau)-\beta_{2}\right)^{2}\sum_{t=\lfloor n \tau_{0}\rfloor+1}^{n}y_{t-1}^{2}.\] Proof of 
Expression (6.20):For \(\tau_{0}<\tau\leq\tau_{2}\), we can write the residual sum of squares as \[RSS_{n}(\tau)=\sum_{t=1}^{\lfloor n\tau_{0}\rfloor}\left(\epsilon_{t}-\left(\hat{ \beta}_{1}(\tau)-\beta_{1}\right)y_{t-1}\right)^{2}+\sum_{t=\lfloor n\tau_{0} \rfloor+1}^{\lfloor n\tau\rfloor}\left(\epsilon_{t}-\left(\hat{\beta}_{1}( \tau)-\beta_{2}\right)y_{t-1}\right)^{2}\] \[+\sum_{t=\lfloor n\tau\rfloor+1}^{n}\left(\epsilon_{t}-\left(\hat{\beta}_{2}( \tau)-\beta_{2}\right)y_{t-1}\right)^{2}\] \[=\sum_{t=1}^{n}\epsilon_{t}^{2}-2\left(\hat{\beta}_{1}(\tau)-\beta_{1}\right) \sum_{t=1}^{\lfloor n\tau_{0}\rfloor}y_{t-1}\epsilon_{t}+\left(\hat{\beta}_{1 }(\tau)-\beta_{1}\right)^{2}\sum_{t=1}^{\lfloor n\tau_{0}\rfloor}y_{t-1}^{2}\] \[-2\left(\hat{\beta}_{1}(\tau)-\beta_{2}\right)\sum_{t=\lfloor n\tau_{0} \rfloor+1}^{\lfloor n\tau\rfloor}y_{t-1}\epsilon_{t}+\left(\hat{\beta}_{1}( \tau)-\beta_{2}\right)^{2}\sum_{t=\lfloor n\tau_{0}\rfloor+1}^{\lfloor n\tau \rfloor}y_{t-1}^{2}\] \[-2\left(\hat{\beta}_{2}(\tau)-\beta_{2}\right)\sum_{t=\lfloor n\tau\rfloor+1}^ {n}y_{t-1}\epsilon_{t}+\left(\hat{\beta}_{2}(\tau)-\beta_{2}\right)^{2}\sum_{ t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}.\] After simplification of the last two terms we obtain: \[RSS_{n}(\tau)=\sum_{t=1}^{n}\epsilon_{t}^{2}-2\left(\hat{\beta}_{1}(\tau)- \beta_{1}\right)\sum_{t=1}^{\lfloor n\tau_{0}\rfloor}y_{t-1}\epsilon_{t}+ \left(\hat{\beta}_{1}(\tau)-\beta_{1}\right)^{2}\sum_{t=1}^{\lfloor n\tau_{0 }\rfloor}y_{t-1}^{2}\] \[-2\left(\hat{\beta}_{1}(\tau)-\beta_{2}\right)\sum_{t=\lfloor n\tau_{0} \rfloor+1}^{\lfloor n\tau\rfloor}y_{t-1}\epsilon_{t}+\left(\hat{\beta}_{1}( \tau)-\beta_{2}\right)^{2}\sum_{t=\lfloor n\tau_{0}\rfloor+1}^{\lfloor n\tau \rfloor}y_{t-1}^{2}\] \[-\frac{\left(\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}\epsilon_{t}\right)^{2 }}{\sum_{t=\lfloor n\tau\rfloor+1}^{n}y_{t-1}^{2}}.\]
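The residual sum of squares analysed above underlies the least-squares break-point estimator. The following minimal sketch computes \(RSS_{n}(\tau)\) on a trimmed grid of candidate break dates for a simulated AR(1) with a single coefficient break and returns the minimiser; sample size, coefficients and trimming are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the least-squares break-point estimator whose RSS_n(tau) behaviour
# is derived in (6.19)-(6.20): fit the AR(1) slope separately on the two subsamples for
# each candidate break date and minimise the residual sum of squares over a trimmed grid.
rng = np.random.default_rng(2)

n, tau0 = 1500, 0.5
beta1, beta2 = 1.0, 0.7                      # pre- and post-break AR(1) coefficients
k0 = int(tau0 * n)
eps = rng.standard_normal(n + 1)
y = np.zeros(n + 1)
for t in range(1, n + 1):
    y[t] = (beta1 if t <= k0 else beta2) * y[t - 1] + eps[t]

def rss_split(y, k):
    """RSS_n at candidate break date k: separate OLS AR(1) fits on {1,...,k} and {k+1,...,n}."""
    def seg_rss(a, b):                       # fit y_t = beta * y_{t-1} + e_t for t in (a, b]
        yy, yl = y[a + 1:b + 1], y[a:b]
        bhat = np.dot(yy, yl) / np.dot(yl, yl)
        return np.sum((yy - bhat * yl) ** 2)
    return seg_rss(0, k) + seg_rss(k, n)

grid = range(int(0.15 * n), int(0.85 * n))   # trimmed set of candidate break dates
k_hat = min(grid, key=lambda k: rss_split(y, k))
print(k_hat / n)                             # estimated break fraction, close to tau0
```

The same routine applied sequentially to the subsamples split by the first estimated break recovers a second break date, in line with the sequential strategy discussed earlier.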
2307.15539
Beating Backdoor Attack at Its Own Game
Deep neural networks (DNNs) are vulnerable to backdoor attack, which does not affect the network's performance on clean data but would manipulate the network behavior once a trigger pattern is added. Existing defense methods have greatly reduced attack success rate, but their prediction accuracy on clean data still lags behind a clean model by a large margin. Inspired by the stealthiness and effectiveness of backdoor attack, we propose a simple but highly effective defense framework which injects non-adversarial backdoors targeting poisoned samples. Following the general steps in backdoor attack, we detect a small set of suspected samples and then apply a poisoning strategy to them. The non-adversarial backdoor, once triggered, suppresses the attacker's backdoor on poisoned data, but has limited influence on clean data. The defense can be carried out during data preprocessing, without any modification to the standard end-to-end training pipeline. We conduct extensive experiments on multiple benchmarks with different architectures and representative attacks. Results demonstrate that our method achieves state-of-the-art defense effectiveness with by far the lowest performance drop on clean data. Considering the surprising defense ability displayed by our framework, we call for more attention to utilizing backdoor for backdoor defense. Code is available at https://github.com/damianliumin/non-adversarial_backdoor.
Min Liu, Alberto Sangiovanni-Vincentelli, Xiangyu Yue
2023-07-28T13:07:42Z
http://arxiv.org/abs/2307.15539v3
# Beating Backdoor Attack at Its Own Game ###### Abstract Deep neural networks (DNNs) are vulnerable to backdoor attack, which does not affect the network's performance on clean data but would manipulate the network behavior once a trigger pattern is added. Existing defense methods have greatly reduced attack success rate, but their prediction accuracy on clean data still lags behind a clean model by a large margin. Inspired by the stealthiness and effectiveness of backdoor attack, we propose a simple but highly effective defense framework which injects non-adversarial backdoors targeting poisoned samples. Following the general steps in backdoor attack, we detect a small set of suspected samples and then apply a poisoning strategy to them. The non-adversarial backdoor, once triggered, suppresses the attacker's backdoor on poisoned data, but has limited influence on clean data. The defense can be carried out during data preprocessing, without any modification to the standard end-to-end training pipeline. We conduct extensive experiments on multiple benchmarks with different architectures and representative attacks. Results demonstrate that our method achieves state-of-the-art defense effectiveness with by far the lowest performance drop on clean data. Considering the surprising defense ability displayed by our framework, we call for more attention to utilizing backdoor for backdoor defense. Code is available at [https://github.com/damianiumin/non-adversarial_backdoor](https://github.com/damianiumin/non-adversarial_backdoor). ## 1 Introduction In recent years, deep neural networks (DNNs) have achieved impressive performance across tasks, such as object detection [36, 33], speech recognition [46, 2] and machine translation [37, 41]. With the increasing usage of DNNs, security of neural networks has attracted a lot of attention. Studies have shown that DNNs are especially vulnerable to backdoor attack [43], a variant of data poisoning which fools the model to establish a false correlation between inserted patterns and target classes. Specifically, the adversary injects a trigger pattern to a small proportion of the training data. A network trained on the poisoned data has normal behavior on benign data, but deviates from its expected output when the trigger pattern is implanted. To ensure the security of DNN systems, a lot of novel defense methods have been proposed in the past few years. Most of the defense methods try to either 1) avoid learning the backdoor during training or 2) erase it from a poisoned model at the end. Following idea 1), some studies detect and filter poisoned samples [38, 6]. Since a small number of poisoned samples slipping from detection can lead to a successful attack, simply filtering the potentially poisoned samples is not enough in most cases. A more realistic way is to adopt data separation as an intermediate procedure [23, 14]. Some other works pre-process the input to depress the effectiveness of injected patterns [31, 13]. However, these methods have limited effects under the increasingly diverse attacking strategies. Another line of work follows idea 2) [24, 44]. Despite promising defense effectiveness, Figure 1: Representations under the effect of adversarial backdoor (AB) and non-adversarial backdoor (NAB), which are injected by attackers and defenders respectively. “Stamp” is the trigger pattern for NAB. (a) Clean samples are not influenced by backdoor. (b) AB changes model behavior on poisoned samples. (c) NAB is not triggered on clean samples. 
(d) NAB suppresses the effectiveness of AB on poisoned samples. erasing-based methods suffer from performance drop due to the additional erasing stage. Performance on clean data still lags behind a clean model by a large margin. Reducing the performance gap on clean data while maintaining satisfying defense effectiveness remains a challenging problem. Under backdoor attack, representations of poisoned samples are dominated by the trigger pattern as shown in Fig. 1. Therefore, injecting the pattern can force a poisoned model to behave in a way expected by the attacker. Considering the effectiveness of such strategies, a natural question is whether backdoor can be utilized for defense purpose, that is to say _beating backdoor attack at its own game_. To be more specific, a model might misbehave when only the trigger pattern is exposed, but the misbehavior should be suppressed once a benign pattern, which is called a _stamp_ in this paper, is injected to the poisoned sample. There are three advantages behind this idea. _First_, the defender only needs a small set of poisoned training samples to inject a backdoor, which is a much easier requirement than filtering all the poisoned data. _Second_, a backdoor targeting poisoned data, ideally, will not influence the model performance on clean data. _Finally_, the backdoor can be injected during data pre-processing, without any modification to the standard end-to-end training pipeline. In this work, we propose a novel defense framework, _Non-adversarial Backdoor (NAB)_, which suppresses backdoor attack by injecting a backdoor targeting poisoned samples. Specifically, we first detect a small set of suspected samples using existing methods such as [23, 14, 11]. Then we process these samples with a poisoning strategy, which consists of a stamping and a relabeling function. A pseudo label is generated for each detected sample and we stamp the samples with inconsistent orginal and pseudo labels. In this way, we insert a non-adversarial backdoor which, once triggered, is expected to change model behaviors on poisoned data. Furthermore, NAB can be augmented with an efficient test data filtering technique by comparing the predictions with or without the stamp, ensuring the performance on poisoned data. We instantiated the NAB framework and conducted experiments on CIFAR-10 [16] and tiny-ImageNet [17] over several representative backdoor attacks. Experiment results show that the method achieves state-of-the-art performance in both clean accuracy and defense effectiveness. Extensive analyses demonstrate how NAB takes effect under different scenarios. Our main contributions can be summarized as follows: * We propose the idea of backdooring poisoned samples to suppress backdoor attack. To the best of our knowledge, our work is the first to utilize non-adversarial backdoor in backdoor defense. * We transform the idea into a simple, flexible and effective defense framework, which can be easily augmented with a test data filtering technique. * Extensive experiments are conducted and our method achieves state-of-the-art defense effectiveness with by far the lowest performance drop on clean data. ## 2 Related Work Backdoor Attack.Backdoor attack is a type of attack involved in the training of DNNs, with the interesting property that the model works well on clean data but generates unexpected outputs once the attack is triggered. 
A main track of the attacks focuses on poisoning training data in an increasingly stealthier and more effective way by developing novel trigger patterns [15]. Attack methods for visual models, the mainstream of backdoor attack research, can be divided according to the visibility of patterns. Visible attacks inject human perceptible patterns like a single pixel [39], an explicit patch [10, 26], sample-specific patterns [30], or more complex and indistinguishable patterns like blending random noise [5] and sinusoidal strips [3]. Invisible attacks [50, 40, 34, 18, 29, 22] are even more stealthy to human observers. Backdoor attacks can also be categorized into dirty-label attacks [10, 26, 30] and clean-label attacks [3, 40]. Clean-label attacks are more difficult to detect since there lacks an obvious mismatch between the images and labels. We also notice some non-poisoning based methods which induce backdoor by modifying other training settings [19, 20] or the model weights [7, 9, 32]. Backdoor Defense.Existing backdoor defense methods aim to avoid learning the backdoor during training or erase the backdoor at the end. To avoid injection of the backdoor, various techniques detecting poisoned data have been proposed [39, 4, 38, 6, 11]. These methods alone cannot achieve successful defense when a fraction of poisoned samples escape from the detection. Instead of simply filtering all the poisoned samples, a more practical idea is to adopt data separation as an intermediate procedure. Some other works attempt to bypass the backdoor by pre-processing the input before passing it into the model [31, 8, 13], but these methods typically have limited effects over the increasingly various attacks. Meanwhile, erasing methods try to mitigate the effects of backdoor after the model gets attacked [23, 48, 24, 44]. [23] reduced attack success rate to a negligible level under several attacks, but the prediction accuracy on clean data still lags behind a well-trained clean model by a large margin. Our method adopts a data separation stage as in [23, 14]. Nevertheless, the core idea, injecting a backdoor for defense purpose, is similar to none of the previous defense methods. Non-Adversarial Backdoor.Non-adversarial applications of backdoor has been proposed before, including watermark-based authentication [1], protection of open-sourced datasets [25] and neural networks interpretability [49]. [35] also injected a backdoor to hide weaknesses in a model under adversarial attack [28]. However, our work is the first attempt to utilize backdoor in defense against backdoor attack. ## 3 Method ### Preliminary Threat Model.In this paper, we assume that the attacker has full control over the data source, is capable of arbitrarily changing the images and relabeling them with target classes, but does not have access to the model and training process. The defender has control over the model, training process, and data once obtained from the data source, but does not know the proportion and distribution of the poisoned samples, target classes, and attacking strategies. Given some partially poisoned data, the defender aims to train a model that preserves accuracy on clean data and avoids predicting the target class on poisoned data. Backdoor Attack.The attacker first obtains a clean training set \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{x}_{i}\in\mathcal{X}\) is an image and \(y_{i}\in\mathcal{Y}\) is the corresponding label. 
The poisoning strategy consists of two parts: \(\mathcal{P}:\mathcal{X}\rightarrow\mathcal{X}\) applies a trigger pattern to the image and \(c:\mathcal{Y}\rightarrow\mathcal{Y}\) replaces the label with target label. The attacker selects a subset of clean data and generates a set of malign samples \(\mathcal{D}_{m}=\{(\mathcal{P}(\mathbf{x}),c(y))\}\) accordingly using the poisoning strategy, where \(\lambda=|\mathcal{D}_{m}|/|\mathcal{D}|\) is the poisoning rate. Merging \(\mathcal{D}_{m}\) with the remaining clean training data, the attacker generates a poisoned dataset \(\mathcal{D}_{p}\) and releases it to potential victims. The empirical error on a poisoned dataset can be decomposed into a _clean loss_ and an _attack loss_[23]: \[\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}[\ell(f_{\theta}(\mathbf{x}),y)]+\mathbb{E }_{(\mathbf{x},y)\sim\mathcal{D}_{m}}[\ell(f_{\theta}(\mathbf{x}),y)] \tag{1}\] where \(\ell(\cdot)\) is the loss function and \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) is the neural network. Minimizing the first term above encourages the model to learn the image classification task while the second one forces the learning of a correlation between the trigger pattern and target class. Both tasks can be learned well due to the excessive learning ability of neural networks [21], making backdoor attack effective and hard to detect. ### Non-Adversarial Backdoor The success of backdoor attack leads us to think about the feasibility of utilizing backdoor for defense purpose. An attacker wants the model to classify a benign sample \(\mathbf{x}\) to the target class. In the same way, a defender wants the model to classify a poisoned sample \(\mathcal{P}(\mathbf{x})\) to any but the target class. The similarity between the objectives makes it a natural idea to apply backdoor in both attack and defense settings, while the latter was rarely explored. Based on the idea above, we propose a defense framework _Non-Adversarial Backdoor_ (NAB). In this section, we assume that a small set of suspected samples \(\mathcal{D}^{\prime}_{s}\subset\mathcal{D}_{p}\) and a poisoning strategy are available. The defender's poisoning strategy also has two components: 1) \(\mathcal{S}:\mathcal{X}\rightarrow\mathcal{X}\) applies a trigger pattern, which is called a _stamp_ to tell from the adversarial trigger pattern, and 2) \(r:\mathcal{X}\rightarrow\mathcal{Y}\) generates a pseudo label conditioned on the image. Details of backdoor detection and poisoning strategies are discussed in Sec. 3.3. We then generate a set of stamped samples \(\mathcal{D}^{\prime}_{m}=\{(\mathcal{S}(\mathbf{x}),r(\mathbf{x}))|(\mathbf{x},y)\in \mathcal{D}^{\prime}_{s}\wedge r(\mathbf{x})\neq y\}\). Note that the proportion of stamped samples is typically lower than the detection rate \(\mu=|\mathcal{D}^{\prime}_{s}|/|\mathcal{D}_{p}|\), as we avoid stamping samples whose labels remain unchanged to let the backdoor take effect. Merging \(\mathcal{D}^{\prime}_{m}\) with data that are not stamped, Figure 2: Overview of the proposed framework. The attacker injects an adversarial backdoor by selecting and poisoning a set of clean samples. After obtaining the dataset, the defender detect and poison a set of suspected samples to inject the non-adversarial backdoor. Both attack and defense take place in the standard end-to-end training pipeline. In the testing stage, we stamp each input to keep the non-adversarial backdoor triggered. 
We can also adopt a test data filtering technique by comparing the predictions with or without the stamp. Samples with inconsistent predictions are identified as poisoned. we obtain the processed dataset \(D^{\prime}_{p}\) for training. The defense framework can be implemented during data preprocessing, without any modification to the end-to-end training pipeline. During inference, we stamp all inputs for defense. In NAB, we further decompose the attack loss in Eq. (1) into the original attack loss and the _defense loss_: \[\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{m}}[\ell(f_{\theta}(\mathbf{x}),y)]+ \mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}^{\prime}_{m}}[\ell(f_{\theta}(\mathbf{x}),y)] \tag{2}\] Jointly optimizing the model using the three losses leaves two backdoors in the network: an adversarial backdoor triggered by \(\mathcal{P}(\cdot)\) and a non-adversarial backdoor triggered by \(\mathcal{S}(\mathcal{P}(\cdot))\). The non-adversarial one prevents a poisoned sample with stamp from being classified to the target class. Typically, \(D^{\prime}_{s}\) is a mixture of poisoned and clean samples due to mistakes of detection methods. When the detection accuracy is low, the non-adversarial backdoor might influence the performance on clean data. Further analysis is presented in Sec. 4.4. ### Backdoor Detection and Poisoning Strategy While attackers can select samples to poison randomly and simply label them with the target class, defenders need more deliberation on data selection and relabeling strategy. Backdoor Detection.To create a backdoor targeting poisoned data, we detect a set of suspicious samples \(\mathcal{D}^{\prime}_{s}\) from \(\mathcal{D}_{p}\) with ratio \(\mu\). _Detection accuracy_ is the ratio of poisoned samples in \(\mathcal{D}^{\prime}_{s}\). Different from detection-based defenses that aims at filtering all the poisoned samples, \(\mu\) is typically smaller than the poisoning rate \(\lambda\) in NAB as we only need part of the poisoned data for backdoor injection. Poisoning Strategy.The stamping function \(\mathcal{S}(\cdot)\) is less important as long as it is perceptible to neural networks. We care more about the relabeling function \(r(\cdot)\) which generates pseudo labels to approximate true labels. Although randomly generated pseudo labels suffices to create the non-adversarial backdoor, a higher _pseudo label accuracy_ can help preserve the performance on clean data when the detection method malfunctions. Many existing or naive methods can fulfill the relabeling and simplified backdoor detection tasks effectively [11, 23, 14]. Nevertheless, the NAB framework is independent of any specific detection method or poisoning strategy. As the chasing game between backdoor attack and defense goes on, stronger methods are likely to show up in the future, and NAB can be easily instantiated with the latest techniques. The flexibility and portability ensures the long-term value of our framework. ### Test Data filtering A wide range of previous works consider minimizing the attack success rate as their only goal on poisoned samples. However, misclassification might still happen even if the image is not classified to the target class, bringing about unintended consequences. If we add a requirement to the threat model that all poisoned test samples should be either identified or correctly classified, the defense effectiveness of some existing methods will be less satisfying. An additional benefit of NAB is that it can be easily augmented with a test data filtering technique. 
Ideally, the prediction results on a clean sample \(\mathbf{x}\) and its stamped version \(\mathcal{S}(\mathbf{x})\) are both the true label \(y\). However, the model tends to predict \(c(y)\) on \(\mathcal{P}(\mathbf{x})\) and \(r(\mathcal{P}(\mathbf{x}))\) on \(\mathcal{S}(\mathcal{P}(\mathbf{x}))\), which are expected to be different due to the defender's backdoor. Based on the observation above, we identify samples with \(f_{\theta}(\mathbf{x})\neq f_{\theta}(\mathcal{S}(\mathbf{x}))\) as poisoned and reject them during inference. In this way, the augmented NAB can handle poisoned data appropriately with a high accuracy. ## 4 Experiments ### Experiment Settings Attack.Experiments are conducted under 5 representative backdoor attacks, including two classical attacks: BadNets attack (patch-based) [10] and Blend attack (blending-based) [5], two advanced attacks: Dynamic attack (sample-specific) [30] and WaNet attack (invisible) [29], and one label-consistent attack: Clean-Label attack [40]. We follow the configurations suggested in the original papers, including the trigger patterns and trigger sizes. Performance of the attacks are evaluated on two datasets: CIFAR-10 (10 classes, 50k samples) [16] and tiny-ImageNet (200 classes, 100k samples) [17]. Dynamic attack and Clean-Label attack (CL) are omitted on tiny-ImageNet for failure of reproduction. The target label is set to 0 for both datasets. We set the poisoning rate \(\lambda=0.1\) for the first four attacks, and \(\lambda=0.25\) (2.5% of the whole training set) for CL. Defense and Training.We instantiate the NAB framework with 3 backdoor detection techniques and 2 relabeling strategies throughout our experiments. The detection rate \(\mu\) is set to \(0.05\) for the following methods: * **Local Gradient Ascent (LGA)**[23]: Train with a tailored loss function in early epochs and isolate samples with lower training losses. * **Label-Noise Learning (LN)**[14]: Train a classier appended to a self-supervised learning (SSL) pertained feature extractor with label-noise learning [42], and capture low-credible samples. * **SPECTRE**[11]: Use robust covariance estimation to amplify the spectral signature of poisoned data and detect them with QUantum Entropy (QUE) scores. In our poisoning strategy, \(\mathcal{S}(\cdot)\) simply applies a \(2\times 2\) patch with value 0 on the upper left corner of the samples. We adopt the following strategies for pseudo label generation: * **Verified Data (VD)**: Train a label predictor with supervised learning on a small collection of verified data (5% of the training set as assumed in [24]). * **Nearest-Center (NC)**: Obtain representations using a SSL-pretrained model and assign pseudo labels according to the nearest center. We _do not assume_ that the methods above are state-of-the-art. They are chosen for their simplicity and can be safely replaced with comparable or stronger methods. LN and NC are introduced because one of our baselines [14] relies on a SSL stage. Experiments are conducted on ResNet-18 (by default) and ResNet-50 [12]. We train the models for 100 epochs with three data augmentations: random crop, horizontal flipping and cutout. The optimizer is Stochastic Gradient Descent (SGD) with momentum 0.9. Learning rate is set to 0.1 and decays with the cosine decay schedule [27]. More details are presented in the supplementary material. 
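For concreteness, the following PyTorch-style sketch illustrates the stamping function \(\mathcal{S}(\cdot)\), the nearest-center (NC) relabeling and the test-time filtering step described in Sec. 3.4. The `model`, `encoder` and `centers` objects are placeholders for the classifier, the SSL feature extractor and per-class feature means; this is a simplified illustration under those assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch

# Simplified sketch of the NAB ingredients; `model`, `encoder` and `centers`
# are assumed to exist and are not defined here.

def stamp(x):
    """S(.): place a 2x2 patch with value 0 at the upper-left corner of each image."""
    x = x.clone()
    x[..., :2, :2] = 0.0
    return x

@torch.no_grad()
def nearest_center_labels(feats, centers):
    """NC relabeling: assign each detected sample to the nearest class center in feature space."""
    return torch.cdist(feats, centers).argmin(dim=1)

@torch.no_grad()
def build_nab_subset(detected_x, detected_y, encoder, centers):
    """Stamp only those detected samples whose pseudo label differs from their current label."""
    pseudo = nearest_center_labels(encoder(detected_x), centers)
    keep = pseudo != detected_y
    return stamp(detected_x[keep]), pseudo[keep]

@torch.no_grad()
def filtered_predict(model, x):
    """Inference: predict on stamped inputs, reject samples whose prediction changes with the stamp."""
    pred_plain = model(x).argmax(dim=1)
    pred_stamp = model(stamp(x)).argmax(dim=1)
    reject = pred_plain != pred_stamp        # flagged as (likely) poisoned
    return pred_stamp, reject
```

During training, the stamped-and-relabeled subset simply replaces the corresponding entries of the poisoned dataset, so the standard end-to-end pipeline described above is left untouched.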
Baselines.We compare our method with 3 state-of-the-art defense methods: 1) Neural Attention Distillation (NAD) [24] uses 5% of clean training data to fine-tune a student network under the guidance of a teacher model. We use the same set of verified data for NAD and the relabeling strategy VD. 2) Anti-Backdoor Learning (ABL) [23] unlearns the backdoor using a small set of isolated data. Note that an additional fine-tuning stage is added before backdoor unlearning to improve clean accuracy for fair comparison. 3) Decoupling-Based backdoor Defense (DBD) [14] divides the training pipeline into a three stages to prevent learning the backdoor. Despite its impressive performance under some attacks, we find that DBD fails when the poisoned samples are clustered after the self-supervised learning stage under some attacks (_e.g_. Dynamic attack [30]). Besides, DBD was tested without applying trigger patterns to the target class, but its performance drops when the constraint is removed. We leave a detailed discussion of the weaknesses in the supplementary material, and provide a separate comparison with our method following their original settings except for the poisoning rate. Metrics.We adopt two widely used metrics for the main results: attack success rate (ASR, ratio of poisoned samples mistakenly classified to the target class) and Clean Accuracy (CA, ratio of correctly predicted clean samples). To test the effectiveness of our data filtering method, we further introduce backdoor accuracy (BA, ratio of correctly predicted backdoor samples), ratio of rejected clean data (C-REJ), prediction success rate (PSR, ratio of correctly predicted _and not_ rejected clean samples), ratio of rejected poisoned data (B-REJ), and defense success rate (DSR, ratio of correctly predicted _or_ rejected poisoned samples) ### Main Results Backdoor Detection and Pseudo Labels.LGA is adopted for backdoor detection in this section. As shown in Fig. 3, detection accuracy approaches its maximal value in most cases. However, the method has less satisfying performance under WaNet attack, which adopts a noise mode to escape detection. We generate pseudo labels with VD and NC. Fig. 3 shows that the latter method based on self-supervised learning generates pseudo labels of higher quality. In some cases, NC achieves very high accuracy on detected samples. We attribute this partially to the fact that LGA prefers to isolate images whose losses drop faster. These samples have more salient class features and are thus closer to the corresponding centers. On tiny-ImageNet, detection accuracy approaches 100%, but the label accuracy of VD is 46.64%, 38.90%, 42.34% for BadNets, Blend, WaNet respectively, posing a greater challenge than CIFAR-10. Comparison with NAD and ABL.As shown in Tab. 1 and Tab. 2, NAB outperforms NAD and ABL by a large margin across all the settings in terms of clean accuracy. CA of our method is lower under WaNet and CL attack than other attacks because the a large number of clean samples are incorporated into the detection data \(\mathcal{D}^{\prime}_{s}\), but still higher than that of the baseline defenses. Our method also has outstanding performance in terms of attack success rate. It obtains the lowest ASR in most cases. On tiny-ImageNet, however, ABL achieves a significantly low ASR. We suspect that the diverse classes of the dataset help the unlearning stage of ABL identify backdoor features more precisely. 
We also find that results of our method are even better on ResNet-50, achieving a much lower ASR and a clean accuracy comparable to _no defense_ in some cases. Model capacity might benefit the injection of an additional backdoor. In summary, our method suppresses the attacker's backdoor effectively while having limited influence on clean accuracy. By simply poisoning part of the training data, our method achieves state-of-the-art defense performance. Comparison with DBD.DBD adopts self-supervised learning (SSL) in its first training stage, and part of the Figure 3: Detection and pseudo label accuracy on CIFAR-10. The maximal detection accuracy for CL attack is \(\min(\frac{\lambda}{\mu},1)=0.5\). Pseudo label accuracy is calculated on LGA detected samples. impressive performance on clean accuracy comes from the extra training epochs. We also use the SSL pretrained network in pseudo label prediction (NC) and model initialization for fair comparison. As shown in Tab. 3, our method outperforms DBD by a large margin in both ASR and CA. NAB achieves better results even without SSL pre-training. We also find that higher pseudo label accuracy obtained by NC helps reduce the performance drop on clean data. More analyses of this factor are presented in Sec. 4.4. ### Effectiveness of Data Filtering We validate the effectiveness of data filtering and present the results in Tab. 4. For all the defenses listed, accuracy on poisoned data lags behind that on clean data. The gap is especially obvious for NAB since the model learns to predict a stamped sample to its pseudo label which is typically not very accurate. Augmenting NAB with data filtering provides a remedy for this. We find that the defense success rate reaches over 99% in all cases, suggesting that the filtering technique identifies most poisoned samples and those escaping from detection are typically correctly classified. Part of the clean samples are also rejected, but a significant performance drop is not observed. This is because most of the rejected clean samples are also misclassified. ### Further Analyses Detection and Pseudo Label Accuracy.Backdoor detection and pseudo label generation are two major components influencing the performance of our method. Analyses on them helps understand the robustness of NAB and can guide the selection of specific detection and relabeling strategies. Therefore, we test NAB under different detection accuracy (DA) and pseudo label accuracy (PLA), and present the results in Fig. 4. 
The following patterns can be observed: 1) Clean accuracy (CA) relies on both DA and PLA, and NAB can preserve a high CA when either DA or PLA reaches a \begin{table} \begin{tabular}{c|c c c c c c c|c c c c c c c c} \hline \hline \multicolumn{1}{c|}{**Arch**} & \multicolumn{8}{c|}{**ResNet-18**} & \multicolumn{8}{c}{**ResNet-50**} \\ \hline \hline **Defense** & **No Defense** & **NAD** & **ABL** & **NAB (Ours)** & **No Defense** & **NAD** & **ABL** & **NAB (Ours)** \\ \hline **Attack \(\downarrow\)** & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR & CA & ASR \\ \hline BadNets & 93.99 & 100 & 89.09 & 2.04 & 91.85 & **0.26** & **93.26** & 0.93 & 94.09 & 99.52 & 89.97 & 1.29 & 92.80 & 0.50 & **93.44** & **0.00** \\ Blend & 94.09 & 100 & 89.29 & 1.22 & 89.87 & 1.62 & **93.18** & **0.29** & 94.26 & 99.98 & 90.04 & 1.03 & 88.11 & 1.41 & **94.34** & **0.09** \\ Dynamic & 94.29 & 99.99 & 89.11 & 10.28 & 91.64 & 1.74 & **93.75** & **0.24** & 94.00 & 99.98 & 89.80 & 4.53 & 92.50 & 1.30 & **94.23** & **0.12** \\ WaNet & 93.06 & 97.53 & 88.52 & 1.31 & 89.57 & 9.11 & **90.36** & **0.67** & 93.19 & 97.02 & 89.90 & 1.64 & 88.39 & 4.11 & **91.54** & **0.34** \\ CL & 94.66 & 99.73 & 88.97 & 4.63 & 87.27 & 0.61 & **91.63** & **0.48** & 94.70 & 91.58 & 90.16 & 4.22 & 88.24 & 0.98 & **91.50** & **0.40** \\ \hline **Average** & 94.02 & 99.45 & 89.00 & 3.90 & 90.04 & 2.67 & **92.44** & **0.52** & 94.05 & 97.62 & 89.97 & 2.54 & 90.00 & 1.66 & **93.01** & **0.19** \\ \hline \hline \end{tabular} \end{table} Table 1: Attack success rate (%) and clean accuracy (%) of NAD, ABL and our proposed method against 5 attacks over ResNet-18 and ResNet-50. The benchmark is CIFAR-10. We bold the best defense results under each attack. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline **Defense** & **NAD** & **ABL** & **NAB** & **NAB + filtering** \\ \hline **Attack \(\downarrow\)** & \multicolumn{2}{c|}{BA} & \multicolumn{2}{c}{C-REJ} & \multicolumn{1}{c}{PSR} & \multicolumn{1}{c}{B-REJ} & \multicolumn{1}{c}{DSR} \\ \hline BadNets & 87.66 & 91.69 & 72.10 & 2.83 & 92.49 & 98.61 & 99.14 \\ Blend & 85.26 & 89.64 & 72.47 & 1.54 & 92.79 & 97.17 & 99.68 \\ Dynamic & 66.75 & 85.47 & 65.38 & 1.71 & 93.26 & 96.94 & 99.67 \\ WaNet & 86.51 & 81.48 & 79.51 & 6.47 & 89.04 & 89.30 & 99.15 \\ CL & 86.06 & 86.02 & 75.28 & 4.41 & 90.95 & 89.83 & 99.33 \\ \hline **Average** & 82.45 & 86.86 & 72.95 & 3.39 & 91.71 & 94.37 & 99.39 \\ \hline \hline \end{tabular} \end{table} Table 4: Backdoor accuracy (%) and effectiveness (%) of data filtering on CIFAR-10, ResNet-18. \begin{table} \begin{tabular}{c|c c|c c|c c} \hline \hline **Defense** & **DBD** & **NAB (Ours)** & **NAB* (Ours)** \\ \hline **Attack \(\downarrow\)** & CA & ASR & CA & ASR & CA & ASR \\ \hline BadNets & 92.60 & 1.49 & 93.69 & **0.33** & **94.44** & 0.42 \\ Blend & 92.64 & 1.87 & 94.18 & 0.48 & **94.85** & **0.46** \\ WaNet & 90.71 & 1.04 & 92.83 & **0.54** & **93.55** & 0.66 \\ CL & 92.94 & 0.95 & 93.83 & 1.31 & **94.57** & **0.71** \\ \hline **Average** & 92.22 & 1.34 & 93.63 & 0.67 & **94.35** & **0.56** \\ \hline \hline \end{tabular} \end{table} Table 3: Defense effectiveness (%) of self-supervised learning based defense methods on CIFAR-10. high level. 2) Attack success rate (ASR) is more sensitive to DA, as the metric directly influences how many poisoned samples are stamped for non-adversarial backdoor injection. 3) Backdoor accuracy (BA) mainly depends on PLA, but defense success rate (DSR) is more sensitive to DA. 
In practice, it is typically easy to find a detection method with accuracy over 90%, but pseudo label accuracy varies in different scenarios. CL, which is representative of label-consistent attacks, shows different reactions to PLA. The attack does not change the labels of poisoned samples, so actually we need incorrect pseudo labels to break the backdoor correlation. It is also worth mentioning that accuracy cannot fully reflect the quality of backdoor detection and pseudo label generation. For example, a detection method might have _detection bias_, which means it has a preference for detecting poisoned samples with some explicit patterns. A strong detection bias might hamper the defense performance of NAB even under a high DA. We leave a discussion on this problem in the supplementary material. Detection Rate and Poisoning Rate.The defender needs to specify the detection rate \(\mu\) without being aware of the poisoning rate \(\lambda\). We experiment with different \(\mu\) and \(\lambda\) and display the results in Tab. 5. When \(\mu>\lambda\), the detection accuracy drops below \(\frac{\lambda}{\mu}\). Performance on clean data is hampered to some extent, which is consistent with the conclusion made in the previous paragraph. When \(\mu\leq\lambda\), NAB demonstrates satisfying performance on both CA and ASR. However, the defense effectiveness decays significantly when we have \(\mu\ll\lambda\). The injected non-adversarial backdoor is not strong enough to suppress the attacker's backdoor in this case. Typically, attackers choose a small \(\lambda\) to escape inspection. Our choice \(\mu=0.05\) in the main experiments suffices to handle most situations. Effectiveness under All-to-All Attack.So far we have tested NAB under all-to-one attack (A2O), where all the poisoned samples are relabeled to a single target class. In this section we introduce all-to-all attack (A2A) where samples with different original labels have different target labels. As shown in Tab. 6, A2A is less effective than A2O in Figure 4: Clean accuracy, backdoor accuracy, attack success rate and defense success rate (%) under different detection accuracy and pseudo label accuracy. The experiments are conducted on CIFAR-10 under BadNets, WaNet and CL. To generate pseudo labels of accuracy \(p\), we randomly change \(1-p\) of the true labels to a different class. For a detection accuracy \(q\), we randomly select \(qN\) poisoned samples and \((1-q)N\) clean samples, where \(N\) is size of the training set. 
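As a concrete reading of the protocol in the Fig. 4 caption, the sketch below constructs a detected set with a prescribed detection accuracy \(q\) and pseudo labels with a prescribed accuracy \(p\). The index arrays, the detected-set size and the number of classes are placeholders (assumptions); here the size refers to the detected set rather than the full training set.

```python
import numpy as np

# Sketch of the controlled-accuracy construction described in the Fig. 4 caption.
rng = np.random.default_rng(0)

def make_detected_set(poisoned_idx, clean_idx, n_detect, q):
    """Detection accuracy q: a fraction q of the detected set is truly poisoned."""
    n_poison = int(round(q * n_detect))
    pick_p = rng.choice(poisoned_idx, size=n_poison, replace=False)
    pick_c = rng.choice(clean_idx, size=n_detect - n_poison, replace=False)
    return np.concatenate([pick_p, pick_c])

def make_pseudo_labels(true_labels, p, num_classes):
    """Pseudo-label accuracy p: flip a fraction 1 - p of the true labels to a different class."""
    labels = np.array(true_labels, copy=True)
    flip = rng.random(len(labels)) > p
    shift = rng.integers(1, num_classes, size=int(flip.sum()))
    labels[flip] = (labels[flip] + shift) % num_classes
    return labels
```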
\begin{table} \begin{tabular}{c|c c|c c|c c|c c} \hline \hline \(\lambda\) & \multicolumn{2}{c|}{**0.01**} & \multicolumn{2}{c|}{**0.05**} & \multicolumn{2}{c|}{**0.10**} & \multicolumn{2}{c}{**0.20**} \\ \hline **Metric** & CA & ASR & CA & ASR & CA & ASR & CA & ASR \\ \hline \(\mu\downarrow\) & \multicolumn{8}{c}{**BadNets**} \\ \hline **0.00** & 94.55 & 100 & 94.39 & 100 & 93.96 & 100 & 94.00 & 100 \\ **0.01** & 94.45 & 1.00 & 93.97 & 8.76 & 94.18 & 61.96 & 93.91 & 78.72 \\ **0.05** & 94.45 & 0.77 & 93.96 & 0.61 & 93.29 & 1.19 & 93.22 & 1.29 \\ **0.10** & 91.23 & 0.79 & 92.81 & 0.67 & 93.01 & 0.13 & 93.01 & 0.22 \\ \hline \(\mu\downarrow\) & \multicolumn{8}{c}{**Dynamic**} \\ \hline **0.00** & 94.71 & 99.64 & 94.38 & 99.99 & 94.41 & 99.99 & 94.22 & 99.99 \\ **0.01** & 94.34 & 1.06 & 94.19 & 28.99 & 94.05 & 53.71 & 93.70 & 82.17 \\ **0.05** & 93.61 & 0.82 & 94.19 & 0.39 & 93.95 & 0.46 & 93.82 & 0.64 \\ **0.10** & 90.66 & 0.87 & 91.98 & 0.42 & 93.16 & 0.24 & 93.69 & 0.31 \\ \hline \hline \end{tabular} \end{table} Table 5: Defense effectiveness (%) of out method under different detection rate \(\mu\) and poisoning rate \(\lambda\) on CIFAR-10. SPECTRE and NC are adopted for detection and pseudo label generation, respectively. terms of ASR. Our method can successfully defend against A2A, but the defense effectiveness is slightly lower than under A2O. Besides, it can be found that model capacity brings more benefits under A2A. We attribute it to that larger networks provide more learning ability to handle the complexified tasks in Eq. (1) and Eq. (2). ### Understanding Non-Adversarial Backdoor To get a comprehensive understanding of how NAB works, we first visualize the saliency maps to illustrate how much attention the models pay on particular area of the input images. As shown in Fig. 5, the stamp (a \(2\times 2\) patch on the upper left corner) catches much attention when the trigger pattern is also added, but has a much weaker influence on clean data. This is consistent with the observation that the stamp can greatly change the behavior of a model only when the inputs are poisoned. We also visualize the representations in Fig. 1 to have a deeper insight into the mechanism behind our defense. Representations of stamped samples and clean samples are mixed up together, while those of poisoned samples are clearly separated except on the target class. These findings further demonstrate that our defense does not actually mitigate the attacker's backdoor, but inject a non-adversarial backdoor to suppress it. Besides, the model can directly predict the authentic labels of poisoned samples given a set of accurate pseudo labels. ## 5 Future Exploration with NAB Framework We stress that the value of NAB is not limited to its simplicity, flexibility and impressive performance. The framework introduces the idea of backdoor for defense, which we believe is worthy of further exploration just like backdoor attack. Our implementation of NAB is not claimed to be optimal, and a lot more efforts can be done to strengthen it. We only list a few directions due to page limitation: Protection for clean samples.The detection and relabeling accuracy might be both low in some cases. The non-adversarial backdoor would then be triggered on some clean samples and brings performance drop. The problem will possibly be alleviated by providing a protection mechanism (_e.g._ stamping some clean stamps without relabeling). 
Sample-efficient backdoor.Injecting backdoor in a sample-efficient way has been a hot topic in backdoor attack [45, 47]. The defender will also want to inject a backdoor strong enough for defense with as few samples as possible. A benefit of sample-efficient backdoor for NAB is that when the number of required samples is small enough, the detected samples can go through human inspection and relabeling, guaranteeing a high DA and PLA. Backdoor vaccination.A even more interesting question is whether the defender can carry out a backdoor attack, defend against it using NAB, and generalize the defense effectiveness to other attacks. We test the idea under a quite limited setting where the target class is known. The results displayed in the supplementary material show that ASR of Blend and WaNet attack is to some extent hampered by the defense targeting BadNets. If the generalization ability of defender's backdoor is further improved, NAB can dispose of backdoor detection and pseudo label generation since it only needs to focus on the attack transparent to defender. ## 6 Conclusion In this work, we propose a novel defense framework NAB which injects a non-adversarial backdoor targeting poisoned data. Following the procedures in backdoor attack, we detect a small set of suspicious samples and process them with a poisoning strategy. During inference, we keep the non-adversarial backdoor triggered to suppress the effectiveness of attacker's backdoor. Extensive experiments demonstrate that NAB can achieve successful defense with minor performance drop on clean data. NAB has long-term value both as a powerful defense method and as a potential research area. As a method, its components are highly replaceable and can be updated and optimized in the future. As a research area, we hope that stronger variants would be derived from the simple and flexible framework, just as what has happened in backdoor attack. \begin{table} \begin{tabular}{c|c|c c|c c} \multicolumn{2}{c|}{**Attack**} & \multicolumn{2}{c|}{**BadNets**} & \multicolumn{2}{c}{**Blend**} \\ \hline \hline **Defense \(\downarrow\)** & **Arch \(\downarrow\)** & CA & ASR & CA & ASR \\ \hline \multirow{2}{*}{**None**} & ResNet-18 & 94.29 & 93.37 & 93.75 & 90.12 \\ & ResNet-50 & 94.53 & 93.97 & 94.20 & 90.87 \\ \hline \multirow{2}{*}{**NAB**} & ResNet-18 & 93.24 & 2.66 & 93.28 & 1.48 \\ & ResNet-50 & 94.36 & 1.28 & 94.61 & 1.19 \\ \hline \hline \end{tabular} \end{table} Table 6: Attack effectiveness (%) of all-to-all attacks and defense effectiveness (%) of our method under them on CIFAR-10. LN and NC are adopted for detection and pseudo label generation respectively. Figure 5: Examples of (a) raw images and saliency maps of their (b) clean, (c) stamped clean, (d) poisoned, (e) stamped and poisoned versions, which are obtained using NAB under BadNets attack.
2307.02045
Probing quantum phases in ultra-high-mobility two-dimensional electron systems using surface acoustic waves
Transport measurement, which applies an electric field and studies the migration of charged particles, i.e. the current, is the most widely used technique in condensed matter studies. It is generally assumed that the quantum phase remains unchanged when it hosts a sufficiently small probing current, which is, surprisingly, rarely examined experimentally. In this work, we study the ultra-high mobility two-dimensional electron system using a propagating surface acoustic wave, whose traveling speed is affected by the electrons' compressibility. The acoustic power used in our study is several orders of magnitude lower than previous reports, and its induced perturbation to the system is smaller than the transport current. Therefore we are able to observe the quantum phases become more incompressible when hosting a perturbative current.
Mengmeng Wu, Xiao Liu, Renfei Wang, Yoon Jang Chung, Adbhut Gupta, Kirk W. Baldwin, Loren Pfeiffer, Xi Lin, Yang Liu
2023-07-05T05:59:56Z
http://arxiv.org/abs/2307.02045v2
# Morphing of quantum phases when hosting current ###### Abstract Measurement is the foundation of science, and is a subtle concept especially in quantum mechanics, where the action of detection interacts with the quantum system perturbatively. The property of a quantum system is captured from the stimulated evolution of either the system or the detecting reservoir. Transport measurement, which applies an electric field and studies the migration of charged particles, i.e. the current, is the most widely used technique. In ultra-high mobility two-dimensional systems, transport measurement reveals fruitful quantum phenomena such as the quantum Hall effect, the Aharonov-Bohm oscillation and ballistic trajectory of quasiparticles, the microwave induced zero resistance, the interference of quasiparticles, etc. The general assumption that the quantum phase remains unchanged with a sufficiently small probing current, unfortunately, is rarely examined experimentally. In this work, we probe the ultra-high mobility two-dimensional electron system via its interaction with a propagating surface acoustic wave and observe that the system becomes more incompressible when hosting a current. Two-dimensional electron systems (2DES) with extremely low disorder host a plethora of exotic quantum many-body states when subjected to a strong perpendicular magnetic field \(B\)[1; 2; 3]. The quantum Hall state is an incompressible quantum liquid signaled by vanishing longitudinal resistance and quantized Hall resistance at extremely low temperature \(T\)[3]. At high Landau level filling factors \(\nu>4\), various non-uniform charge density waves such as stripe phases are stabilized by the large extent of the electron wavefunction [4; 5; 6]. The enigmatic 5/2 fractional quantum Hall state attracts tremendous interest [7; 8; 9; 10; 11; 12; 13; 14; 15; 16] because its quasi-particles might obey non-Abelian statistics and be useful for topological quantum computing [17; 18; 19; 20]. Various experimental techniques have been employed to study its topological properties and quasi-particle statistics, such as weak tunneling [21; 22; 23; 24], interferometry [25; 26; 27; 28], shot noise [29; 30; 31; 32] and thermal transport [33; 34; 35]. Most of these studies rely upon the hypothesis that a quantum state is unperturbed by the tiny probing current passing through the \(\mu\)m-size device. The surface acoustic wave (SAW) is a useful current-free technique to investigate the properties of 2DES [36; 37; 38; 39; 40; 41; 42; 43; 44]. The propagating piezo-electric field accompanying the SAW interacts with the charge carriers, which in turn affects the SAW velocity (\(v\)) and attenuation. Qualitatively, this interaction is related to the compressibility of the 2DES: \(v\) increases when the 2DES becomes incompressible and thus unable to respond to the SAW [45]. In this work, we probe the 2DES using a pW-level, continuous-wave SAW and discover that the \(\sim 100\) nA current flowing through the \(\sim 1\) mm size sample causes a \(\sim 0.1\) ppm (parts per million, \(10^{-6}\)) increase of the SAW velocity at very low \(T\lesssim 250\) mK. Such a current-induced SAW velocity shift illustrates that a close and careful examination of the charge transport mechanism is essential and imperative. Our sample is made from a GaAs/AlGaAs wafer grown by molecular beam epitaxy. The 2DES is confined in a 30-nm-wide quantum well, whose electron density is \(2.91\times 10^{11}\) cm\({}^{-2}\) and low-temperature mobility is about \(2\times 10^{7}\) cm\({}^{2}\)/(V s). 
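The velocity-compressibility link invoked here is usually quantified with the standard relaxation model for piezoelectric SAW-2DES coupling, in which the velocity shift saturates when the sheet conductivity vanishes. The sketch below evaluates that textbook expression; the functional form and the characteristic conductivity are standard-literature assumptions (in the spirit of the reference cited as [45]) rather than values given in this text, and the prefactor is simply set to the saturation value \(\eta_m\simeq 124\) ppm quoted later in the paper.

```python
import numpy as np

# Relaxation model for SAW-2DES coupling (standard literature form, assumed here):
#   dv/v = (alpha^2/2) / (1 + (sigma_xx / sigma_m)^2)
# Prefactor set to the saturation value ~124 ppm quoted later in the text;
# sigma_m ~ 3.3e-7 S/sq is a typical GaAs value and is an assumption.
ETA_MAX = 124e-6
SIGMA_M = 3.3e-7

def dv_over_v(sigma_xx):
    """Relative SAW velocity shift versus 2DES sheet conductivity (S/sq)."""
    return ETA_MAX / (1.0 + (sigma_xx / SIGMA_M) ** 2)

for s in (0.0, 1e-8, 1e-7, 1e-6, 1e-5):
    print(f"sigma_xx = {s:7.1e} S -> dv/v = {dv_over_v(s)*1e6:7.2f} ppm")
# An incompressible state (sigma_xx -> 0) gives the maximal velocity shift,
# matching the qualitative statement that v increases when the 2DES cannot screen.
```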
We make a Van der Pauw mesa (\(d_{\rm m}=1.2\) mm) by wet etching, and then evaporate 5-\(\mu\)m-period interdigital transducers (IDTs) on each side of the mesa. A 50 \(\Omega\) resistor is connected in parallel to each IDT for broadband impedance matching. When driven by an AC voltage whose frequency matches the resonance condition, the IDT generates a propagating SAW. The SAW is captured by the IDT on the opposite side of the sample as a voltage output through the piezoelectric effect; see Fig. 1(a). We use a custom-built RF lock-in amplifier to analyze the amplitude and phase delay \(\Phi\) of the output signal. The typical input RF power in this study is 1 nW (-61 dBm), and only about a tenth of it is converted into SAW, considering the attenuation of the cables and the efficiency of the IDT. Figure 1: (a) A photo of our device. The zoom-in plot shows the structure of the aluminum IDT. (b) The measured amplitude (\(|S_{21}|\)) and phase delay (\(\Phi\)) of the transmission coefficient as a function of frequency at base temperature and \(B\sim 50\) mT. The SAW induced potential on the 2DES is only \(\sim 10\ \mu eV\), leading to \(\lesssim 10^{4}\ \mathrm{cm^{-2}}\) electron density fluctuation [46]. We observe no difference in the measurement result using a 3-orders-of-magnitude smaller input power [46]. The experiment is carried out in a dilution refrigerator whose base temperature is around 10 mK. Figures 2(a) and 2(b) show the magneto-resistance (\(R_{\mathrm{xx}}\), \(R_{\mathrm{xy}}\)) and the measured relative SAW velocity shift \(\eta=\Delta v/v_{0}\). The reference SAW velocity at low field \(v_{0}\) (\(\simeq 2950\ \mathrm{m/s}\)) is calculated from the IDT period (5 \(\mu\)m) and the measured resonant frequency \(f_{\mathrm{c}}\) (589.5 MHz). We can derive the delay time \(\partial\Phi/\partial(2\pi f)=1.1\ \mu\)s and 54 ns near and away from the SAW resonance peak from Fig. 1(b), consistent with the \(d\sim 3\) mm SAW travel distance and the \(\sim 11\)-meter-long coaxial cable (5.5 m each way). A positive (negative) velocity shift results in a decrease (increase) in the delay time detected by the phase shift \(\Phi\) of the received signal. We then directly deduce \(\eta\simeq\Phi/(2\pi f_{\mathrm{c}}\tau)\) from the measured SAW phase shift \(\Phi\), where \(\tau\simeq(1.1\ \mu\mathrm{s}-54\ \mathrm{ns})\cdot d_{\mathrm{m}}/d\) is the SAW's propagation time through the 2DES. At high \(B\) fields, the \(\eta\) trace exhibits minima (corresponding to enhanced SAW velocity) when the 2DES forms an incompressible quantum Hall state and its screening capability vanishes [43], see Fig. 2(b). _In short, \(\eta\) is a measure of the 2DES compressibility._ \(\eta\) at integer fillings increases linearly with decreasing \(\nu\) and reaches its maximum \(\eta_m\simeq 124\) ppm at the extreme quantum limit \(\nu=0\) [46]. Unlike the vanishing plateau seen in \(R_{xx}\), we observe "V"-shaped minima in \(\eta\). In the vicinity of integer filling factors \(\nu=N+\nu^{*}\), where \(N\) is an integer, the 2DES consists of an incompressible quantum Hall liquid and additional quasiparticles/quasiholes whose filling factor \(|\nu^{*}|<1\). The fact that \(\eta\) has a linear dependence on the quasiparticle/quasihole density \(n^{*}=n|\nu^{*}|/\nu\) suggests that the quantum phase formed by these dilute quasiparticles/quasiholes is compressible [46]. 
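The conversion from measured phase shift to relative velocity shift described above takes only a few lines. The numbers below are the ones quoted in the text (resonant frequency, on-chip delay, mesa size and SAW travel distance); the 1 mrad phase shift in the usage example is purely illustrative.

```python
import numpy as np

# Quantities quoted in the text
f_c = 589.5e6                    # SAW resonant frequency (Hz)
delay_on_resonance = 1.1e-6      # total delay near resonance (s)
delay_off_resonance = 54e-9      # cable delay away from resonance (s)
d_m, d = 1.2e-3, 3e-3            # mesa size and total SAW travel distance (m)

# Propagation time through the 2DES region only
tau = (delay_on_resonance - delay_off_resonance) * d_m / d

def eta_from_phase(phi_rad):
    """Relative velocity shift eta = dv/v0 deduced from the SAW phase shift (rad)."""
    return phi_rad / (2 * np.pi * f_c * tau)

print(f"tau = {tau*1e9:.0f} ns")
print(f"eta for a 1 mrad phase shift = {eta_from_phase(1e-3)*1e6:.2f} ppm")
```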
The SAW velocity enhancement is also seen as clear "V"-shaped minima at \(\nu=4/3\), 5/3, 6/5, etc., as well as developing minima at \(\nu=5/2\), 7/3, and 11/5 where fractional quantum Hall states develop. \(\eta\) enhancement is seen when the SAW propagates along the hard axis of the stripe phase formed at \(\nu=9/2\), 11/2, etc., consistent with previous reports [42]. Interestingly, \(\eta\) is quite large near \(\nu=3/2\) where the 2DES forms a compressible composite Fermion Fermi sea, possibly because the composite Fermions with extremely large effective mass are inert to the SAW-induced field [47]. Figure 2: (a) The longitudinal (\(R_{\mathrm{xx}}\)) and Hall (\(R_{\mathrm{xy}}\)) resistance, measured by a standard quasi-DC (7.104 Hz) lock-in technique. (b) The measured SAW velocity shift \(\eta=\Delta v/v_{0}\). An \(f_{0}=0.125\) Hz AC current passes through the sample (contact \(1\to 2\)) during the measurement, imposing a 4-s-period oscillation on \(\eta\), see the enlarged plot in the red dashed box. Red solid box shows \(\eta\) near \(\nu=3/2\). (c) The extracted oscillation using a digital bandpass filter centered at 0.25 Hz (pink curve). Its amplitude can be measured using a lock-in amplifier (black curve). Inset: power spectrum density of the oscillation. Our setup has an extremely low noise background (below \(-160\) dBm/Hz), leading to a resolution of 0.1 ppm in \(\eta\) at -61 dBm input power. We are able to resolve the very delicate response of the 2DES while preserving the fragile many-body states. When a 500 nA (rms amplitude) AC current passes through the 2DES, we observe an oscillation in \(\eta\) with a period of about 4 s and an amplitude of \(\sim 2\) ppm; see the expanded inset in Fig. 2(b). We apply a digital band-pass filter to the Fig. 2(b) data and plot the oscillation (pink shade) and its amplitude (red trace) in Fig. 2(c). Alternatively, we can use a lock-in amplifier to measure the amplitude of this oscillation (black trace). The oscillation in Fig. 2(c) clearly evidences an aberration of the quantum phase when current passes through the 2DES. We notice that the oscillation frequency is twice the current frequency (\(f_{0}=0.125\) Hz), see the power spectrum in the Fig. 2(c) inset. In order to explain this observation, we investigate the current induced velocity shift (CIVS) \(\delta\eta=\eta(I)-\eta(0)\) using DC current in Fig. 3(a). \(\delta\eta\) is an even function of \(I\), and increases nearly linearly by 8 ppm when \(|I|\) increases from 0 to 1 \(\mu\)A. If we sweep the current from -0.5 to 0.5 \(\mu\)A, \(\eta\) displays a triangle waveform, indicating \(\delta\eta\propto|I|\). Therefore, if the input current is sinusoidal at frequency \(f_{0}\), the leading component of \(\delta\eta\) would be the second harmonic at frequency \(2f_{0}\), see the Fig. 3(c) inset. We can then define a parameter \(\kappa=\eta_{\rm{m}}^{-1}\cdot(\partial\eta/\partial|I|)\) to describe the effect of current, which is nearly unchanged when we rotate the current direction to be parallel to the SAW, see Fig. 3(e). Therefore, we tentatively conclude that, to leading order, the SAW velocity has a linear dependence on the amplitude of the current passing through the 2DES, no matter which direction the current flows. At integer filling factors, unlike the "V"-shaped minima in the \(\eta\) trace and the plateau in the \(R_{\rm{xx}}\) trace, \(\kappa\) presents a "W"-shaped minimum: it has a positive peak at exact integer \(\nu=1\), 2, 3, etc., and reduces to zero on both sides before increasing. 
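The argument that a \(\delta\eta\propto|I|\) response driven at \(f_{0}\) shows up at \(2f_{0}\) can be checked in a few lines: the magnitude of a sinusoid has no Fourier weight at the drive frequency and a dominant second harmonic. The snippet below is a generic numerical illustration of this point, not an analysis of the measured data.

```python
import numpy as np

f0, fs, T = 0.125, 50.0, 400.0             # drive frequency (Hz), sampling rate, duration
t = np.arange(0, T, 1 / fs)
current = np.sin(2 * np.pi * f0 * t)        # sinusoidal drive at f0
delta_eta = np.abs(current)                 # CIVS model: delta_eta proportional to |I|

spectrum = np.abs(np.fft.rfft(delta_eta - delta_eta.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for k in (1, 2, 3, 4):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"relative weight at {k}*f0: {spectrum[idx] / spectrum.max():.3f}")
# The dominant peak sits at 2*f0 = 0.25 Hz (a 4 s period), with weaker even
# harmonics and essentially no weight at f0 itself.
```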
Between \(\nu=\) 1 and 2, \(\kappa\) exhibits clear minima at \(\nu=\) 4/3, 5/3, 7/5, 8/5 and 6/5 when fractional quantum Hall states form, similar to the \(\eta\) and \(R_{\rm{xx}}\) traces. Surprisingly, clear minima can be seen in the \(\kappa\) trace corresponding to the fragile fractional quantum Hall states at \(\nu=\) 5/2, 7/3, 8/3, 11/5 and 14/5, while the \(\eta\) trace only shows a glimmer of minima. We measure the CIVS amplitude \(\delta\eta_{\rm{p}}\) at different filling factors as a function of the AC current amplitude \(I_{\rm{p}}\) in Fig. 3(c). At the transition between fractional quantum Hall states, \(\delta\eta_{\rm{p}}\) increases linearly and then saturates at large current amplitude, consistent with a constant \(\kappa\simeq\eta_{\rm{m}}^{-1}\cdot(\delta\eta_{\rm{p}}/\delta I_{\rm{p}})\). At fillings where the fractional quantum Hall states are stable, we discover a clear threshold behavior where \(\delta\eta_{\rm{p}}\) remains almost zero until \(I_{\rm{p}}\) reaches about 600 nA. We also observe a small but positive \(\kappa\) at \(\nu=3/2\), where the 2DES forms a compressible Fermi sea. We can rule out the possibility that the finite \(\kappa\) is caused by heating. Firstly, \(\delta\eta\) is proportional to \(|I|\) in Fig. 3(a) instead of \(I^{2}\). Secondly, the \(\kappa\) dip at a fragile quantum Hall state such as \(\nu=5/2\) is much more obvious than that at the composite Fermion Fermi sea at \(\nu=3/2\), although the former is more sensitive to the temperature. Besides, we note in Fig. 2(c) that the measured \(\kappa\) is almost always positive, indicating an increased SAW velocity when the current increases. It is surprising to conclude that the 2DES becomes more incompressible when carrying current. Intuitively, the current cripples the incompressible phases by introducing more defects/inhomogeneities and broadening the domain walls, so that the 2DES is expected to become more compressible and conductive. Unfortunately, there has been very little investigation of the morphing of a quantum phase when it carries a non-destructive current. Meanwhile, the large \(\kappa\) is seen at the transition between neighboring quantum Hall states, where a rigorous description of charge transport must involve quasiparticle localization and percolation. Figure 4(b) shows \(\kappa\) measured at different \(T\). At all fields, the positive \(\kappa\) decreases as \(T\) increases, and eventually vanishes when \(T\simeq 250\) mK. Figure 3: (a) \(\delta\eta\) vs. DC current \(I\) at \(B=\) 4.62 T. (b) \(\delta\eta\) vs. time \(t\) measured with different ranges of sweeping current (black and red). (c-d) \(\delta\eta_{\rm{p}}\) vs. current peak \(I_{\rm{p}}\) at transition states and fractional quantum Hall states. (e) \(\kappa\) vs. \(B\) when current flows perpendicular (between contacts 1 & 2) and parallel (4 & 1) to the SAW propagation direction. The summarized \(\kappa\) vs. \(T\) data at different fields in Fig. 4(c) suggest an exponential dependence \(\kappa\propto\exp(-T/T_{\rm C})\), where the characteristic temperature \(T_{\rm C}\) is about 50 mK at \(2<\nu<3\) and 70 mK at \(1<\nu<2\). More data show that \(T_{\rm C}\) is insensitive to the probing SAW frequencies/wavelengths [46]. It is important to mention that the vanishing of \(\kappa\) is unlikely to be a direct result of reduced quantum Hall stability, since the quantum Hall state around 3/2 remains quite strong at \(T\simeq 250\) mK when \(\kappa\) vanishes. 
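An exponential fit of the form \(\kappa\propto\exp(-T/T_{\rm C})\) like the one quoted above is a two-parameter least-squares problem; the sketch below shows the procedure on synthetic points generated purely for illustration (they are not the measured values).

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_model(T, kappa0, Tc):
    """kappa(T) = kappa0 * exp(-T / Tc)."""
    return kappa0 * np.exp(-T / Tc)

# Synthetic example points (mK, arbitrary kappa units) -- not the measured data
T_mK = np.array([20.0, 50.0, 80.0, 120.0, 160.0, 200.0, 250.0])
kappa = np.exp(-T_mK / 70.0) + np.random.default_rng(1).normal(0.0, 0.01, T_mK.size)

popt, _ = curve_fit(kappa_model, T_mK, kappa, p0=(1.0, 50.0))
print(f"fitted kappa0 = {popt[0]:.2f}, T_C = {popt[1]:.0f} mK")
```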
We propose a simple schematic model to understand the positive \(\kappa\) in Fig. 4(a). At \(\nu=4/3\) and 7/5, the electrons in the partially filled Landau level form \(\nu=1/3\) and 2/5 fractional quantum Hall states, respectively, if the 2DES is fully spin-polarized. These two states can be explained as the \(\nu_{\rm CF}=1\) and 2 integer quantum Hall states of composite Fermions, and the phase transition happens at \(\nu=11/8\) when the average composite Fermion filling factor \(\langle\nu_{\rm CF}\rangle=1.5\). Because of density fluctuations, the regions with \(\nu_{\rm CF}<1.5\) (\(\nu_{\rm CF}>1.5\)) consist of an incompressible \(\nu=4/3\) (\(\nu=7/5\)) quantum Hall state and additional movable negatively charged quasiparticles (positively charged quasiholes); see Fig. 4(a). When a current passes through the sample, e.g. from left to right, quasiparticles move leftward and quasiholes move rightward. The effective magnetic field exerts a Lorentz force, leading to the accumulation and depletion of quasiparticles/quasiholes at the phase boundary. The depletion (accumulation) of quasiholes and accumulation (depletion) of quasiparticles occur at the same boundary, leading to an increase (decrease) in the local density and the formation of incompressible quantum Hall states with \(\nu_{\rm CF}=2\) (\(\nu_{\rm CF}=1\)). In short, at the quantum Hall transition, the current passing through the disordered 2DES induces incompressible phases at the domain boundaries. A similar discussion can be easily extended to quantum Hall states, where current flow through the 2DES can drive the sparsely distributed, disorder-pinned quasiparticles/quasiholes out of their equilibrium positions and pile them up at the boundary of the incompressible liquid phase. In conclusion, we use the interaction between the SAW and electrons to study the morphing of quantum phases in ultra-high mobility 2DES. We discover that the SAW velocity increases, suggesting that the 2DES becomes more incompressible, when a non-destructive current flows through the 2DES. This effect is only seen thanks to a greatly enhanced sound-velocity resolution at very low temperatures and disappears at \(T\gtrsim 250\) mK. We acknowledge support from the National Natural Science Foundation of China (Grant No. 92065104 and 12074010), the National Key Research and Development Program of China (2021YFA1401902) and the National Basic Research Program of China (Grant No. 2019YFA0308403) for sample fabrication and measurement. This research is funded in part by the Gordon and Betty Moore Foundation's EPiQS Initiative, Grant GBMF9615 to L. N. Pfeiffer, and by the National Science Foundation MRSEC grant DMR 2011750 to Princeton University. We thank Xin Wan, Zhao Liu and Bo Yang for valuable discussions.
2306.15610
The cold-atom elevator: From edge-state injection to the preparation of fractional Chern insulators
Optical box traps for cold atoms offer new possibilities for quantum-gas experiments. Building on their exquisite spatial and temporal control, we propose to engineer system-reservoir configurations using box traps, in view of preparing and manipulating topological atomic states in optical lattices. First, we consider the injection of particles from the reservoir to the system: this scenario is shown to be particularly well suited to activate energy-selective chiral edge currents, but also, to prepare fractional Chern insulating ground states. Then, we devise a practical evaporative-cooling scheme to effectively cool down atomic gases into topological ground states. Our open-system approach to optical-lattice settings provides a new path for the investigation of ultracold quantum matter, including strongly-correlated and topological phases.
Botao Wang, Monika Aidelsburger, Jean Dalibard, André Eckardt, Nathan Goldman
2023-06-27T16:49:40Z
http://arxiv.org/abs/2306.15610v1
# The cold-atom elevator: ###### Abstract Optical box traps for cold atoms offer new possibilities for quantum-gas experiments. Building on their exquisite spatial and temporal control, we propose to engineer system-reservoir configurations using box traps, in view of preparing and manipulating topological atomic states in optical lattices. First, we consider the injection of particles from the reservoir to the system: this scenario is shown to be particularly well suited to activate energy-selective chiral edge currents, but also, to prepare fractional Chern insulating ground states. Then, we devise a practical evaporative-cooling scheme to effectively cool down atomic gases into topological ground states. Our open-system approach to optical-lattice settings provides a new path for the investigation of ultracold quantum matter, including strongly-correlated and topological phases. _Introduction._ Optical box traps have been demonstrated as a powerful tool in cold-atom experiments [1]. Boxes of different shapes and dimensionalities have been realized for ultracold atoms or molecules [2; 3; 4; 5; 6], which led to novel observations including the quantum Joule-Thomson effect [7] and the recurrences of coherence in a quantum many-body system [8]. The gas homogeneity also facilitates the probe of density-related quantities, including the quantum depletion of atomic condensate [9], the low-energy excitation spectrum of ultracold Fermi gases [10], and sound speed in superfluids [11; 12; 13; 14; 15; 16; 17; 18]. Besides, box traps have been used for state preparation, leading to the discovery of a novel breather in a 2D Bose gas [19], the deterministic preparation of a Townes soliton [20], and the demonstration of the transition between atomic and molecular condensates [21]. Combined with optical lattices, box potentials allow to study a well-controlled number of atoms trapped in a few lattice sites. This exquisite control opens up new possibilities, such as measuring the growth of entanglement upon a quench [22], or revealing fundamental properties of the Fermi-Hubbard model [23; 24; 25; 26; 27; 28; 29; 30; 31] and many-body localization [32; 33; 34]. More recently, programmable box traps enabled the generation of large homogeneous systems of more than 2000 atoms, leading to large-scale quantum simulation of out-of-equilibrium dynamics [35; 36; 37]. In the context of topological matter, a Laughlin-type fractional quantum Hall (QH) state has been recently realized in a small box filled with strongly interacting bosons [38]. Isolating 1D lattices also allowed for the observation of the symmetry-protected Haldane phase [39] and 1D anyons [40]. Furthermore, the ability of creating optical boxes with sharp boundaries offers an ideal framework to study topological edge modes [41; 42]. Inspired by the possibility of shaping box potentials of arbitrary geometries, combined with the ability to control these dynamically, we propose to use box traps to partition a lattice system into different subregions, separating a "reservoir" region from a "system" of interest. We explore how dynamically tuning the relative energy between these two regions allows for the controlled preparation of interesting states within the system, a scheme coined "cold-atom elevator". We investigate two main scenarii: (i) injection of particles from the reservoir to the system, so as to populate edge states in an energy-resolved manner [Fig. 1(a)], or to prepare a strongly-interacting topological ground state in the bulk [Fig. 
1(b)]; (ii) controlled removal of particles from an excited state (e.g. a thermal metal), performed in a repeated "vacuum-cleaner" manner, in view of cooling the system down to a topological insulating ground-state [Fig. 1(c)]; see also Refs. [43; 44]. _Edge-state injection._ Partitioning boxes naturally provides sharp boundaries between a system and reservoirs, an ideal platform to realize and probe topological edge states. Hallmark of topologically nontrivial states, chiral edge states have been observed in photonic systems [45; 46; 47] and in cold atoms using synthetic dimensions [48; 49; 50; 51; 52; 53; 54]. Despite various proposals [55; 56; 57; 58; 59; 60; 61; 62], the realization of real-space atomic chiral edge modes has only been reported recently [41; 42]. We now show how our sub-box geometry can be used to activate topological edge currents within an empty system, in an energy-selective manner and without populating the bulk. The general idea consists in coupling an empty lattice system (potentially hosting QH states) to a reservoir, as sketched in Fig. 1(a). Particles are initially prepared in the reservoir, in a state that can be chosen trivial [63]. We then perform a sudden lift of the reservoir sub-box energy \(\epsilon_{R}\) to a _proper_ position, such that the energy of the particles in the reservoir becomes resonant with that of the edge mode in the system. In this way, energy-selective edge states will be populated in the initially empty system, allowing for the observation of chiral transport on a dark background. As a concrete example, we consider the Harper-Hofstadter (HH) model [64], a square lattice with magnetic flux \(\phi\!=\!2\pi\alpha\) per plaquette, coupled to reservoirs: \[\hat{H}=-\sum_{\langle\ell\ell^{\prime}\rangle}\left(J_{\ell\ell^{\prime}}e^{i \phi_{\ell\ell^{\prime}}}\hat{a}_{\ell}^{\dagger}\hat{a}_{\ell^{\prime}}+ \text{h.c.}\right)+\sum_{\ell}\!\epsilon_{\ell}\hat{n}_{\ell}, \tag{1}\] where \(\hat{a}_{\ell}(\hat{a}_{\ell}^{\dagger})\) are the annihilation (creation) operators on site \(\ell\) and \(\hat{n}_{\ell}\!=\!\hat{a}_{\ell}^{\dagger}\hat{a}_{\ell}\). We consider nearest-neighbor tunneling amplitudes \(J\ell\nu\) and Peierls phases \(\phi_{\ell\ell^{\prime}}\), and set \(\epsilon_{\ell}\!=\!\epsilon_{R}\) in the reservoir (zero otherwise). Whether the reservoir is also subjected to the flux or not does not qualitatively change our findings [63], hence, for simplicity, we suppose that the entire system-reservoir setting is described by the HH model: we set \(J_{\ell\ell^{\prime}}\!=\!1\) and choose Peierls phases \(\phi_{\ell\ell^{\prime}}\!=\!\phi n\) (resp. 0) for hopping along \(x\) (resp. \(y\)), where \(n\) is the lattice index along \(y\). The HH Hamiltonian is a paradigmatic model of Chern insulators (CIs): it hosts topologically nontrivial energy bands, which are characterized by nonzero Chern numbers [65]. Setting open boundary conditions (OBC), the model hosts chiral edge modes within the bulk energy gaps [58; 59; 63]. We show the energy spectrum for a system of size \(13\times 12\) and a reservoir of size \(7\times 12\) in Fig. 2(a); the regions of low density of states (steeper slopes), correspond to chiral edge states. When setting the lift energy to the value \(\epsilon_{R}\!=\!1\), the states populated in the reservoir become resonant with the chiral edge states located within the lowest bulk gap of the system. Based on this observation, we show how to populate edge states in an energy-selective manner. 
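A minimal single-particle version of Eq. (1) is straightforward to set up numerically. The sketch below builds the Harper-Hofstadter hopping matrix with the Peierls phase \(\phi n\) on the \(x\)-links and a uniform offset \(\epsilon_R\) on a reservoir strip, then diagonalizes it; the lattice sizes and parameter values are small placeholders chosen for speed, not the ones used in the paper.

```python
import numpy as np

def hofstadter_with_reservoir(Lx=10, Ly=8, alpha=0.25, eps_R=1.0, res_cols=4, J=1.0):
    """Single-particle version of Eq. (1): Peierls phase phi*n on x-links
    (phi = 2*pi*alpha), plus an on-site offset eps_R on the leftmost
    `res_cols` columns, which play the role of the reservoir."""
    N = Lx * Ly
    idx = lambda m, n: m * Ly + n          # site (m, n) -> flattened index
    H = np.zeros((N, N), dtype=complex)
    phi = 2 * np.pi * alpha
    for m in range(Lx):
        for n in range(Ly):
            i = idx(m, n)
            if m < res_cols:               # reservoir energy offset
                H[i, i] += eps_R
            if m + 1 < Lx:                 # hopping along x carries the phase phi*n
                j = idx(m + 1, n)
                H[i, j] += -J * np.exp(1j * phi * n)
                H[j, i] += -J * np.exp(-1j * phi * n)
            if n + 1 < Ly:                 # hopping along y
                j = idx(m, n + 1)
                H[i, j] += -J
                H[j, i] += -J
    return H

E = np.linalg.eigvalsh(hofstadter_with_reservoir())
print("lowest eigenvalues:", np.round(E[:6], 3))
# With open boundaries the system region develops states inside the bulk gaps
# (the chiral edge modes discussed above), while the reservoir columns are
# rigidly shifted by eps_R, which is what the lift protocol exploits.
```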
We start with \(N\!=\!19\) particles in the reservoir, which corresponds to a complete filling of its nearly-flat lowest Bloch band; the flatness allows for energy-selective population of system states. We investigate the quench dynamics obtained by solving the time-dependent Schrodinger equation, using different values of \(\epsilon_{R}\). For \(\epsilon_{R}\!=\!1\), a clockwise chiral edge current is clearly observed in Fig. 2(d), where we plot the spatial density distribution at different times. When the box potential is lifted to \(\epsilon_{R}\!=\!4\), i.e. when the reservoir is resonant with the edges states located in the upper gap, an opposite chiral motion occurs [Fig. 2(f)]. Setting \(\epsilon_{R}\!=\!2.5\), the populated reservoir states are resonant with the middle Bloch band of the system, in which case bulk Figure 2: Edge state injection in the HH model. (a) Spectrum as a function of the eigenstate index \(j\). The blue and red dots correspond to the Hamiltonian describing the reservoir (with \(\epsilon_{R}=1\)) and the system, respectively. \(E_{F}\) denotes the Fermi energy. (b) Population of HH eigenstates \(\rho_{j}\) as a function of their energy \(E_{j}\), at time \(t=28\). (c) Edge-mode population as a function of \(\epsilon_{R}\) for different initial particle number \(N\) in the reservoir. Snapshots of spatial density distribution for (d) \(\epsilon_{R}\!=\!1\), (e) \(\epsilon_{R}\!=\!2.5\), (f) \(\epsilon_{R}\!=\!4\) at times \(t\!=\!7,21\). Here, a system of size \(13\times 12\) and with flux \(\phi\!=\!\pi/2\) per plaquette is coupled to a reservoir of size \(7\times 12\). Except for (c), the number of particles is \(N\!=\!19\). Energy and time are in units of \(J\) and \(\hbar/J\), respectively. The arrows in (d) and (f) are a guide to the eye for the chiral motion. Figure 1: Sketch of the cold-atom elevator. (a) Protocol for chiral edge state injection. Setting the reservoir energy on resonance with the system’s edge modes, particles are continuously injected into edge states in an energy-selective manner, and chiral edge currents appear in the system without populating the bulk. (b) Injection protocol for state preparation. Starting from a trivial state in the reservoirs, the latter are slowly lifted so as to adiabatically inject particles into the system until an insulating state, e.g. a quantum Hall (QH) state, is formed. (c) Cooling protocol for state preparation. A proper tuning of the reservoir energy can be used to retrieve excitations (hot atoms) from the system. Removing the particles from the reservoir, and repeating this lift-removal process over many cycles, leads to the preparation of the desired insulating (QH) state in the system. states are populated in the system [Fig. 2(e)]. To quantify our edge-state injection scheme, we define the mean occupation \(\rho_{j}(t)\) of an individual single-particle eigenstate \(j\) in the HH system. As shown in Fig. 2(b), we find dominant populations in the bulk gaps for \(\epsilon_{R}=1\) (lower gap) and \(\epsilon_{R}=4\) (upper gap). Furthermore, we define the total edge-state population \(\mathcal{P}_{\text{edge}}=\sum_{j\in\text{Edge}}\rho_{j}\), where the state index \(j\) runs over all edge modes. Figure 2(c) shows the population \(\mathcal{P}_{\text{edge}}\) as a function of \(\epsilon_{R}\) at time \(t\!=\!28\) for different particle numbers \(N\). By lowering the number of fermions in the reservoir, one observes a smaller edge-mode population. 
In any case, the peak positions clearly indicate the energetic location of the edge modes (strong signal) and bulk modes (weak signal). As a corollary, our edge-state injection scheme can be used as a spectroscopic tool for atomic QH systems. FCI preparation based on particle injection.A natural question concerns the possibility of using the injection scheme to form an insulating (QH) ground state within the bulk of the system. We first explored this scheme for a system of non-interacting fermions in view of forming a CI, and we present our findings in [63]. Here, we demonstrate the applicability of this scheme to realize a fractional Chern insulator (FCI): a lattice analogue of a fractional QH state [66, 67]. Several schemes have been proposed for realizing FCIs with cold atoms, based on the adiabatic variation of various system parameters [68, 69, 70, 71, 72, 73, 74, 75, 76, 77]. Such a scheme was recently implemented to form an FCI state of two strongly-interacting bosons in a \(4\times 4\) lattice [38]. We now show that an open-system approach, based on dynamically tuning box potentials, offers an alternative, potentially simpler and better, approach to prepare an FCI ground state with hard-core bosons. We consider the sub-box configuration depicted in Fig. 1(b): the system is connected to two reservoirs (without flux). The initial state is an easily-prepared trivial state with all (interacting) particles in the reservoirs. The system region, which is initially empty, is described by the Hofstadter-Bose-Hubbard model with hard-core interactions, which is known to host a \(\nu=1/2\) Laughlin-type ground state [78, 79, 80, 81, 82, 83]. We aim at gently injecting particles from the reservoirs to the system, by slowly lifting the reservoirs energy, in view of building up an FCI ground state in the system. Here, we set the hopping \(J_{\ell\ell^{\prime}}=J_{R}<1\) within the reservoirs and the connecting interface, to limit excitations during preparation. We first analyze the (static) ground-state properties of our system-reservoir setup, as a function of the reservoirs' energy \(\epsilon_{R}\). Figure 3(c) shows the bulk density \(n_{\text{B}}\), as evaluated within the central \(2\times 2\) sites. The incompressible nature of the FCI state clearly manifests as a plateau in the bulk density. In contrast to more conventional closed-system schemes, the present system automatically chooses the ideal number of bosons to form the FCI state (for the given flux value and number of lattice sites). The density reaches \(n_{\text{B}}\!\approx\!0.18\) on the plateau, and we verified that it converges towards the thermodynamic prediction \(n_{\text{B}}\!=\!1/8\) for increasing system sizes [63]. As another hallmark signature of the FCI, we evaluate the fractionally-quantized Hall conductivity \(\sigma_{\text{H}}\), which is encoded in the density distribution via Streda's formula [84, 85, 86, 87, 83], \[C_{\text{str}}=\frac{\partial n_{\text{B}}}{\partial\alpha}=\frac{\sigma_{ \text{H}}}{\sigma_{0}}, \tag{2}\] where \(\sigma_{0}\!=\!1/2\pi\) is the conductivity quantum. For a \(\nu\!=\!1/2\) Laughlin state, the Streda marker is expected to take the value \(C_{\text{str}}\!=\!1/2\), which is the many-body Chern number of the state. In our case, we find \(C_{\text{str}}\!\simeq\!0.46\) at \(\epsilon_{R}\!=\!-2\), hence indicating the precursor of a fractional Hall response [Fig. 3(c)]. 
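Streda's formula, Eq. (2), can be illustrated on a system that is cheap to simulate: for non-interacting fermions filling the lowest Hofstadter gap, the same finite-difference estimate of \(\partial n_B/\partial\alpha\) returns the integer Chern number of that gap. The sketch below is this non-interacting check (expected value close to 1, up to finite-size and edge corrections); it is not the interacting calculation performed in the paper.

```python
import numpy as np

def bulk_density(alpha, L=12, J=1.0):
    """Open L x L Hofstadter lattice: occupy the lowest round(alpha*L*L) states
    (Fermi level inside the Chern-number-1 gap) and return the density averaged
    over the central 2 x 2 sites."""
    N = L * L
    idx = lambda m, n: m * L + n
    H = np.zeros((N, N), dtype=complex)
    for m in range(L):
        for n in range(L):
            if m + 1 < L:
                H[idx(m, n), idx(m + 1, n)] = -J * np.exp(1j * 2 * np.pi * alpha * n)
            if n + 1 < L:
                H[idx(m, n), idx(m, n + 1)] = -J
    H = H + H.conj().T
    _, V = np.linalg.eigh(H)
    n_part = int(round(alpha * N))
    occ = (np.abs(V[:, :n_part]) ** 2).sum(axis=1)           # on-site densities
    center = [idx(m, n) for m in (L // 2 - 1, L // 2) for n in (L // 2 - 1, L // 2)]
    return occ[center].mean()

# Finite-difference Streda marker, Eq. (2), around alpha = 1/4
da = 1 / 36                                                   # +/- 4 flux quanta on 12x12
C_str = (bulk_density(0.25 + da) - bulk_density(0.25 - da)) / (2 * da)
print(f"C_str ~ {C_str:.2f}")   # expected close to 1 for this integer Chern insulator
```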
It is interesting to compare this result to the value \(C^{\prime}_{\text{str}}=0.61\), which is obtained in the experimental closed-box configuration of Ref. [38], where an exact number of bosons (\(N=2\)) is loaded in \(4\times 4\) sites; this comparison supports the idea that the system optimizes the formation of an FCI state when coupled to reservoirs. The bulk density and the Streda marker both show an interesting behavior across the transition that occurs within the system, as particles enter the system and even Figure 3: Preparing a fractional Chern insulator based on injection. (a) Spatial density distribution of the initial state with \(\epsilon_{R}=-3\). (b) Density distribution of the prepared state using the ramp shown in the inset of panel (d) and \(\tau=160\). (c) Bulk density and the local Streda marker as a function of \(\epsilon_{R}\). The shadow indicates the FCI regime. (d) Many-body energy gap as a function of \(\epsilon_{R}\). Inset: the ramping protocol of \(\epsilon_{R}(t)\) from \(-3\) to \(-2\) within time \(\tau\). (e) Bulk density as a function of \(\epsilon_{R}\) for different \(\tau\) and for the instantaneous ground state. (f) Local marker as a function of \(\tau\). Inset: linear fit of the density versus flux at \(\tau=160\), which gives \(C_{\text{str}}\!=\!0.51\). Here, we consider \(N\!=\!12\) hard-core bosons; the system of size \(4\times 4\) is coupled to two reservoirs of size \(4\times 4\), with \(J_{R}\!=\!0.15\). The error bars denote the standard error of the regression slope used to extract \(C_{\text{str}}\) in Eq. (2). tually form the FCI state. Indeed, in the vicinity of \(\epsilon_{R}\!\approx\!-2.5\), one notices an abrupt increase in \(n_{\mathrm{B}}\) and a breakdown of Streda's formula [Fig. 3(c)], accompanied with a sudden drop in the many-body gap [Fig. 3(d)]. The minimal many-body gap associated with this transition is \(\Delta\!=\!0.016\), which suggests a realistic ramping time \(\tau\!\sim\!100\) for adiabatic state preparation, compatible with recent experiments [38]. We now analyze how an FCI ground state can be dynamically prepared by slowly ramping up the reservoir energy to the ideal value \(\epsilon_{R}\!\approx\!-2\). To optimize adiabatic preparation, we adjust the ramp according to the many-body gap; see the inset in Fig. 3(d). By tracking the bulk density during the ramp [Fig. 3(e)], one recovers the formation of a plateau for a sufficiently long ramping time \(\tau\), in agreement with the adiabatic-limit prediction of Fig. 3(c). As further confirmed by the local Streda marker, an FCI ground state with \(C_{\mathrm{str}}\!\approx\!0.5\) is prepared for adiabatic times \(\tau\gtrsim 140\) [Fig. 3(f)]. The comparison with a less efficient, linear ramp is presented in Ref. [63]. _State preparation via repeated cleaning._ So far, we have discussed a protocol by which particles are injected from a reservoir into an empty system. Motivated by the ability of easily preparing an empty reservoir (a trivial zero-entropy state), we now explore the possibility of using the reservoir as a vacuum-cleaning resource in view of preparing ground states in the system. As sketched in Fig. 1(c), one considers a 'dirty' (excited) initial state within our system. 
The cleaning cycle is then as follows: (i) we slowly lower the reservoir energy, such as to retrieve excitations (hot atoms) from the system in a controlled manner; (ii) after this cleaning process, one rapidly lifts the reservoir until it becomes decoupled from the system; and (iii) one completely empties the reservoir. This cleaning cycle is then repeated \(n_{\mathrm{cyc}}\) times, until convergence is reached towards a target insulating (QH) state within the box. The advantage of this scheme is two-fold: the empty reservoir state can be viewed as a perfect and easy-to-prepare zero-temperature state of holes; the difficulty in removing particles that are located deep in the bulk is compensated by several repetitions. We apply this scheme to a concrete preparation sequence, designed to prepare Chern insulators in atomic HH systems. Inspired by Ref. [88], we start from a trivial metal realized by loading non-interacting fermions in a square lattice at half-filling in the presence of a staggered potential [63]. We then ramp up the flux in the lattice to the value \(\phi\!=\!\pi/2\), while reducing the staggered potential, hence changing the topological nature of the bands: at the end of this sequence, the target lowest band has a Chern number \(C\!=\!1\). Due to the occupation of higher bands in the initial metallic state, the target (lowest) band remains perfectly filled during the whole duration of this sequence, despite the gap closing (\(C\!=\!0\!\to\!1\)). The (irregular) band populations, obtained at the end of this sequence, are shown by blue dots in Fig. 4(b). Our aim is to remove atoms from higher bands, while leaving the lowest Chern band (\(C\!=\!1\)) almost perfectly filled, in view of forming a CI in the system. To achieve this goal, we now apply our vacuum-cleaning protocol by dynamically tuning the reservoir energy \(\epsilon_{R}\). During each cycle of duration \(\tau\), we vary \(\epsilon_{R}(t)\) with a saturation function [63], using a large initial value \(\epsilon_{R}^{i}=4\). At the end of this first cycle, \(\epsilon_{R}(t)\) reaches the value \(\epsilon_{R}^{f}\!=\!-1.14\), which is located right below the first excited Bloch band of the system. After lowering the reservoir to the final value \(\epsilon_{R}^{f}\), we then quickly lift it up until it becomes effectively decoupled from the system; we then empty the reservoir and complete one cycle. We then repeat this cleaning sequence, but for the sake of efficiency, we progressively increase the final value \(\epsilon_{R}^{f}\) at each cycle to properly address all the higher bands [63]. We note that this cleaning scheme can be understood through a simplified 3-level toy model [inset of Fig. 4(a)], which can serve as a guide in view of optimizing the control parameters [63]. Figure 4(b) demonstrates the efficient depletion of the excited bands (and the resulting decrease of entropy) as a function of the cycle number \(n_{\mathrm{cyc}}\). In this process, the bulk states of the lowest band remain almost perfectly filled, and we find that a satisfactory CI ground state is formed after 16 cycles of duration \(\tau\!=\!90\). We plot Figure 4: Preparing a Chern insulator by using the lift-removal strategy. (a) Energy spectrum of the HH model coupled to a trivial reservoir. Here, we partition a lattice of size \(20\times 20\) into a target \(12\times 12\) system (central box) and a surrounding reservoir. \(E_{F}\) denotes the Fermi energy. We set a flux \(\phi\!=\!\pi/2\) in the system only. 
Inset: Simplified three-level model; \(\langle E_{0}\rangle\) and \(\langle E_{1}\rangle\) denote representative energies of the lowest two bands. (b) Population in HH eigenstates \(\rho_{j}\) for different \(n_{\mathrm{cyc}}\). Inset: the HH orbital entropy versus \(n_{\mathrm{cyc}}\). We set \(J_{R}\!=\!0.15\) and \(\tau\!=\!90\) per cycle. (c) The local Streda marker of the evolved state at \(n_{\mathrm{cyc}}\!=\!16\), as a function of the site index along the middle row. Inset: the corresponding spatial density distribution of the evolved state. (d) The local marker as a function of \(\tau\) for different cycles \(n_{\mathrm{cyc}}\). The local marker is averaged over a disk at the center with a radius of \(r\!=\!2\). The error bars denote the standard error of the regression slope used to extract \(C_{\mathrm{str}}\). the local (single-site) Streda marker in Fig. 4(c), which confirms the topological nature of the bulk, \(C_{\mathrm{str}}\!\approx\!1\). We plot \(C_{\mathrm{str}}\) as a function of the ramping time per cycle for different \(n_{\mathrm{cyc}}\) in Fig. 4(d). This shows that an efficient cleaning is reached for \(n_{\mathrm{cyc}}\!\approx\!12\) cycles of duration \(\tau\!\gtrsim\!200\), and for \(n_{\mathrm{cyc}}\!\approx\!16\) cycles of duration \(\tau\!\gtrsim\!50\). Concluding remarks.This work explored different possibilities offered by the design of tunable boxes in cold-atom experiments, setting the focus on the realization of topological states. This approach offers substantial advantages: it relies on the preparation of a simple initial state in the reservoir and the ability to dynamically tune the latter's energy relative to the system region. In this sense, our open-system approach does not require any fine-tuning nor complicated time-dependence of the system parameters, and it is readily applicable to create a broad class of (bosonic or fermionic) many-body states of interest, including exotic Mott insulators and antiferromagnetic states in Hubbard-type models. While we considered a spatial separation between the system and reservoir regions on the 2D plane, we note that a double-layer configuration could also be envisaged to further enhance the transfer of particles between the two regions; this could be realized using a bilayer optical lattice or by exploiting two laser-coupled internal states of an atom. Finally, it would be interesting to combine the injection and cleaning schemes presented in this work, in view of realizing large FCI states or to explore quantum thermodynamics. _Acknowledgments._ The authors thank Julian Leonard, Yanfei Li, Nir Navon, Cecile Repellin, Raphael Saint-Jalm, Perrin Segura, Boye Sun, Amit Vashishht and Christof Weitenberg for discussions. J.D. acknowledges the support of the Solvay Institutes, within the framework of the Jacques Solvay International Chairs in Physics. Work in Brussels is also supported by the FRS-FNRS (Belgium), the ERC Starting Grants TopoCold and LATIS, and the EOS project CHEQS. M.A. and A.E. acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) via the Research Unit FOR 2414 under Project No. 277974659. M.A. also acknowledges funding from the DFG under Germany's Excellence Strategy - EXC-2111 - 390814868.
2310.19914
Efficient entanglement purification based on noise guessing decoding
In this paper, we propose a novel bipartite entanglement purification protocol built upon hashing and upon the guessing random additive noise decoding (GRAND) approach recently devised for classical error correction codes. Our protocol offers substantial advantages over existing hashing protocols, requiring fewer qubits for purification, achieving higher fidelities, and delivering better yields with reduced computational costs. We provide numerical and semi-analytical results to corroborate our findings and provide a detailed comparison with the hashing protocol of Bennet et al. Although that pioneering work devised performance bounds, it did not offer an explicit construction for implementation. The present work fills that gap, offering both an explicit and more efficient purification method. We demonstrate that our protocol is capable of purifying states with noise on the order of 10% per Bell pair even with a small ensemble of 16 pairs. The work explores a measurement-based implementation of the protocol to address practical setups with noise. This work opens the path to practical and efficient entanglement purification using hashing-based methods with feasible computational costs. Compared to the original hashing protocol, the proposed method can achieve some desired fidelity with a number of initial resources up to one hundred times smaller. Therefore, the proposed method seems well-fit for future quantum networks with a limited number of resources and entails a relatively low computational overhead.
André Roque, Diogo Cruz, Francisco A. Monteiro, Bruno C. Coutinho
2023-10-30T18:28:09Z
http://arxiv.org/abs/2310.19914v4
# Efficient entanglement purification based on noise guessing decoding ###### Abstract In this paper, we propose a novel bipartite entanglement purification protocol built upon hashing and upon the guessing random additive noise decoding (GRAND) approach recently devised for classical error correction codes. Our protocol offers substantial advantages over existing hashing protocols, requiring fewer qubits for purification, achieving higher fidelities, and delivering better yields with reduced computational costs. We provide numerical and semi-analytical results to corroborate our findings and provide a detailed comparison with the hashing protocol of Bennett et al. Although that pioneering work devised performance bounds, it did not offer an explicit construction for implementation. The present work fills that gap, offering both an explicit and more efficient purification method. We demonstrate that our protocol is capable of purifying states with noise on the order of 10% per Bell pair even with a small ensemble of 16 pairs. The work explores a measurement-based implementation of the protocol to address practical setups with noise. This work opens the path to practical and efficient entanglement purification using hashing-based methods with feasible computational costs. Compared to the original hashing protocol, the proposed method can achieve some desired fidelity with a number of initial resources up to one hundred times smaller. Therefore, the proposed method seems well-fit for future quantum networks with a limited number of resources and entails a relatively low computational overhead. ## I Introduction Entanglement is considered a valuable resource for numerous applications, spanning from secure communication and scalable quantum networks [1, 2, 3] to precision measurement and timing; it is a versatile resource for various quantum technologies [4], including quantum key distribution [5], state teleportation [6], distributed quantum computation [7, 8] and distributed sensing and metrology [9]. In experimental settings, notable progress has been made in establishing entanglement between different components, as evidenced by successful demonstrations in various systems, including trapped ions [10, 11, 12], NV centers [13, 14], neutral atoms [15, 16], and superconducting circuits [17, 18]. However, entanglement is susceptible to several detrimental factors. Foremost among these is decoherence, resulting from interactions with the environment or thermal fluctuations that cause a quantum system to lose its coherent superposition, ultimately leading to the breakdown of entanglement. Quantum errors, resulting from imperfect gates or measurements, can also disrupt the intricate entangled states [19, 20, 4]. It is worth noting that the infidelity of the generated Bell pairs currently stands at approximately 10%, whereas the noise stemming from local gates and measurements is considerably lower, often falling below 1%. Unless actively corrected, the accumulation of these error processes will inevitably destroy any initial entanglement. Purification protocols (PP) are a possible solution for quantum networks; they have been extensively studied in the context of quantum repeaters [21, 22] and are a crucial component of entanglement routing protocols [23, 24, 25]. Bipartite entanglement purification encompasses a broad category of techniques aimed at acquiring high-fidelity copies of Bell states that are jointly held by two parties [26, 27, 28, 29, 30, 31, 32]. 
This is achieved by using local quantum operations and classical communication (LOCC) on an initially shared ensemble of entangled particles. Purification protocols are designed to operate on two or more ensembles of noisy entangled quantum states. The primary goal is to generate a reduced number of states with higher fidelity. Purification protocols can be repeatedly applied to the refined states in order to achieve states with even higher levels of purification. Protocols such as the BBPSSW [26, 27] or the DJMPS [28], which fall into the category of recurrence protocols [32, 27], achieve this only probabilistically. Although the recursive application of these protocols leads to convergence towards maximally entangled states, their probabilistic nature requires multiple executions to successfully obtain a purified state. Since a single successful application only results in marginal improvements to the state's fidelity, the need to repeat these protocols several times to achieve sufficiently purified states leads to an exponential increase in the resources required. Another approach to purification is provided by hashing protocols [33, 30, 26, 31]. Hashing protocols operate on a large ensemble of Bell pairs to deterministically produce a smaller number of purified pairs, whose fidelity can be arbitrarily high. However, traditional hashing protocols can be highly computationally demanding. Moreover, they often require hundreds of pairs to achieve purification and thousands of pairs to become efficient. In this work, we introduce a novel one-way bipartite purification protocol, inspired by the recently developed decoding technique for quantum error correction codes (QECCs) known as quantum guessing random additive noise decoding (QGRAND) [34; 35]. QGRAND was initially proposed by Cruz et al. [34] for quantum random linear codes and later has been used to decode other QECCs in [35]. QGRAND takes advantage of the noise statistics and provides an efficient decoding process. Employing a hashing-based approach, our protocol represents a substantial advancement over the hashing protocol [26], with the following distinct advantages: i) decreased qubit demand for purification, ii) achievement of heightened fidelities for equal initial ensembles (maintaining equal qubit count and noise conditions), iii) augmented yields for smaller ensembles, iv) attainment of computationally feasible costs. Hashing protocols, while efficient, encounter a significant practical limitation: their viability diminishes in real-world scenarios with noisy local operations and measurements. Even minimal noise disrupts these protocols, due to global operation requirements and the cumulative entropy increase from noisy operations. Building upon the concepts introduced in [36; 37], we also put forward a measurement-based [38; 39] version of our protocol and provide numerical results regarding the protocol's potential noise tolerance and its impact on the performance. This paper is organized as follows. In Section II we introduce a classical error decoding procedure along with its quantum equivalent and explain how a purification protocol can be obtained from a quantum error correction code. In Section III our purification protocol is presented and some results about its performance are provided. Section IV is devoted to establishing a comparison between our protocol and the hashing protocol. In Section V we present a measurement-based version of our protocol before conclusions are presented in Section VI. 
## II Background ### QGRAND While traditional decoding algorithms of classical error correction codes primarily concentrate on detecting error patterns through codeword analysis, a recent innovative proposal has surfaced that focuses on "decoding" the noise; the algorithm was named GRAND (guessing random additive noise decoding) [40]. This algorithm redirects the decoding process toward discovering the error pattern that affected a transmitted codeword rather than focusing on the codebook. Consider a finite alphabet of symbols \(\mathcal{A}\). Let \(\mathcal{C}\) be a discrete communication channel with a block of \(n\) input symbols denoted by \(X^{n}\), which are subjected to a noise block \(N^{n}\), also composed of \(n\) symbols. Denoting the action of the channel on the input block by \(\oplus\), the output block of \(n\) symbols is \(Y^{n}=X^{n}\oplus N^{n}\). Assuming that the channel's action is invertible, let \(\ominus\) denote the inverse function, so that the inputs can be written as \(X^{n}=Y^{n}\ominus N^{n}\). Suppose that this channel is employed for communication between a sender and a receiver, both of whom share a codebook \(\mathcal{B}^{(n,M_{k})}\) containing \(M_{k}\) codewords, each of which is made of \(n\) symbols taken from the alphabet \(\mathcal{A}\). Each particular random set of \(n\) symbols forming a block \(N_{i}^{n}\) is said to be a _noise pattern_, which occurs with some known probability \(P(N_{i}^{n})\), according to the noise statistics. GRAND operates by successively testing noise patterns (in decreasing order of their probability) until finding one that generates a valid codeword, that is: \(X^{n}=Y^{n}\ominus N_{i}^{n}\), such that \(X^{n}\in\mathcal{B}^{(n,M_{k})}\). When this is true, then \(X^{n}=Y^{n}\ominus N_{i}^{n}\) is accepted as the decoded data. Otherwise, the algorithm moves to the next most likely error pattern. It was shown in [40] that this decoding procedure returns the maximum likelihood solution, i.e., \[b_{i}^{(n,M_{k})}=\underset{b_{i}^{(n,M_{k})}\in\mathcal{B}^{(n,M_{k})}}{\text{argmax}}\{P(N^{n}=Y^{n}\ominus b_{i}^{(n,M_{k})})\}. \tag{1}\] As mentioned, the probabilities for each noise sequence can be estimated using the specific noise statistics of the channel. Quantum GRAND (QGRAND) is a recently proposed decoder that adapted the classical GRAND to a QEC setup [34]. It allows one to decode quantum random linear codes (QRLC) with the distinct advantages of maximizing the use of information about the noise while providing a simple encoding/decoding process. QGRAND was defined for stabilizer codes and leverages noise statistics to establish a recovery procedure similar to the one used by GRAND. Consider an initial setup of some \(k\)-qubit state that one wants to protect against some source of noise. The encoding is generated by applying randomly selected two-qubit Clifford gates to the \(n\)-qubit system (\(k\) initial qubits plus \(n-k\) additional qubits in the state \(|0\rangle\)). The target pairs of such gates are also randomly chosen, assuming that all-to-all connectivity between the \(n\) qubits is possible. This encoding has been shown to be robust to depolarizing noise provided that the number of gates used is large enough. QGRAND can be implemented in slightly different ways. In the following, we describe the approach taken in the present work. Let \(\mathcal{N}\) be the noise statistics of the error source considered. 
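A toy version of the classical GRAND loop described above can be written in a few lines for a binary linear code, where codebook membership is checked with the parity-check matrix and, for a binary symmetric channel with error probability below 1/2, "decreasing likelihood" simply means increasing Hamming weight. The (7,4) Hamming code used below is an illustrative choice, not one of the codes considered in the paper.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the (7,4) Hamming code (illustrative choice)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(x):
    return not np.any((H @ x) % 2)

def grand_decode(y, max_weight=3):
    """Test noise patterns in decreasing likelihood order (increasing Hamming
    weight for a BSC) until y (-) pattern, i.e. XOR here, is a valid codeword."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in combinations(range(n), w):
            pattern = np.zeros(n, dtype=int)
            pattern[list(flips)] = 1
            candidate = y ^ pattern
            if is_codeword(candidate):
                return candidate, pattern
    return None, None

# Example: corrupt a codeword with a single bit flip and recover it
codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid Hamming codeword
assert is_codeword(codeword)
received = codeword.copy(); received[4] ^= 1
decoded, noise = grand_decode(received)
print("decoded:", decoded, " noise pattern:", noise)
```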
Since the circuit (comprising both the encoder and the stabilizers) is composed of Clifford gates, it is possible to classically simulate it in an efficient manner. By doing so, we can precompute a set \(\mathcal{N}_{P}\subseteq\mathcal{N}\) of error patterns. A set \(\mathcal{N}_{J}\subseteq\mathcal{N}\setminus\mathcal{N}_{P}\) of error patterns whose syndrome needs to be computed on the fly can also be considered, at the cost of some extra processing time. For each syndrome, only the most likely error pattern is stored, allowing one to establish a one-to-one correspondence between each syndrome and the set of correctable error patterns. Once the syndrome has been extracted, a correction attempt is made. The correction process should be reversible, since the encoding, syndrome measurement, and decoding procedures can fail. In that case, the applied correction is reversed and another correction procedure is tried. This process is repeated until an error pattern that leads to a valid codeword is found. ### Equivalence between Quantum Error Correction and One-Way Purification Protocols Both QEC and purification protocols (PP) are strategies for managing the challenges of noisy quantum communication, and they are intricately interconnected. This relation was first observed in the early exploration of entanglement purification in [26; 41], where a general methodology was introduced to construct QECC from one-way entanglement purification protocols. In cases where a code employs \(n\) physical qubits to encode \(k\) logical qubits (\(k<n\)), it becomes feasible to devise a purification protocol that makes use of \(n\) copies of two-qubit states and produces \(k\) pairs with enhanced fidelity. The remaining \(n-k\) pairs undergo measurement to reveal information about the obtained pairs [41; 42]. To understand the main idea, let us consider a noisy channel \(\mathcal{C}\), established between two parties, A (Alice) and B (Bob), to be used for distributing \(n\) perfect Bell pairs (\(|\phi^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\)). Although the construction can be generalized for any QEC code, we shall consider a stabilizer code for simplicity. For stabilizer codes, one can consider that the encoding and decoding operations consist of unitary operations \(U_{\text{enc}}=U\) and \(U_{\text{dec}}=U_{\text{enc}}^{-1}=U^{-1}\). In order to obtain a purification protocol, we start by noting that performing any unitary transformation (and this holds for any linear transformation) at B is equivalent to performing the transpose of that same transformation at A [26], that is, \[U_{A}^{T}\otimes I_{B}\;|\phi_{AB}^{+}\rangle^{\otimes n}=I_{A}\otimes U_{B}\;|\phi_{AB}^{+}\rangle^{\otimes n}. \tag{2}\] This allows the encoding operations to be performed locally by A and the decoding to be performed locally by B. The channel affects the ensemble with some noise, which we shall assume to be a Pauli string \(E\), and a mixed state \[\rho=\mathcal{C}\left((|\phi_{AB}^{+}\rangle\langle\phi_{AB}^{+}|)^{\otimes n}\right) \tag{3}\] \[=(1-p)(|\phi_{AB}^{+}\rangle\langle\phi_{AB}^{+}|)^{\otimes n}+pE(|\phi_{AB}^{+}\rangle\langle\phi_{AB}^{+}|)^{\otimes n}E^{\dagger}, \tag{4}\] is obtained. In order to purify \(\rho\), Alice starts by applying the encoding on her qubits, projecting them into the \(+1\) eigenspace defined by the code. Now Alice can perform measurements on \(n-k\) of her qubits to obtain information about the ensemble \(\rho\). Let \(\{P_{i}\}\) be the set of \(2^{n-k}\) projectors defined by the code. 
Since each projector \(P_{i}\) either commutes or anticommutes with any Pauli string, we have that \[(P_{i}\otimes I)(I\otimes E)|\phi^{+}\rangle^{\otimes n}=\pm(I\otimes E)(P_{ i}\otimes I)|\phi^{+}\rangle^{\otimes n}. \tag{5}\] Using Equation (2), we obtain that \[(P_{i}^{2}\otimes I)(I\otimes E)|\phi^{+}\rangle^{\otimes n}=\pm(I\otimes E)( P_{i}\otimes P_{i}^{T})|\phi^{+}\rangle^{\otimes n}. \tag{6}\] Hence, every time Alice applies a projector \(P_{i}\) on her qubits, Bob's qubits are projected through \(P_{i}^{T}\). After obtaining information about the measurements, Alice might restore her qubits to the \(+1\) eigenspace defined by the encoding, although this is not strictly necessary if one "adapts" the measurement information to the projection that was performed. The measurement information is transmitted to Bob through a classical channel, allowing him to adjust the correction operation required to return his \(n-k\) qubits to the code space. Finally, decoding operations are applied so that a state with higher fidelity is obtained. Thus, a purification protocol was obtained using only LOCC. This construction can be summarized as follows: 1. Generate \(n\) copies of a maximally entangled bipartite state and distribute them between A and B. 2. Apply locally on A the coding operation \(U^{T}\) and Pauli measurements on \(n-k\) of the \(n\) qubits. The information about the measurements is sent classically to B. 3. Extract the syndrome on B. Apply on B the correction operation determined from that measured syndrome. 4. Apply the decoding operation \(U^{-1}\) on B. In the end, one obtains \(k\) Bell pairs shared between Alice and Bob, while the remaining \(n-k\) pairs are discarded. The \(k\) purified pairs have higher fidelity than the initial \(n\) shared pairs. ### Noise modeling In order to evaluate the performance of the proposed protocol, we model both coherent and decoherence errors in bipartite states through a depolarizing channel. A **depolarizing channel**\(\mathcal{D}\) is a quantum channel that maps a quantum state onto a mixture of the original state and a maximally mixed state. If \(\rho\) represents the density matrix of some state, the action of \(\mathcal{D}\) on qubit \(i\) of the state \(\rho\) is naturally described by \[\mathcal{D}_{p}^{i}(\rho)=(1-p)\rho+\frac{p}{3}(X_{i}\rho X_{i}+Y_{i}\rho Y_{ i}+Z_{i}\rho Z_{i}), \tag{7}\] where \(p\) is the depolarization probability, that can be interpreted as an error probability, and \(X,Y\) and \(Z\) are Pauli matrices. Using this channel to model noise requires the following assumptions: i) the qubit errors that affect one qubit are independent from the errors that affect the remaining system, ii) single-qubit errors (\(X\), \(Y\) and \(Z\)) are equally likely and iii) all qubits belonging to the same system have the same error probability \(p\). Throughout this work, we will refer to noise of this form as local depolarizing noise (LDN). Applying a depolarizing channel to either the first or the second qubit of a Bell pair results in a state known as a Werner state, \(W_{F}\), [43] with fidelity \(F=\mathcal{F}(W_{F},|\phi^{+}\rangle)=\langle\phi^{+}|W_{F}|\phi^{+}\rangle=1-p\). 
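As a quick sanity check of this noise model, the sketch below applies the depolarizing channel of Equation (7) to one qubit of a Bell pair and verifies that the resulting state has fidelity \(F=1-p\) and the Werner form quoted next in Equation (8); it is an illustration, not code from the original work.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus.conj())

def depolarize_qubit2(rho, p):
    """Equation (7) applied to the second qubit of a two-qubit state."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        K = np.kron(I2, P)
        out = out + (p / 3) * K @ rho @ K.conj().T
    return out

p = 0.2
W = depolarize_qubit2(rho_bell, p)
F = np.real(phi_plus.conj() @ W @ phi_plus)
print(np.isclose(F, 1 - p))                     # fidelity of the resulting Werner state

# Werner form in terms of F (Equation (8)): (1-F)/3 * I + (4F-1)/3 * |phi+><phi+|
W_from_F = (1 - F) / 3 * np.eye(4) + (4 * F - 1) / 3 * rho_bell
print(np.allclose(W, W_from_F))                 # True
```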
This enables us to express this state in terms of its fidelity: \[W_{F} = \mathcal{D}_{p}^{2}(|\phi^{+}\rangle\langle\phi^{+}|)=\mathcal{D }_{p}^{1}(|\phi^{+}\rangle\langle\phi^{+}|) \tag{8}\] \[= \frac{1-F}{3}I+\frac{4F-1}{3}|\phi^{+}\rangle\langle\phi^{+}|,\] meaning that the obtained state can be interpreted as a classical mixture of the four Bell states. Notice that the qubit to which the channel is applied is irrelevant. Werner states hold significance in the examination of random two-qubit states, as the conversion of any bipartite mixed state to a Werner state is attainable through an irreversible process termed "twirl". This procedure involves the application of independent and random rotations from the \(\textit{SU}(2)\) group to each particle within the pair [44, 26]. ### Measurement Based Quantum Computation The measurement-based quantum computation (MBQC) [38, 39, 45] is a model of quantum computation where computations are performed through measurements on an initial resource state, rather than using gates. It provides a universal framework for performing quantum computations utilizing only single-qubit measurements. One of the most extensively studied and promising approaches in MBQC involves using as resource state a highly entangled state known as _the cluster state_. These states can be generated in lattices of qubits through Ising type interactions [38, 39]. This process generates a highly entangled state with the topology of a 2D square lattice, commonly referred to as a cluster state, as mentioned earlier. These cluster states are a specific type of state from a class of states that can be represented using graphs, known as graph states. To define a graph state, consider a graph \(G=(V,E)\) with a finite set of vertices \(V\) and edges \(E\in V\times V\). The graph state associated with \(G\) is defined as \[|G\rangle=\prod_{(a,b)\in E}CZ_{a,b}\left(\bigotimes_{a\in V}|+\rangle^{a} \right), \tag{9}\] with \(CZ_{a,b}\) representing the controlled-\(Z\) gate on qubits \(a\) and \(b\) and \(|+\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}\). Some graph operations such as vertex deletion and edge addition or deletion have a physical correspondence with operations on the graph state. Keeping this in mind, two-dimensional cluster states can be defined as graph states obtained from a graph with the topology of a square lattice. By systematically choosing the measurement sequence and bases, any arbitrary quantum circuit can, in principle, be simulated utilizing only single-qubit measurements on the cluster state [38]. This paradigm of quantum computation is especially relevant for the development of fault-tolerant quantum computation [46, 47, 48, 49]. Moreover, experimental realizations of multiparty purification protocols in optical lattices, aimed at improving the fidelity of cluster states, have already been presented [33, 50]. ## III Proposed Protocol ### Protocol Overview Following the construction proposed in Section II.2, we now present a purification protocol developed through an adaptation of QGRAND, which we will refer to as Purification GRAND (PGRAND). Let us assume an initial setup of \(n\) Bell pairs shared between Alice and Bob through a noisy quantum channel \(\mathcal{C}\). At the end of the protocol procedures, both parties will share \(k\) Bell pairs with higher fidelity. The protocol is depicted in Figure 1 and can be summarized by the following steps: 1. Apply a random encoding on Alice's \(n\) qubits. Share the encoding information with Bob. 2. 
Perform single-qubit measurements in the computational basis on \(n-k\) of Alice's \(n\) qubits, followed by the classical transmission of the measurement outcomes to Bob. 3. With the information about the encoding, simulate the stabilizer circuit and determine the stabilizers for the qubits which were initially entangled with the measured ones. Measure the syndrome using all the determined \(n-k\) stabilizers. 4. Update the obtained syndrome with the information received. 5. Use a noise guessing approach to identify the error pattern and apply the recovery procedure on the corrupted state. 6. Apply the decoding at B. Discard the measured pairs and keep the \(k\) remaining ones.

#### III.1.1 Encoding

The protocol starts by applying an encoding on Alice's qubits, composed of randomly selected two-qubit Clifford gates. To do so, one can generate a random tensor product of Clifford gates, transpose it, and apply it to the initial state. The information about the original tensor product has to be sent classically to Bob. Alternatively, both parties could agree in advance on the random encoding to be used. To obtain a highly performant code, \(0.14n\log^{2}(n)\) Clifford gates are required on average [34]. As most of the gates can be applied in parallel, the encoding has a circuit depth of \(\mathcal{O}\big{(}\log^{3}(n)\big{)}\).

#### III.1.2 Determining the stabilizers

After the qubits are generated and subjected to noise, the process of gathering information about the state is limited to local operations. To overcome this issue, we can still make use of local stabilizers on Bob's qubits, but it is required that the respective entangled qubits on Alice's side be measured. After some initial state \(|\psi\rangle_{I}\) has been encoded with a unitary transformation \(U\), the stabilizers of the encoded state \(|\psi\rangle_{F}=U|\psi\rangle_{I}\) can be obtained by evolving the initial stabilizers through the encoding, that is, if \(S_{I}\) is a stabilizer of the initial state, then \(S_{F}=US_{I}U^{-1}\) is a stabilizer of the final encoded state, since \[S_{F}|\psi\rangle_{F}=(US_{I}U^{-1})U|\psi\rangle_{I}=US_{I}|\psi\rangle_{I}=U|\psi\rangle_{I}=|\psi\rangle_{F}. \tag{10}\] Although the proposed protocol performs the measurements after the encoding operation, given Equation (2) it is possible to assume that, for those qubits that are being measured, the measurements on Alice's side are performed before the encoding operation on Bob's side. Hence, we can focus our attention on evolving the stabilizers of the measured states. If the initial state is a Bell pair, the operators \(S_{M=0}=I_{A}\otimes Z_{B}\) and \(S_{M=1}=I_{A}\otimes(-Z)_{B}\) stabilize locally, on Bob's side, the states obtained after a \(Z\)-measurement of a Bell pair, \(|00\rangle_{AB}\) and \(|11\rangle_{AB}\), respectively. Hence, each stabilizer can be evolved through \(U^{T}\), that is, we obtain the stabilizers \(S^{\prime}_{M=0}=I_{A}\otimes((U^{T})^{-1}ZU^{T})_{B}\) and \(S^{\prime}_{M=1}=I_{A}\otimes(-(U^{T})^{-1}ZU^{T})_{B}\). Since \(S^{\prime}_{M=1}=-S^{\prime}_{M=0}\), changing the stabilizers according to the measurement is equivalent to performing a classical bit-XOR operation of the syndrome with the measurement result. According to the Gottesman-Knill theorem Gottesman and Knill (1994); Gottesman (1996), for a quantum circuit of \(n\) qubits, updating the stabilizer specification requires \(\mathcal{O}(n)\) time for each Clifford gate in the circuit.
Since the encoding operation \(U\) is a stabilizer circuit of depth \(\mathcal{O}\big{(}\log^{3}(n)\big{)}\), determining the stabilizers requires \(\mathcal{O}(n\log^{3}(n))\) time.

#### III.1.3 Noise guessing and correction

Bob can efficiently simulate the encoding circuit and build a parity check matrix to map each error pattern to the respective syndrome. With the noise statistics, Bob can determine the most likely error associated with a syndrome, so that whenever that syndrome is measured, the recovery procedure selects and corrects that most likely error. Since the number of possible error patterns increases exponentially with the number of qubits for most error models, Bob might only be able to precompute the syndromes associated with a fraction of the total possible errors. The number of syndromes that are computed should be chosen according to the initial fidelity of the pairs, the number \(n\) of qubits, and the required final fidelity. This approach saves resources and makes the computation possible at the cost of a decrease in performance. The computation of the syndrome of each error pattern involves multiplying a \(2n\)-bit array representing the error with a \((2n)\times(n-k)\) matrix representing the stabilizers, requiring \(\mathcal{O}(n^{w})\) time. In practice, this procedure can be efficiently parallelized. When aiming to correct the \(N\) error patterns with a weight up to \(t\), we may take advantage of previously computed syndromes for lower-weight errors to achieve an \(\mathcal{O}(Nn)\) computation cost in obtaining the syndrome table. It is important to note that this step is the primary limiting factor with regard to the classical computational cost, just like in the case of the hashing protocol [26, 29], with the circuit simulation playing a negligible role in the overall cost. After mapping each error pattern to a syndrome, only the most likely error pattern associated with each syndrome needs to be stored, and the rest can be discarded. The number of possible error patterns is \(N\) and \(S=2^{(n-k)}\) is the number of available syndromes. Whenever a syndrome is extracted, all that is needed to identify the recovery procedure is a linear search on a table of size at most \(T=\min\left\{N,S\right\}\), which requires \(\mathcal{O}(Tn)\) space and time to store the syndromes and perform a search.

Figure 1: Quantum circuit for the PGRAND applied to \(n\) Bell pairs distributed between Alice (A) and Bob (B) in order to obtain \(k\) purified pairs. The error is described by a quantum channel \(\mathcal{C}\). In the circuit, the measurements performed by Alice are depicted as changing the ancilla qubits, but in reality only a virtual syndrome update needs to be done. The same goes for the recovery procedure: if it consists only of Pauli strings, then \(\mathcal{R}\) can be performed virtually (in software), without the need to apply extra gates.

#### III.1.4 Decoding

As outlined in the protocol's presentation, the decoding entails reversing the encoding operation and discarding the measured qubits. Nevertheless, since only Clifford gates are used in the decoding, a virtual execution prior to the recovery operation is also feasible. In that case, all that needs to be done is to make the appropriate adjustments to the recovery operation. Both the decoding (except for the discarding of pairs) and the correction involve Clifford gates, so all of this can be performed classically through software [53].
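A minimal sketch of this syndrome-table step is given below; the random binary parity-check matrix, the symplectic bit representation of Pauli strings, and the weight-ordered enumeration are simplifying assumptions made for illustration and do not reproduce the exact implementation of Refs. [34, 51].

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2                                       # toy sizes; real codes are larger
H = rng.integers(0, 2, size=(n - k, 2 * n))       # stand-in for the stabilizer matrix

def syndrome(error_bits):
    """Binary syndrome of a 2n-bit representation of a Pauli error."""
    return tuple((H @ error_bits) % 2)

def pauli_errors_up_to_weight(n, t):
    """Yield 2n-bit vectors of Pauli strings with weight <= t, in increasing
    weight, i.e. in decreasing likelihood for depolarizing noise."""
    for w in range(t + 1):
        for qubits in itertools.combinations(range(n), w):
            for paulis in itertools.product("XZY", repeat=w):
                e = np.zeros(2 * n, dtype=int)
                for q, P in zip(qubits, paulis):
                    if P in ("X", "Y"):
                        e[q] = 1                  # X component
                    if P in ("Z", "Y"):
                        e[n + q] = 1              # Z component
                yield e

# Keep only the most likely (lowest-weight) error pattern per syndrome.
table = {}
for e in pauli_errors_up_to_weight(n, t=2):
    table.setdefault(syndrome(e), e)

print(len(table), "syndromes mapped out of", 2 ** (n - k))
```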
Overall, for a single PGRAND application that uses \(n\) Bell pairs to obtain \(k\), the overhead per pair, that is, the number of qubits that are spent in the use of the protocol, is given by \(3(n/k-1)\) (due to the \(2(n-k)\) qubits that are required to be measured and the \(n-k\) qubits that are used as ancillas). For the syndrome extraction, each of the \(n-k\) stabilizer applications is composed by, on average, \(3n/4\) CNOT gates and \(7n/2\) single-qubit gates, requiring a total of \(\mathcal{O}\big{(}n^{2}\big{)}\) gates. ### Results Consider a setup of Werner states with initial fidelity \(F_{i}=1-p\), with \(p\) being the parameter of a depolarizing channel. Define \(\mathcal{W}:\mathcal{E}\rightarrow\mathbb{N}\) to be the map between each error pattern \(E_{i}\in\mathcal{E}\) and its weight. Let \(N_{w}\) be the number of possible error patterns with weight \(w\) and \(N_{\leq w}\) the number of possible error patterns with equal or less weight than \(w\). For depolarizing noise these quantities are given by \[N_{w}=\binom{n}{w}3^{w},\quad N_{\leq w}=\sum_{i=0}^{w}N_{i}. \tag{11}\] Let \(f_{\text{Bin}(n,p)}(w)\) be the binomial probability mass function. For each error pattern \(E_{i}\), we have that \[P(\{E_{i}\}|\mathcal{W}(E_{i})=w)=\frac{f_{\text{Bin}(n,p)}(w)}{N_{w}}=\Big{(} \frac{p}{3}\Big{)}^{w}(1-p)^{n-w}, \tag{12}\] from which we immediately conclude that \(P(\{E_{i}\})\geq P(\{E_{j}\})\Rightarrow\mathcal{W}(E_{i})\leq\mathcal{W}(E_ {j})\) for any \(E_{i},E_{j}\in\mathcal{P}^{n}\) (as long as \(p<3/4\)). The approach of correcting the most likely errors in this noise scenario prioritizes the correction of lower-weight errors. That is, for a set of error patterns that share the same syndrome, the protocol considers the error that has occurred to be the error pattern with the lowest weight. Under the assumption that any correctly identified error can be completely corrected, it is useful for the analysis of the protocol to define the correctable fraction of weight \(w\) as \[f_{w}=\frac{|\{E_{i}:\mathcal{W}(E_{i})=w\;,\;E_{i}\text{ is correctable}\}|}{|\{E_{i}: \mathcal{W}(E_{i})=w\}|}. \tag{13}\] Then, if degenerate scenarios are disregarded, the average correctable fraction is approximately [34] \[\langle f_{w}\rangle\simeq\frac{S}{N_{w}}e^{-\frac{N_{\leq(w-1)}}{\beta}}(1-e^ {-\frac{N_{w}}{\beta}})\quad\text{for }S\gg 1, \tag{14}\] Note that computing the syndromes associated with every possible error pattern has a high computational cost and it might be impossible even for relatively small values of \(n\). Thus we introduce a threshold parameter \(t\leq n\) that defines the maximum weight of the error patterns for which the syndromes are computed, implying that \(f_{w}=0,\quad\forall_{w>t}\). Now we can compute probability of error \(p_{e}\) of the protocol as follows \[p_{e}=1-\sum_{i=0}^{n}\langle f_{i}\rangle f_{\text{Bin}(n,p)}(i)=1-\sum_{i=0 }^{t}\langle f_{i}\rangle f_{\text{Bin}(n,p)}(i). \tag{15}\] This provides a useful lower bound for the achieved average fidelity, \(\langle F_{a}\rangle\), of the output pairs, as explained in Appendix A. Therefore, we say that the protocol achieves purification if one obtains that \(\langle F_{a}\rangle\geq 1-p_{e}>F_{i}\). The efficiency of a protocol is determined by its yield, which is normally defined as the ratio between the number of purified pairs with fidelity arbitrarily close to unity and the initial number of pairs required by the protocol. 
Such a definition would imply that many protocols would have a zero yield, including the hashing protocol when employed for finite-sized ensembles. Therefore, we shall consider a relaxed version of this definition: given some target fidelity \(F_{t}\), we consider the yield to be the ratio between the number of obtained states with fidelity greater than \(F_{t}\) and the number of initial noisy copies. To assess the accuracy of the results obtained using Equation (14), the random encoding was simulated by generating a random parity check matrix, and the syndromes for each considered error pattern were computed using the techniques explained in Ref. [51]. This was performed considering 32 and 128 Bell pairs, affected by 1% LDN. The results, shown in Figure 2, were obtained by performing twenty Monte Carlo simulations for each yield value considered; each data point represents the average result of these simulations. Given the computational complexity and hardware constraints associated with conducting these extensive simulations, the subsequent findings in this section are derived from the analytical approximations. The numerical results obtained using these expressions for various values of \(n\), \(p\), and \(t\) can be found in Appendix C. The approach of correcting the most likely errors implies that, in order to achieve purification, at least all the error patterns with weight up to \(np\) must be considered by the protocol. The capacity (i.e., the highest rate or yield one can possibly obtain using a quantum error correction code and PP) of the depolarizing channel for non-degenerate stabilizer codes is upper bounded by \[\frac{k}{n}<1-p\log_{2}(3)-H(p), \tag{16}\] where \(H(p)=-p\log_{2}(p)-(1-p)\log_{2}(1-p)\) is the binary entropy function [54]. This bound is known as the quantum Hamming bound, and it can be exceeded by degenerate codes [55]. Although the capacity of the depolarizing channel has already been determined [56, 57], this bound provides a more manageable expression for evaluating the performance of a stabilizer code in a setup that involves a large number of channel uses. Using arguments similar to those in [58], one can prove that the yield of our protocol is upper bounded by this expression. However, it is worth evaluating how close the yield requirements are to that bound. Assuming an initial fidelity \(F_{i}\), Figure 3 illustrates the maximum yield at which an error probability below a specified threshold can be achieved. These results are derived without imposing any constraints on the weight of the errors considered. Thus, the number of available syndromes emerges as the sole limiting factor influencing the outcomes. The most significant observation from Figure 3 is the initial growth in the achievable yield as the number of pairs increases up to a few hundred. Notably, by employing the protocol on ensembles with a few more qubits, much superior efficiencies can be obtained.

Figure 2: Simulation results of the protocol probability of error \(p_{e}\) as a function of the yield, for a) 32 and b) 128 Bell pairs, assuming 1% of LDN. While in plot a) we simulated the procedure for errors with a weight up to \(t=5\), in b) we restricted it to \(t=4\). The larger white dots represent the values obtained by combining expressions (14) and (15). The probability of error from the simulations closely agrees with the values obtained via the theoretical expressions when a relatively large number of gates is used for the encoding (120 gates for plot a) and 1000 for plot b)).
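The white theory points in Figure 2 follow from Equations (11)–(15); the sketch below evaluates them, under the assumption (made here for illustration) that the scale parameter \(\beta\) in Equation (14) equals the number of available syndromes \(S=2^{n-k}\).

```python
import numpy as np
from math import comb

def error_probability(n, k, p, t):
    """Protocol error probability from Eqs. (11)-(15) for n pairs, k outputs,
    depolarizing parameter p and maximum guessed error weight t. Assumes the
    scale parameter beta of Eq. (14) is the number of syndromes S = 2**(n-k)."""
    S = 2.0 ** (n - k)
    N = np.array([comb(n, w) * 3 ** w for w in range(n + 1)], dtype=float)      # Eq. (11)
    N_leq = np.cumsum(N)
    f_bin = np.array([comb(n, w) * p ** w * (1 - p) ** (n - w) for w in range(n + 1)])
    covered = 0.0
    for w in range(min(t, n) + 1):
        N_below = N_leq[w - 1] if w > 0 else 0.0
        f_w = (S / N[w]) * np.exp(-N_below / S) * (1 - np.exp(-N[w] / S))        # Eq. (14)
        covered += f_w * f_bin[w]                                                # Eq. (15)
    return 1 - covered

# Setting comparable to Figure 2(a): 32 pairs, 1% local depolarizing noise, t = 5.
for k in (1, 4, 8):
    print(f"k = {k}: p_e ~ {error_probability(32, k, 0.01, 5):.2e}")
```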
Figure 3: Maximum achievable yield to purify \(n\) Bell pairs with an initial fidelity \(F_{i}\) with a probability of error inferior to \(p_{e}\). The initial fidelity is equal to a) \(F_{i}=0.90\), b) \(F_{i}=0.95\), c) \(F_{i}=0.975\) and d) \(F_{i}=0.99\). For each graph, the blue line represents the maximum yield at which purification is achievable. The dashed red line represents the bound given by Equation (16). Applying the protocol to ensembles with more than one hundred pairs allows for a substantial increase in the efficiency of the protocol.

However, using more than five hundred pairs leads to only marginal changes in the yield required to obtain the same error probabilities, since in this regime the yield is already close to the capacity of the channel. This observation may have critical implications for the realistic implementation of the protocol, as the challenge of generating and manipulating larger qubit ensembles could pose a substantial obstacle. Another noteworthy aspect is the minimal difference in the required yields for achieving pairs with varying error probabilities. This is evident from the proximity between the curves in each plot, indicating that a much more purified state can be obtained by slightly sacrificing the protocol's yield.

### Computational limitations

In some situations, it may prove infeasible to account for error patterns that are extremely unlikely, such as those with very high weight, when the noise model is a Pauli channel. Figure 4 showcases the regimes that are more realistic to consider. As we limit the correction process to only account for error patterns with weight at most \(t\), PGRAND's performance may be degraded. In Figure 5 it is possible to see the minimum value of fidelity needed to achieve purification, \(F_{\text{min}}\), according to the number of pairs and the constraint imposed on the value of \(t\). If no constraints were imposed, this value would solely decrease with the number of pairs, since in this scenario the only limiting factor is the number of available syndromes. However, following Section III.1.3, if one limits the number of error patterns considered, a saturation point is reached. In this scenario, the number of possible syndromes surpasses the number of considered error patterns, and no advantage is obtained by having more syndromes. As the number of pairs increases, errors with greater weight become more likely, causing an increase in the error probability and the value of \(F_{\text{min}}\). It is also possible to determine a bound for the value of \(F_{\text{min}}\) regardless of the number of pairs, following reasoning similar to that used in Ref. [26]. Indeed, given that the yield of the protocol is bounded by Equation (16), a positive yield can only be obtained as long as \(p<0.1893\), that is, \(F_{\text{min}}>0.8107\).

## IV Comparison of approaches

A similar approach to purification is taken by the hashing protocol [26], a one-way entanglement purification protocol that operates on a large ensemble of \(n\) noisy entangled pairs.
Representing by \(W\) the density matrix of the initial ensemble, the hashing protocol is capable of distilling a smaller number of pairs \(k\approx n(1-S(W))\), where \(S(W)\) is the Von Neumann entropy, with fidelity arbitrarily close to unity in the asymptotic regime where \(n\rightarrow\infty\). The protocol consists of \(n-k\) rounds where, at each round, one of the parties performs some unitary operations and one of the qubits is measured and sacrificed to reveal information about the system, with each measurement revealing one bit about the unmeasured pairs. The result of the measurement is sent classically to the other party, which, by measuring the corresponding qubit, obtains information about the parity of the ensemble.

Figure 4: Contour plot of the number of possible error patterns with weight equal or inferior to \(t\) on an ensemble of \(n\) Bell pairs. Here it is possible to see that the number of error patterns increases exponentially with their weight, but sub-exponentially with the size of the ensemble. The yellowish zone marks the frontier of what is computationally feasible with current standards.

Figure 5: Minimum initial fidelity \(F_{\text{min}}\) required to achieve purification as a function of the number of pairs in Werner form when attempting to correct errors with a weight up to \(t\in\{3,5,7,9,12\}\). These results were obtained by considering that only one purified Bell pair is obtained (\(n\)-to-one purification). If no constraints were imposed on \(t\), the value of \(F_{\text{min}}\) would strictly decrease with the number of pairs. However, if one limits the number of precomputed syndromes, a saturation point is reached.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \(t\) & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline \(n\) & 30 & 35 & 40 & 45 & 50 & 56 & 61 \\ \hline \(F_{\text{min}}\) & 0.8695 & 0.8642 & 0.8601 & 0.8578 & 0.8542 & 0.8512 & 0.8499 \\ \hline \end{tabular} \end{table} Table 1: Minimum fidelity to achieve purification and respective number of pairs required as a function of the parameter \(t\).

For a small parameter \(\delta>0\), the protocol attempts to correct errors that belong to the following set \[T_{\delta}=\bigg{\{}E_{i}:\bigg{|}-\frac{1}{n}\log_{2}(P(\{E_{i}\}))-S(W)\bigg{|}<\delta\bigg{\}}. \tag{17}\] While the original intention of the hashing protocol was to operate with an asymptotically large number of Bell pairs, our interest is to assess its performance with a finite (and possibly small) ensemble. For this reason, we consider the yield of the hashing protocol producing \(k\) pairs to be given by \(k/n\). Nonetheless, we present a comparison of performance between PGRAND and this protocol. The first challenge in establishing a fair comparison lies in the absence of a straightforward expression for the fidelity of the states produced by the hashing protocol. To enable the comparison, we rely on a bound derived in Ref. [59] concerning the average fidelity achieved by the hashing protocol when applied to Werner states with fidelity parameter \(F_{i}\). The bounding expression is as follows: \[\langle F_{a}\rangle \geq 1-2e^{\left\{\frac{-n}{a(F_{i})}\left[\left(g(F_{i})+\delta\right)\ln\left(1+\frac{\delta}{g(F_{i})}\right)-\delta\right]\right\}}-2^{\,n[S(F_{i})+\delta]-(n-k)},\] where \[S(F) =-F\log_{2}(F)-(1-F)\log_{2}\left(\frac{1-F}{3}\right),\] \[a(F) =\left|\log_{2}\left(\frac{1-F}{3}\right)\right|+S(F),\] \[g(F) =\frac{F\log_{2}^{2}(F)+(1-F)\log_{2}^{2}\left(\frac{1-F}{3}\right)-S^{2}(F)}{a(F)}. \tag{18}\]
This bound was established by bounding the probability of failure of the hashing protocol. The parameter \(\delta\) presents a challenge to our comparison, as its smaller values prompt the hashing protocol to attempt to correct a larger number of error patterns, as stipulated by Eq. (17). However, if the yield is not sufficiently small, an increase in \(\delta\) leads to a reduction in the fidelity of the output pairs (see Appendix B). This trade-off is evident in Figure 6, where we compare the fidelity achieved by PGRAND (without imposing a restriction on the number of corrected errors) to that achieved by the hashing protocol, for values of \(\delta\) similar to the ones used in other references [59, 26, 31]. Nevertheless, it is clear that PGRAND outperforms the hashing protocol, particularly when 128 pairs are used. For \(\delta=n^{-\frac{1}{3\delta}}\) the hashing protocol fails to achieve purification for the number of pairs considered.

Figure 6: Average fidelity achieved \(\langle F_{a}\rangle\) of the output pairs from the hashing protocol and PGRAND as a function of the yield, for 128 (a) and 256 (b) pairs. For \(\delta=n^{-\frac{1}{3\delta}}\) the hashing protocol is unable to achieve purification, meaning that considering lower values of \(\delta\) is pointless. For the PGRAND curve, no constraints were imposed on the value of \(t\). It is possible to observe the trade-off between the yield and output fidelity that comes from choosing different values for the parameter \(\delta\) of the hashing protocol.

Next, we compare how the initial pair count in the ensemble changes the yield required for purification (\(F_{a}\geq 1-p\)). The results in Figure 7 show that the hashing protocol demands a larger ensemble of pairs. Increasing the number of pairs enables the use of a greater value for the parameter \(\delta\), leading to higher yields. It becomes clear that, as \(n\) increases, the performance of the hashing protocol approaches that of our protocol. In the considered noise scenarios, using higher values of \(\delta\) would lead to similar yields for both protocols.

Figure 7: Yield required to achieve purification as a function of the number of pairs of the initial ensemble for the hashing protocol and PGRAND, for 1% (a) and 5% (b) of LDN. For the PGRAND curve, no constraints were imposed on the value of \(t\). For the hashing protocol, increasing the number of pairs allows the choice of a smaller \(\delta\). As the number of pairs increases, the differences between the efficiencies of the protocols become smaller.

To evaluate the minimum number of pairs, denoted \(n_{\min}\), required to achieve purification, the protocols were examined under the condition where only one purified pair is generated. Specifically, this corresponds to the case where \(k=1\), enabling the establishment of a limiting scenario. For the PGRAND evaluation, no constraints were imposed on the value of \(t\) (which corresponds to considering \(t=n\)). When setting \(k=1\) for the hashing protocol, it is common in the literature to use \(\delta_{\text{reference}}=\frac{1}{2}\left(\frac{n-1}{n}-S(F)\right)\) [31, 59]. However, this choice of \(\delta\) is suboptimal, as demonstrated in Figure 8, where we compare the values of \(n_{\min}\) for our protocol and the hashing protocol. Therefore, we include a comparison with the results obtained by the hashing protocol when selecting a \(\delta_{\text{optimal}}\) that maximizes the fidelity for each considered fidelity value \(F_{i}\).
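To make the role of \(\delta\) concrete, the following sketch evaluates the bound of Equation (18) for a few values of \(\delta\), using the expression exactly as written above; it is only an illustration of how the two terms of the bound trade off against each other, and values below zero simply mean the bound becomes vacuous for that choice of \(\delta\).

```python
import numpy as np

def hashing_fidelity_bound(n, k, F, delta):
    """Lower bound of Eq. (18) for the hashing protocol applied to n Werner
    pairs of fidelity F, producing k pairs, with parameter delta."""
    S = -F * np.log2(F) - (1 - F) * np.log2((1 - F) / 3)                           # S(F)
    a = abs(np.log2((1 - F) / 3)) + S                                              # a(F)
    g = (F * np.log2(F) ** 2 + (1 - F) * np.log2((1 - F) / 3) ** 2 - S ** 2) / a   # g(F)
    tail = 2 * np.exp(-(n / a) * ((g + delta) * np.log(1 + delta / g) - delta))
    leak = 2.0 ** (n * (S + delta) - (n - k))
    return 1 - tail - leak

# Example: 256 pairs of fidelity 0.99 (1% LDN), a single output pair.
for delta in (0.05, 0.1, 0.2):
    print(f"delta = {delta}: <F_a> >= {hashing_fidelity_bound(256, 1, 0.99, delta):.4f}")
```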
The results at some notable points of the curves are detailed in Table 2. There is a clear exponential relationship between \(F_{i}\) and \(n_{\text{min}}\), and as expected, the higher the initial fidelity the fewer initial pairs are required to achieve purification. For example, while for achieving purification with \(F_{i}=0.850\) it is required that \(n_{\text{min}}=60\), for \(F_{i}=0.818\) we have that \(n_{\text{min}}=1947\). Nevertheless, employing as few as 10 qubits enables the purification of states with \(F_{i}=0.95\), while increasing the qubit count to 16 lowers this threshold to \(F_{i}=0.90\). This is in striking contrast with the hashing protocol, which requires at least an ensemble with a size in the order of the hundred qubits for most noise regimes. For a practical scenario, it is crucial to assess the performance of both protocols under equivalent computational constraints. This entails examining how they fare when correcting a comparable number of errors. However, while PGRAND's performance improves with the number of errors it aims to correct (set by the parameter \(t\)), the hashing protocol doesn't necessarily follow the same trend. Increasing the number of errors under consideration (represented by a lower value of \(\delta\)) may result in a decline of its performance, as detailed in Appendix B, making direct comparisons challenging. Nonetheless, we define \(\delta^{\prime}(t)=\frac{1}{n}\log_{2}(N_{\leq t})-S(F_{i})\) and compare the fidelity achieved by the hashing protocol using \(\delta^{\prime}(t)\) with the fidelity attained by PGRAND when limiting the error correction to errors with weight up to \(t\). The choice of a suitable value for \(t\) is made to ensure a fair and balanced comparison with PGRAND under similar error correction conditions. For both protocols, we considered the scenario where only one purified pair is obtained (\(k=1\)). Therefore the results depicted in Figure 9 serve only to illustrate how both protocols perform under the same computational constraints. We observe that for a small number of pairs, PGRAND exhibits superior performance compared to the hashing protocol. As the number of pairs in the initial ensemble increases, the performance differences between the two approaches tend to diminish. However, it is crucial to consider the computational cost associated with larger initial ensembles. The hashing protocol becomes impractical due to this increased computational burden, while PGRAND maintains the advantage of achieving purification with a smaller number of pairs and manageable computational resources. PGRAND appears to offer a distinct improvement over the hashing protocol. However, its implementation demands more gates and qubits compared to the hashing protocol; while both require \(\mathcal{O}(n^{2})\) gates, the hashing protocol does not use ancilla qubits. Consequently, the advantage of using PGRAND might be diminished if one considers the noise introduced by each gate. ## V Measurement-based version of PGRAND During the analysis of the protocol we assumed that the operations were noiseless. Nonetheless, this is an unrealistic approach, and even a slight amount of noise can compromise the entire protocol, since the entropy of the ensemble is amplified by each noisy operation. 
Considering that \(\mathcal{O}(n^{2})\) two-qubit operations are required, the increase in entropy caused by noisy operations can easily exceed the information obtained from the measurements. This holds even for minor imperfections, posing a threat to the entire protocol.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{\(n_{\text{min}}\)} \\ \hline \(F_{i}\) & 0.83 & 0.85 & 0.90 & 0.95 & 0.99 \\ \hline PGRAND & 251 & 60 & 16 & 10 & 8 \\ \hline Hashing \(\delta_{\text{optimal}}\) & 2326 & 637 & 153 & 71 & 45 \\ \hline Hashing \(\delta_{\text{reference}}\) & 8116 & 2027 & 412 & 164 & 82 \\ \hline \end{tabular} \end{table} Table 2: Minimum number of pairs required to achieve purification for an ensemble with fidelity \(F_{i}\). While choosing an optimal value for \(\delta\) greatly reduces the number of required pairs, PGRAND can reduce this number by up to a factor of 10.

Figure 8: Number of pairs in Werner form with fidelity \(F_{i}\) required to achieve purification. While in both protocols there is an exponential increase in the number of required pairs with the amount of noise, the requirements of our protocol are substantially lower than those of the hashing protocol.

Figure 9: Fidelity achieved \(\langle F_{a}\rangle\) by the hashing protocol (H) and PGRAND (PG) as a function of the fidelity of Werner pairs \(F_{i}\), when constrained to correct the same number of errors.

Although Clifford gates can be implemented in a fault-tolerant manner, this problem can be circumvented by using a measurement-based approach. The main concept revolves around the preparation of a resource state using measurement-based methods, which incorporates the encoding, stabilizer measurement, and decoding of the protocol. Afterward, the Bell pairs are coupled to the resource state using Bell measurements, in a manner similar to what is carried out in Refs.
[61, 62, 63, 37, 36].

## VI Conclusion

In this paper, we present a purification protocol that represents a significant improvement over Bennett's hashing
protocol. We provide a detailed explanation of the protocol's construction concepts and the underlying principles, along with a comprehensive analysis of its performance across different ensemble sizes, noise regimes, and computational constraints. Our findings demonstrate that our protocol can purify states with an initial fidelity of 90% even with a relatively small ensemble of just 16 Bell pairs. While hashing protocols offer distinct advantages over other types of protocols due to their non-probabilistic nature (unlike recurrence protocols) and high performance as the initial ensemble size increases, they are constrained by the computational resources they require [29, 36, 59]. The requirement of such a small initial ensemble in our protocol not only reduces computational costs significantly but also paves the way for a more realistic implementation. This is especially relevant in practical scenarios where there is a preference for employing fewer qubits. Moreover, if one manages to obtain a larger ensemble of Bell pairs with sufficiently low levels of noise, high efficiencies can be obtained while maintaining a feasible computational cost. Thus, PGRAND promises to be especially relevant in low-entropy noise regimes. The assumption of noiseless gates can be overcome by using a measurement-based implementation of these protocols. Moreover, the general fault tolerance of a gate implementation of QGRAND is the subject of ongoing work. The extension of our protocol to multipartite entanglement is also an interesting question to study. Given that the Bell matrix identity can be generalized to GHZ states [66], a version of the protocol for these states should be conceivable. Furthermore, the performance of our protocol could be improved by considering its generalization to multi-level systems (qudits), since notable improvements were found for a bipartite qudit implementation of the hashing protocol [31]. It would also be worthwhile to study in more detail the feasibility of the measurement-based version of this protocol, and how the cluster-state size and complexity would influence the error of the resource states in a realistic implementation.

###### Acknowledgements.

Francisco Monteiro and Bruno Coutinho are grateful to Prof. Wolfgang Dür (University of Innsbruck) for insightful discussions about hashing-based purification. We thank FCT/MCTES (Portugal) for its support through national funds and, when applicable, co-funded EU funds under projects UIDB/50008/2020 and 2022.05558.PTDC. Diogo Cruz acknowledges the support from FCT through scholarship UI/BD/152301/2021.
2304.14411
Classifier for centrality determination with Zero Degree Calorimeter at the Cooling-Storage-Ring External-target Experiment
The Zero Degree Calorimeter (ZDC) plays a crucial role in determining centrality at the Cooling-Storage-Ring External-target Experiment (CEE) in the Heavy Ion Research Facility in Lanzhou (HIRFL). A Boosted Decision Trees (BDT) multi-classification algorithm is employed to classify the centrality of the collision events based on the raw features from ZDC such as the number of fired channels and deposited energy. The data from simulated $\rm ^{238}U$ + $\rm ^{238}U$ collisions at 500 $\rm MeV/u$, generated by the IQMD event generator and subsequently modeled through the GEANT4 package, is employed to train and test the BDT model. The results showed the high accuracy of the multi-classification model adopted in ZDC for centrality determination, which is robust against variations in different factors of detector geometry and response. The study demonstrates a good performance of the CEE-ZDC for determining the centrality in nucleus-nucleus collisions.
Biao Zhang, Li-Ke Liu, Hua Pei, Shusu Shi, Nu Xu, Yaping Wang
2023-04-15T09:48:25Z
http://arxiv.org/abs/2304.14411v1
Classifier for centrality determination with Zero Degree Calorimeter at the Cooling-Storage-Ring External-target Experiment ###### Abstract The Zero Degree Calorimeter (ZDC) plays a crucial role in determining centrality at the Cooling-Storage-Ring External-target Experiment (CEE) in the Heavy Ion Research Facility in Lanzhou (HIRFL). A Boosted Decision Trees (BDT) multi-classification algorithm is employed to classify the centrality of the collision events based on the raw features from ZDC such as the number of fired channels and deposited energy. The data from simulated \({}^{238}\mathrm{U}+{}^{238}\mathrm{U}\) collisions at 500 \(\mathrm{MeV}/\mathrm{u}\), generated by the IQMD event generator and subsequently modeled through the GEANT4 package, is employed to train and test the BDT model. The results showed the high accuracy of the multi-classification model adopted in ZDC for centrality determination, which is robust against variations in different factors of detector geometry and response. The study demonstrates a good performance of the CEE-ZDC for determining the centrality in nucleus-nucleus collisions. ZDC, Boosted Decision Trees, multi-classification, IQMD, centrality determination ## I Introduction The primary objective of heavy-ion collisions at different beam energies is to investigate strong interaction matter and comprehend the QCD phase diagram. The phase diagram provides information on the phase transition and critical point of the strongly interacting system, where hadron gases exist at lower temperatures and low baryon density, while at higher temperatures or densities, the hadronic boundary disappears, and confined quarks move freely throughout the system [1]. The Beam Energy Scan program of RHIC-STAR aims to approach the possible critical point from the high-energy side. Still, it is essential to study the phase diagram in the hadron phase and approach the critical point from the low-energy side [2; 3; 4]. The Cooling-Storage-Ring External-target Experiment (CEE) at the Heavy Ion Research Facility in Lanzhou (HIRFL), with its advanced spectrometer, provides significant opportunities to study the phase diagram at extremely high net baryon density levels with energies of several hundred AMeV [5]. The Zero Degree Calorimeter (ZDC), one of the sub-detector of CEE in the forward rapidity region, is designed to accurately determine the centrality and reaction plane of the collision events [6]. The collision events are typically classified into centrality classes representing certain fractions of the total reaction cross sections corresponding to specific intervals of impact parameter \(b\)[7]. Impact parameter \(b\) is an essential parameter to understand the initial overlap region of the colliding nuclei of collected data in heavy-ion collisions, which represents the distance between the nuclei centers in the plane transverse to the beam axis, determining the size and shape of the resulting medium. However, the impact parameter \(b\) is not directly measurable in experiments. To estimate centrality experimentally, the raw observables that scale monotonically with impact parameters could be used for classification according to centrality, e.g. the reconstructed tracks with central-barrel tracking detectors or the deposited energy in the forward calorimeters. 
Accurate centrality determination is a baseline for many physics analyses in heavy-ion collision experiments [8], particularly in the study of the search of observables sensitive to a possible phase transition or critical point by the analysis of fluctuations and correlations. In recent years, Machine Learning (ML) methods have gained significant attention for determining centrality class in heavy-ion collisions [8; 9]. Previous studies have treated centrality determination as a regression problem on impact parameters and utilized combined information from central tracking systems and forward calorimeters to train ML models. However, to avoid auto-correlation in physics analysis, this paper adopts a machine learning approach that solely utilizes raw experimental features from the forward calorimeter to determine centrality. We report the application of a multi-classification ML algorithm based on Boosted Decision Trees (BDT) as a centrality classifier using solely ZDC in \({}^{238}\mathrm{U}\) + \({}^{238}\mathrm{U}\) collisions at 500 \(\mathrm{MeV}/\mathrm{u}\) at the CEE. The ML inputs were generated using the Isospin dependent Quantum Molecular Dynamics (IQMD) generator [10]. Additionally, we present efficiency and purity measures related to the centrality determination performance of the ZDC with the model application. ## II CEE-ZDC The CEE, which adopts fixed-target-mode heavy-ion collisions, is the first large-scale nuclear experimental device operating in the GeV energy region in China. It is equipped with a set of sub-detectors, as shown in Fig. 1(a). The CEE detector system comprises a beam monitor, T0 detector [5], time projection chamber (TPC) [11], internal time-of-flight (iTOF) detector [12], a large superconducting dipole magnet, multiwire drift chamber (MWDC) [13], external time of-flight (eTOF) detector [14], and a zero-degree calorimeter (ZDC) [6]. The ZDC is centrally positioned at the end of the CEE, covering the pseudorapidity range of \(1.8<|\eta|<4.8\). The ZDC utilizes a symmetrical and fan-shaped layout, with 8 radial and 24 angular sections, and a maximum radius of 1 meter. The detector comprises trapezoidal modules equipped with uniform plastic scintillators that are coupled with a light guide and then connected to photomultiplier tubes (PMT) to convert charged particles into charge signals. To obtain a comprehensive signal, each module produces two charge signals for two dynodes of each PMT that are transmitted to two separate readout channels, resulting in a total of 384 (24 \(\times\) 8 \(\times\) 2) channels for the ZDC. The purpose of ZDC is to detect particle fragments in the forward rapidity region following semi-central and peripheral collisions, providing vital information for the precise reconstruction of the centrality and reaction plane of collision events [6, 15]. ## III Model training with simulated event The simulated data were generated by simulating \({}^{238}\)U + \({}^{238}\)U collisions at 500 MeV/\(u\) with the IQMD generator [10], and the generated particles were then transported through the apparatus using the GEANT4 package [16]. Determining centrality with only one forward-rapidity detector, like the ZDC, is challenging even when employing ML algorithms. 
Previous ML-based studies of centrality determination have relied on information from multiple subsystems within the detector, such as the tracks reconstructed from the central barrel detectors and the deposited energy in forward calorimeters, revealing a strong correlation between the centrality class and the observables. The CEE-ZDC is a non-tracking detector, and the number of spectator nucleons in a nucleus-nucleus collision is expected to be proportional to the deposited energy and the number of fired channels in the ZDC. However, the presence of a beam hole at the center of the ZDC and the limited detector acceptance result in a weak monotonic dependence between impact parameters and observables, as clearly illustrated in Fig. 2(a) for the number of fired channels and Fig. 2(b) for the deposited energy in the ZDC. Potential improvements in centrality determination can be achieved by utilizing data from the ZDC sub-rings in conjunction with the full ZDC as additional features in the ML task. Moreover, it may be advantageous to use the energy deposited in the ZDC ring by ring and the number of fired channels event by event, and to exploit all inherent correlations between modules. Fig. 3(a) displays the distribution of the number of fired ZDC channels per event in the impact parameter range \(7<b\leq 10\) fm, while Fig. 3(b) shows the distribution of the deposited energy in the ZDC rings per event in the impact parameter range \(0<b\leq 3\) fm. The complex pattern and non-trivial decision boundary among the classes of event centrality present an ideal opportunity to apply ML techniques. Boosted Decision Trees (BDT), a family of popular supervised learning algorithms for classification and regression problems, are extensively used to analyze data in high-energy-physics experiments. Extreme Gradient Boosting (XGBoost), one of the powerful BDT implementations based on the gradient boosting method, is adopted to solve the multi-classification problem for centrality determination in this work. The physics features used as inputs for the model training are the deposited energy in the full ZDC and in the ZDC sub-rings, as well as the number of fired channels in the ZDC. The simulated data is divided into 3 centrality classes based on the impact parameter intervals listed in Table I. The samples are split into training and test samples of equal size for each centrality class. A state-of-the-art machine learning hyperparameter optimization with Optuna is adopted to speed up the optimization and achieve the best performance of the training models [17].

Fig. 1: (a) CEE detectors schematic layout. (b) ZDC detector layout.

Fig. 2: (a) The number of fired channels in the ZDC as a function of the impact parameter. (b) The deposited energy in the ZDC as a function of the impact parameter.

Fig. 3: (a) Distribution of the number of fired ZDC channels per event in the impact parameter interval \(7<b\leq 10\) fm. (b) Distribution of the deposited energy in the ZDC rings per event in the impact parameter interval \(0<b\leq 3\) fm.

## IV Performance of the ML models

The machine learning model was applied to both the training and test sets to visualize the distributions of the ML output scores and check for consistency between the two sets. For classification with three centrality classes, the model generates three scores (\(p_{i}\)) representing the probability of belonging to each of the considered classes. By construction, the probabilities for each centrality class sum to one (\(\sum_{i=1}^{3}p_{i}=1\)).
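As an illustration of the training setup described in this section, the sketch below fits a three-class XGBoost model and computes the one-vs-one ROC AUC; the placeholder feature array (energy per ZDC ring, total energy, and number of fired channels) and the fixed hyperparameters are assumptions for demonstration only and do not correspond to the actual IQMD/GEANT4 samples or to the Optuna-tuned model.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder arrays standing in for the simulated events: one row per event,
# columns = deposited energy in each of the 8 ZDC rings, total deposited
# energy, and number of fired channels (10 features in total).
rng = np.random.default_rng(0)
X = rng.random((5000, 10))
y = rng.integers(0, 3, size=5000)        # 0: central, 1: semi-central, 2: peripheral

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y)

clf = xgb.XGBClassifier(
    objective="multi:softprob",          # three-class probability output
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
)
clf.fit(X_train, y_train)

proba = clf.predict_proba(X_test)        # per-event centrality probabilities
print("one-vs-one ROC AUC:", roc_auc_score(y_test, proba, multi_class="ovo"))
```

With features that actually correlate with the impact parameter, the returned probabilities can then be thresholded to trade efficiency against purity, as discussed in the following section.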
Fig. 4 illustrates the probability distributions of the central class (a) and peripheral class (b) for both the training and test sets. For each case, the probability distribution corresponding to the true class is close to unity, while the other two distributions are shifted toward zero. The probability density functions of the training and test samples for each centrality class show good agreement, indicating that the model is not overfitting. The Receiver Operating Characteristic (ROC) curve is commonly used to evaluate the performance of a classification model by plotting the True Positive Rate against the False Positive Rate for various threshold settings. The area under the ROC curve, known as the ROC AUC, provides a global measure of the model's performance, ranging from 0.5 (random classification) to 1 (perfect classification), independent of the threshold and class distribution [18]. However, for multi-class classification, the ROC curve cannot be directly defined, and the "One-vs-One" approach is used to compute the overall average of the individual ROC AUCs for each pair of classes. In this study, the ROC curves and ROC AUC values obtained on the test set are reported in Figure 5. The high final ROC AUC value of approximately 0.96 indicates that the BDT model is highly effective in determining centrality.

\begin{table} \begin{tabular}{|c|c|} \hline Centrality class & \(b\) interval [fm] \\ \hline Central & \(0\leq b\leq 3\) \\ Semi-Central & \(3<b\leq 7\) \\ Peripheral & \(7<b\leq 10\) \\ \hline \end{tabular} \end{table} TABLE I: The centrality classes with respect to the impact parameter \(b\) intervals.

Fig. 4: The probability distributions of belonging to the central class (a) and peripheral class (b) for both the training and test sets.

Fig. 5: ROC curves and AUC values for the different "One-vs-One" cases, shown with different line colors.

## V Efficiency and purity of the centrality classification

The performance of the centrality classification model was evaluated by calculating the efficiency and purity based on the ML output scores. Efficiency refers to the fraction of correctly classified events, while purity measures the fraction of events correctly classified for a particular centrality class out of all the events assigned to that class. The efficiency versus purity of the multi-classification model for each centrality class is shown in Fig. 6, where the red, green, and blue solid lines represent the central, semi-central, and peripheral classes, respectively. The peripheral class was found to be the most effectively classified, and the central class was found to be more challenging than the semi-central class in the higher efficiency region. The quantified values in Table II indicate that even at very high purity levels, the efficiency of the peripheral class is not significantly compromised, and both the central and semi-central classes also exhibit promising efficiency values at high purity. These results indicate that the ML-based event centrality determination utilized in the ZDC is effective. In addition, to evaluate the performance of the centrality determination with the ZDC, the effects of several factors related to the configuration of the ZDC in the simulated data were systematically investigated.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Efficiency Class & Central & Semi-Central & Peripheral \\ \hline Purity = 90\% & 67\% & 66\% & 97\% \\ Purity = 95\% & 41\% & 47\% & 94\% \\ Purity = 98\% & 11\% & 24\% & 93\% \\ \hline \end{tabular} \end{table} TABLE II: Efficiency and purity values for different centrality classes to the configuration of ZDC in the simulation data were systematically investigated. These factors included the thickness of the ZDC detector, hit efficiency, energy resolution, and heavy nuclei with de-excitation or without de-excitation (in the IQMD). The ZDC plastic scintillator thickness was varied from 1 cm to 4 cm, and the hit efficiency was varied with 90% and 95%. The energy deposited was also smeared with different sigma values of the Gaussian distributions. As shown in Fig. 7, the results indicated that the effect of these factors on the purity and efficiency of the centrality classification is minor. Among the tested factors, the ZDC detector thickness had the most significant impact, but even that effect is relatively small. In conclusion, it study suggests that the multi-classification adopted in ZDC is robust against variations in these factors, indicating the potential for reliable and accurate classification for the centrality with ZDC. ## VI Summary The study aimed to determine the centrality class in nucleus-nucleus collisions at the CEE-ZDC detector using a multi-classification model based on the XGBoost classifier. The ML model was trained and tested from simulation data from the IQMD event generator and then modeled through the GEANT4 package. The additional study examined various factors associated with the geometry and response of the ZDC detector, and the results indicate that the impact of these factors is minor, demonstrating the robustness of the XGBoost classifier in determining centrality. Future work may include improving the accuracy of centrality determination by incorporating regression tasks and exploring other machine learning algorithms. The study indicates the good performance of the CEE-ZDC for centrality determination in nucleus-nucleus collisions. ###### Acknowledgements. We thank Prof. Li Ou and Zhigang Xiao for generating IQMD data and fruitful discussions.
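As a pointer for readers who want to reproduce the "One-vs-One" ROC-AUC figure of merit quoted above, the following minimal sketch (ours, using placeholder arrays rather than the CEE-ZDC simulation data) shows how such a score can be computed with scikit-learn for three centrality classes.

```python
# Minimal sketch (not code from the CEE-ZDC analysis): "One-vs-One" ROC AUC for
# a three-class centrality classifier.  `y_true` and `proba` are placeholder
# data standing in for the true centrality labels (0: central, 1: semi-central,
# 2: peripheral) and the per-event class probabilities (rows summing to one)
# produced by an XGBoost multi-class model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=1000)                    # hypothetical labels
logits = rng.normal(size=(1000, 3)) + 2.0 * np.eye(3)[y_true]
proba = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# "ovo" averages the ROC AUC over every pair of classes, as described above.
auc_ovo = roc_auc_score(y_true, proba, multi_class="ovo")
print(f"One-vs-One ROC AUC: {auc_ovo:.3f}")
```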
2305.14738
Deformations of weighted homogeneous surface singularities with big central node
We prove Koll\'{a}r conjecture for weighted homogeneous surface singularities with big central node. More precisely, we show that every irreducible component of the deformation space of the singularity is parametrized by a certain partial resolution which is known as a $P$-resolution.
Jaekwan Jeon, Dongsoo Shin
2023-05-24T05:18:21Z
http://arxiv.org/abs/2305.14738v2
# Deformations of weighted homogeneous surface singularities with big central node ###### Abstract. We prove Kollar conjecture for weighted homogeneous surface singularities with big central node. More precisely, we show that every irreducible component of the deformation space of the singularity is parametrized by a certain partial resolution which is known as a \(P\)-resolution. Key words and phrases:Picture deformation, Weighted homogeneous surface singularity, P-resolution 2010 Mathematics Subject Classification: 14B07 ###### Contents * 1 Introduction * 2 Sandwiched surface singularities * 2.1 Sandwiched surface singularities * 2.2 Picture deformations * 2.3 Incidence matrices * 3 Deformations of cyclic quotient surface singularities * 3.1 P-resolutions * 3.2 Stevens's description * 3.3 Stevens to Incidence matrix * 3.4 P-resolutions to Incidence matrices * 4 Incidence matrices under the different sandwiched structure * 4.1.1 The \(P\)-resolutions * 4.1.2 The \(P\)-resolutions * 4.2 The \(P\)-resolutions * 4.3 The \(P\)-resolutions * 4.4 P-resolutions to Incidence matrices * 5 Deformations of weighted homogeneous surface singularities with big central node * 5.1 Weighted homogeneous surface singularities * 5.1 Weighted homogeneous surface singularities * 5.2 Incidence matrices of weighted homogeneous surface singularities * 5.3 An example ## 1. Introduction J. Kollar and N. I. Shepherd-Barron(K-SB [9]) proved that each irreducible component of the deformation space of a quotient surface singularity is parametrized by certain partial resolution, known as a _\(P\)-resolution_. Building on this result, J. Kollar([7]) introduced a conjecture stating that every irreducible component of the deformation space of a rational surface singularity is parameterized by a certain partial modification of the singularity, known as a _\(P\)-modification_. A \(P\)-resolution of a singularity \((X,p)\) is a partial resolution \(f:Y\to X\) such that \(Y\) has only singularities of class T and the canonical divisor \(K_{Y}\) of \(Y\) is \(f\)-relatively ample. A singularity of class T is a cyclic quotient surface singularity admitting a \(\mathbb{Q}\)-Gorenstein smoothing. Since every irreducible component of the deformation space of a rational surface singularity contains a smoothing, there is a natural map ###### Abstract We consider the \(P\)-resolution of a \(P \(M\) satisfies contain the combinatorial equations of cyclic quotient surface singularities. In some sense, the matrix \(M\) is a combination of combinatorial incidence matrices of cyclic quotient surface singularities with special restrictions. We prove that the matrix \(M\) must be one of the cases \(A\) or \(B\) because of the restrictions. We know that, for cyclic quotient surface singularities, every map in Figure 1 is bijective. Therefore if a combinatorial incidence matrix of a cyclic quotient surface singularity is given, then we can find the corresponding \(P\)-resolution. Since we have already observed that the matrix \(M\) is a combination of combinatorial incidence matrices of cyclic quotient surface singularities, we construct the corresponding \(P\)-resolution of the matrix \(M\) by combining the \(P\)-resolutions of the cyclic quotient surface singularities. Finally, we verify that \(\phi_{PI}(f)=M\) by applying MMP method of Park-Shin([14]). ### Acknowledgements This article is a revision of Ph.D dissertation of J. Jeon presented at Department of Mathematics, Chungnam National University, Daejeon, Korea in 2023. ## 2. 
Sandwiched surface singularities We will briefly review some definitions and theorems based on the work of M. Spivakovsky[15] and de Jong-van Straten ([4]). ### Sandwiched surface singularities A sandwiched surface singularity \((X,p)\) is a normal surface singularity admitting a birational morphism \(X\to\mathbb{C}^{2}\). Since a sandwiched surface singularity is rational, it is characterized by its dual resolution graph: **Definition 2.1** (Spivakovsky [15]).: A weighted graph is _sandwiched_ if the graph contracts to a smooth point by properly adding \((-1)\)-nodes and contracting them. **Example 2.2**.: Consider the following weighted graph. If we attach two \((-1)\)-nodes on the western \((-3)\)-node, three \((-1)\)-nodes on the eastern \((-4)\)-node, four \((-1)\)-nodes on the southern \((-5)\)-node and two \((-1)\)-nodes on the central \((-6)\)-node, then the graph contracts to a smooth point. In [15, Proposition 1.11], Spivakovsky proved that the dual resolution graph of a sandwiched surface singularity is sandwiched. And conversely, for a given sandwiched graph, there exists a sandwiched surface singularity such that its dual resolution graph is the given graph. In a different aspect, T.de Jong and D.Van Straten show that every sandwiched surface singularity can be obtained from a plane curve singularity with weights assigned to the curves. For a plane curve germ \(C=\bigcup C_{i}\subset(\mathbb{C}^{2},0)\), we consider the minimal good resolution of \(C\). We track the multiplicities of strict transformations of \(C_{i}\) at infinitely near \(0\) for each blow-up of \(C_{i}\) to obtain the minimal good resolution except the final one. We denote the sum of the multiplicities by \(M(i)\). We then define a decorated curve: **Definition 2.3** (de Jong-van Straten [4, Definition 1.3]).: _A decorated curve is a pair \((C,l)\) such that_ 1. \(C=\bigcup\limits_{i=1}^{s}C_{i}\subset(\mathbb{C}^{2},0)\) _is a plane curve singularity at the origin_ 2. _a function_ \(l:T=\{1,\cdots,s\}\to\mathbb{Z}\) _assigning a number_ \(l(i)\) _to_ \(C_{i}\)__ 3. \(l(i)\geq M(i)\)__ The function \(l\) is the information of blow-ups: **Definition 2.4** (de Jong-van Straten [4], Definition (1,4)).: Let \((C,l)\) be a decorated curve. 1. The modification \(\widetilde{Z}(C,l)\to\mathbb{C}^{2}\) determined by \((C,l)\) is obtained from the minimal embedded resolution of \(C\) by \(l(i)-M(i)\) consecutive blow-ups at the \(i\)-th branch of \(C\). 2. The analytic space \(X(C,l)\) is obtained from \(\widetilde{Z}(C,l)\backslash\widetilde{C}\) by blowing down all exceptional divisors not intersecting \(\widetilde{C}\subset\widetilde{Z}(C,l)\). If \(l(i)\geq M(i)+1\), then the exceptional set not intersecting \(\widetilde{C}_{i}\) is connected([4]) and therefore we get one sandwiched surface singularity by blowing down. **Example 2.5**.: Let \(C\) be the ordinary cusp given by the equation \(y^{2}-x^{3}=0\). The following are the modifications \(\widetilde{Z}(C,l)\) for \(l=1,2,3,4\). The red lines are \((-1)\)-curves and the blue lines are exceptional curves will be contracted. We see that \(X(C,1)\) and \(X(C,2)\) have no singularity, \(X(C,3)\) has two singularities and \(X(C,4)\) has a sandwiched surface singularity. **Proposition 2.6** (de Jong-van Straten [4]).: _Any sandwiched singularity \(X\) is isomorphic to \(X(C,l)\) for some decorated curves \((C,l)\)._ ### Picture deformations From another point of view, the decoration \(l\) can be seen as a subscheme of points on \(\widetilde{C}\). 
Specifically, \(l(i)\) is a subscheme of the branch \(\widetilde{C}_{i}\). Similarly, if we consider \(m(i)\) as a subscheme of \(\widetilde{C}_{i}\), then the condition \(l(i)\geq m(i)\) can be interpreted as \(m\) being a subset of \(l\). **Definition 2.7** (de Jong-van Straten [4, 4.2]).: Let \((\Delta,0)\) be a small open ball. _A one-parameter deformation \((\mathscr{C},\mathscr{L})\) of a decorated curve \((C,l)\) over \(\Delta\)_ consists of 1. A \(\delta\)-constant deformation \(\mathscr{C}\to\Delta\) of \(C\), that is, \(\delta(C_{i,t})\) is constant for all \(t\in\Delta^{*}\). 2. A flat deformation \(\mathscr{L}\subset\widetilde{C}\times\Delta\) of the scheme \(l\) with the condition \(\mathscr{M}\subset\mathscr{L}\) where \(\mathscr{M}=\overline{\bigcup\limits_{t\in\Delta\setminus 0}m(C_{t})}\). Here, \[\delta(C_{i,t})=\sum\limits_{Q}\frac{m(C_{i,t},Q)(m(C_{i,t},Q)-1)}{2}\] where \(Q\) ranges over all the points infinitely near \(0\)(cf.[5, Proposition 3.34]). For convenience, we use the notation \(C_{i}\) instead of \(C_{i,t}\) **Theorem 2.8** (de Jong-van Straten [4, 4.4]).: _For any one-parameter deformation \((\mathscr{C},\mathscr{L})\) of a decorated curve \((C,l)\), there is a flat one parameter deformation \(\mathscr{X}\to\Delta\) with the property that (1) \(X_{0}=X(C,l)\). (2) \(X_{t}=X(C_{t},l_{t})\) for all \(t\in\Delta^{*}\). Moreover, every one parameter deformation of \(X(C,l)\) is obtained in this way._ We can also describe smoothings of a sandwiched surface singularity \(X(C,l)\). **Definition 2.9** (de Jong-van Straten [4, 4.6]).: A one-parameter deformation \((\mathscr{C},\mathscr{L})\) is called a _picture deformation_ if for \(t\in\Delta^{*}\), the divisor \(l_{t}\) on \(\widehat{C}_{t}\) is reduced. The definition means that \(\mathscr{C}\) has only ordinary \(m\)-tuple points. For convenience, the ordinary \(1\)-tuple point is called a free point, a non-singular point in the support of \(\mathscr{L}\). **Example 2.10** (Continued from 3.1).: We consider the following sandwiched structure. After the contraction, we obtain the following decorated curve. We obtain three picture deformations: **Theorem 2.11** (de Jong-van Straten [4, Lemma 4.7]).: _A generic smoothing of \(X(C,l)\) is realized by a picture deformation of \((C,l)\)._ ### Incidence matrices A picture deformation has a combinatorial aspect. **Definition 2.12** (de Jong-van Straten [4, p.483]).: The _incidence matrix_ of a picture deformation \((\mathscr{C},\mathscr{L})\) is the matrix \(I(\mathscr{C},\mathscr{L})\in M_{s,n}(\mathbb{Z})\) where \(I(\mathscr{C},\mathscr{L})_{i,j}\) is the multiplicity of \(C_{i}\) at \(P_{j}\). According to Konrad Mohring([10]), a general fiber \(X(C_{t},l_{t})\) of a smoothing of \(X(C,l)\) is blowing-ups of \(\mathscr{C}\) along the support of \(\mathscr{L}\). Thus, an incidence matrix encodes the intersection relations of \((-1)\)-curves on the Milnor fiber. From the \(\delta\)-constancy of \(\mathscr{C}\) and the flatness of \(\mathscr{L}\), we can formulate the necessary condition of the incidence matrices. 
**Definition 2.13** (de Jong-van Straten [4, 4, 12]).: _A combinatorial incidence matrix of a sandwiched surface singularity \(X(C,l)\) is a matrix \(M=(m_{ij})_{ssr}\) satisfying the following equations._ \[\begin{array}{c}\sum\limits_{j=1}^{r}\frac{m_{ij}(m_{ij}-1)}{2}=\delta(C_{i}) \text{ for all }i\\ \sum\limits_{j=1}^{r}m_{ij}m_{kj}=C_{i}.C_{k}\text{ for all }i\neq k\\ \sum\limits_{j=1}^{r}m_{ij}=l(i)\text{ for all }i\end{array} \tag{2.1}\] Every incidence matrix satisfies Equation 2.1. **Definition 2.14** (Park-Shin[14, Definition 2.19]).: Let \(\mathscr{C}(X)\) be the set of irreducible components of the reduce versal deformation space \(\operatorname{Def}(X)\) and let \(\mathscr{I}(X)\) be the set of all incidence matrices of \(X\) of a given sandwiched structure. The _incidence map_ of \(X\) is a map \[\phi_{I}:\mathscr{C}(X)\rightarrow\mathscr{I}(X)\] where, for each \(S\in\mathscr{C}(X)\), \(\phi_{I}(S)\) is defined by the incidence matrix corresponding to a picture deformation that parametrizes \(S\). ## 3. Deformations of cyclic quotient surface singularities In this section, we briefly review deformation theories of cyclic quotient surface singularities. And then we analyze (combinatorial) incidence matrices of cyclic quotient surface singularities. A cyclic quotient surface singularity \(X\) of type \(\frac{1}{n}(1,q)\) is a quotient surface singularity \(\mathbb{C}^{2}/G\) where \(G=\left(\begin{pmatrix}\zeta&0\\ 0&\zeta^{q}\end{pmatrix}\right)\), \(\zeta\) is a primitive \(n\)-th root of unity and \(1\leq q<n\). It is well known that the minimal resolution of a cyclic quotient surface singularity of type \(\frac{1}{n}(1,q)\) is a chain of \(\mathbb{CP}^{1}\)'s of self-intersection numbers \(-a_{1},\cdots,-a_{r}\) where \(a_{1},\ldots,a_{r}\) are Hirzebruch-Jung continued fraction of \(\frac{n}{q}=a_{1}-\frac{1}{a_{2}-\frac{1}{\ddots-\frac{1}{a_{r}}}}\). We use a dual resolution graph \[\xy(0,0)*{A_{1}};(-a_{1},0)*{A_{r}};(-a_{r},0)*{A_{1}};(-a_{1},0)*{A_{1}};(-a _{1},0)*{A_{2}};(-a_{1},0)*{A_{3}};(-a_{1},0)*{A_{4}};(-a_{1},0)*{A_{5}};(-a_ {1},0)*{A_{6}};(-a_{1},0)*{A_{7}};(-a_{1},0)*{A_{8}};(-a_{1},0)*{A_{9}};(-a_{ 1},0)*{A_{10}};(-a_{1},0)*{A_{11}};(-a_{1},0)*{A_{12}};(-a_{1},0)*{A_{13}};(-a _{1},0)*{A_{14}};(-a_{1},0)*{A_{15}};(-a_{1},0)*{A_{16}};(-a_{1},0)*{A_{17}}; (-a_{1},0)*{A_{18}};(-a_{1},0)*{A_{19}};(-a_{1},0)*{A_{19}};(-a_{1},0)*{A_{1 1}};(-a_{1},0)*{A_{12}};(-a_{1},0)*{A_{13}};(-a_{1},0)*{A_{14}};(-a_{1},0)*{A_{ 15}};(-a_{1},0)*{A_{16}};(-a_{1},0)*{A_{17}};(-a_{1},0)*{A_{18}};(-a_{1},0)*{A_{ 19}};(-a_{1},0)*{A_{19}};( **Proposition 3.3** (K-SB [9, proposition 3.11]).: 1. _The singularities_ \(\overset{\text{\tiny{$\overline{\phantom{\phantom{\phantom{\phantom{\phantom{ \phantom{\phantom{\phantom{\phantom{\phantom{\phantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantom{ {\phantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantom{ {\phantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantomphantom{ {\ That is, the set of all admissible integer sequence of length \(s\) that representing zero as the Hirzebruch-Jung continued fraction. **Proposition 3.10** (Stevens [16, Theorem 4.1]).: _Let \(X\) be a cyclic quotient surface singularity \(\frac{1}{n}(1,a)\) with \((n,a)=1\). 
Let \(n/(n-a)=[b_{1},\cdots,b_{s}]\) be the Hirzebruch-Jung continued fraction. Then \(K_{s}(n/(n-a))=\{\underline{k}\in K_{s}\mid k_{i}\leq b_{i}\}\) parametrizes irreducible components of \(\text{Def}(X)\). Therefore \(\underline{k}\) corresponds to \(P\)-resolutions._ **Example 3.11** (Continued from 3.1).: For the cyclic quotient surface singularity of \(\frac{1}{19}(1,11)\), \(K_{4}(19/19-11)=\{(1,2,2,1),(3,1,2,2),(2,1,3,1)\}\). Moreover, there is a geometric way to parametrize the set \(K_{s}(n/(n-a))\). **Proposition 3.12** (Stevens [16, 6.1]).: _Let \(\mathcal{P}_{s+1}\) be a convex \((s+1)\)-gon such that each vertex is named by \(b_{i}\) consecutively in a counterclockwise direction.(there is one unnamed vertex between the vertex \(b_{1}\) and \(b_{s}\)) Let \(\mathcal{T}(\mathcal{P}_{s+1})\) be the set of triangulations of \(\mathcal{P}_{s+1}\). Then there is a bijective map from \(\mathcal{T}(\mathcal{P}_{s+1})\) to \(K_{s}\) that assigning \(\theta\in\mathcal{T}(\mathcal{P}_{s+1})\) to \((k_{1},\cdots,k_{s})\) where \(k_{i}\) is the number of the triangles in \(\theta\) containing the vertex \(b_{i}\)._ **Example 3.13** (Continued from 3.11).: We have the convex \(5\)-gon whose vertices are named by \(3,2,3,2\) counterclockwise as follows. From these, we obtain five integer sequences \((1,2,2,1)\), \((3,1,2,2)\), \((1,3,1,2)\), \((2,1,3,1)\), \((2,2,1,3)\). Since \(k_{i}<b_{i}\), sequences that we want are \((1,2,2,1)\), \((3,1,2,2)\), \((2,1,3,1)\) and we check that these are the same with Example 3.11. ### Stevens to Incidence matrix From a sequence \(\underline{k}\in K_{r}(n/(n-a))\) and its triangulation \(\theta\), we can construct an incidence matrix and this incidence matrix corresponds to the P-resolution that parametrized by \(\underline{k}\). 
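The sequences and triangulations entering this construction are easy to enumerate by computer. The following small sketch (ours, not code from [16] or [11]; all function names are our own) lists the triangulations of the convex \((s+1)\)-gon of Proposition 3.12, computes the sequence \((k_{1},\ldots,k_{s})\) of each, and keeps those with \(k_{i}\leq b_{i}\). For \(\underline{b}=(3,2,3,2)\), i.e. \(19/8=[3,2,3,2]\), it reproduces the five sequences of Example 3.13 and the three admissible sequences of Example 3.11.

```python
def triangulations(vertices):
    """Yield each triangulation of the convex polygon on the given vertices."""
    if len(vertices) < 3:
        yield []
        return
    v0, vlast = vertices[0], vertices[-1]
    # The edge (v0, vlast) lies in exactly one triangle; choose its apex.
    for i in range(1, len(vertices) - 1):
        apex = vertices[i]
        for left in triangulations(vertices[: i + 1]):
            for right in triangulations(vertices[i:]):
                yield left + [(v0, apex, vlast)] + right

def k_sequences(b):
    """Sequences (k_1,...,k_s): k_i = number of triangles containing vertex b_i."""
    s = len(b)
    verts = ["N"] + [f"b{i}" for i in range(1, s + 1)]   # (s+1)-gon, N unnamed
    return [
        tuple(sum(f"b{i}" in tri for tri in t) for i in range(1, s + 1))
        for t in triangulations(verts)
    ]

if __name__ == "__main__":
    b = (3, 2, 3, 2)   # 19/8 = [3, 2, 3, 2], i.e. the singularity 1/19(1, 11)
    seqs = k_sequences(b)
    admissible = [k for k in seqs if all(ki <= bi for ki, bi in zip(k, b))]
    print(sorted(set(seqs)))        # the five sequences of Example 3.13
    print(sorted(set(admissible)))  # (1,2,2,1), (2,1,3,1), (3,1,2,2)
```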
For the minimal resolution of a cyclic quotient surface singularity, with exceptional chain \(A_{1},\ldots,A_{r}\) of self-intersection numbers \(-a_{1},\ldots,-a_{r}\), a sequence \(\underline{k}\) and its triangulation \(\theta\) determine a matrix \(D(\underline{k})\) together with auxiliary matrices \(M_{s,m}(i)\); we refer to Nemethi-Popescu-Pampu [11] for the precise construction. For a matrix \(M\), we define \(\int M\) as the matrix whose \(i\)-th row is the sum of the 1st through \(i\)-th rows of \(M\). Then we have the following theorem.

**Theorem 3.14** (Nemethi-Popescu-Pampu [11, 7.2]).: _Define a matrix_
\[D(\underline{b};\underline{k})=(D(\underline{k})\ |\ M_{s,b_{1}-k_{1}}(1)\ |\ \cdots\ |\ M_{s,b_{s}-k_{s}}(s)).\]
_Then the matrix \(\int D(\underline{b};\underline{k})\) is the incidence matrix corresponding to the \(P\)-resolution parametrized by \(\underline{k}\)._

**Example 3.15** (Continued from 3.13).

### P-resolutions to Incidence matrices

Park-Shin ([14]) build an explicit algorithm to obtain the incidence matrix from a \(P\)-resolution of a sandwiched surface singularity by using the minimal model program. We summarize Sections 3, 5 and 6 of [14]. Let \(C=\bigcup C_{i}\subset\mathbb{C}^{2}\) be a decorated curve. There is a natural compactification \(D=\bigcup D_{i}\subset\mathbb{CP}^{2}\) of the decorated curve \((C,l)\), where \(D_{i}\) is a projective plane curve and \(D_{i}\cap\mathbb{C}^{2}=C_{i}\). Just as we constructed a sandwiched surface singularity \(X(C,l)\) from \(C=\bigcup C_{i}\) in Proposition 2.6, we can similarly construct a projective singular surface \(Y(D,l)\) from \(D=\bigcup D_{i}\). 
Then we have the following diagram: where \((V,E)\) and \((W,E)\) are minimal resolutions of \((X,p)\) and \((Y,p)\) respectively. Then we have: **Theorem 3.16** (Park-Shin [14, Theorem 3.2]).: _Any deformation of \(X(C,l)\) can be extended to a deformation of \(Y(D,l)\)._ Therefore we work on the compactified decorated curve \((D,l)\) and the singularity \(Y(D,l)\). Consider a one parameter smoothing \(\mathscr{Y}\to\Delta\) of the sandwiched surface singularity \(Y(D,l)\). Assume that there exists an \(M\)-resolution \(Z\to Y\) such that the \(\mathbb{Q}\)-Gorenstein smoothing \(\mathscr{Z}\to\Delta\) blows down to the smoothing \(\mathscr{Y}\to\Delta\). To apply the minimal model program, especially flips and divisorial contraction, we consider the morphism \(\mathscr{Z}\to\mathscr{Y}\) as an extremal neighborhood. **Definition 3.17** (cf.[6, Proposition 2.1], [18, Definition 2.5]).: Let \((Q\in Y)\) be a two-dimensional germ of a cyclic quotient surface singularity, \(f:Z\to Y\) be a partial resolution of \(Q\in Y\) such that \(f^{-1}(Q)=C\) is a smooth rational curve with one(or two) Wahl singularity(ies) of \(Z\) on it. Suppose that \(K_{Z}.C<0\). Let \((Z\subset\mathscr{Z})\to(0\in\Delta)\) be a \(\mathbb{Q}\)-Gorenstein smoothing of \(Z\) over small disk \(\Delta\). Let \((Y\subset\mathscr{Y})\to\Delta\) be the corresponding blow-down deformation of \(Y\). The induced birational morphism \((C\subset\mathscr{Z})\to(Q\in\mathscr{Y})\) is called an _extremal neighborhood of type mk1A(or mk2A)_. It is _flipping_ if the exceptional set is \(C\) and _divisorial_ if the exceptional set is of dimension \(2\). **Proposition 3.18** (Kollar-Mori [8, SS11 and Theorem 13.5]).: _Suppose that \(f:(C\subset\mathscr{Z})\to(Q\in\mathscr{Y})\) is a flipping extremal neighborhood of type mk1A or mk2A. Let \(f_{0}:(C\subset Z)\to(Q\in Y)\) be the contraction of \(C\) between the central fibers \(Z\) and \(Y\). Then there exists an extremal \(P\)-resolution \(f^{+}:(C^{+}\subset Z^{+})\to(Q\in Y)\) such that the flip \((C^{+}\subset\mathscr{Z}^{+})\to(Q\in\mathscr{Y})\) is obtained by the blow-down deformation of a \(\mathbb{Q}\)-Gorenstein smoothing of \(Z^{+}\). That is, we have the commutative diagram_ _which is restricted to the central fibers as follows:_ In this paper, we encounter only one type of flips. Consider an extremal neighborhood \(\mathscr{Z}\supset C\) where a Wahl singularity \([a_{1},\cdots,a_{r}]\) is on \(C\) and \(K_{Z}.C<0\). In the minimal resolution of \(Z\), the curve \(C\) becomes a \((-1)\)-curve. Assume that the \((-1)\)-curve intersects only the exceptional curve \(A_{r}\). Then we have: **Proposition 3.19** (Urzua [18, Proposition 2.15]).: _Assume that \(a_{i}\geq 3\) and \(a_{j}=\cdots=a_{r}=2\) for \(j>i\) for some \(i\). If \(a_{r}\geq 3\), then \(r=i\). Then the image of \(A_{1}\) in the extremal \(P\)-resolution \(Z^{+}\) is the curve \(C^{+}\) and there is a Wahl singularity \([a_{2},\dots,a_{i}-1]\) on \(C^{+}\) if \(i\geq 2\)._ In our situation, a decorated curve \(D_{i}\) intersect the curve \(C\). In general, after the flip, the curve \(D_{i}\) degenerates. **Proposition 3.20** (Urzua [17, Proposition 4.1]).: _Let the image of \(D_{i}\) be \(D_{i}^{+}\) after the flip. 
Then \(D_{i}^{+}=D_{i}^{\prime}+A_{1}\), where \(D_{i}^{\prime}\) is the strict transform of \(D_{i}\)._

**Example 3.21**.: Let \(\mathscr{L}\) be an extremal neighborhood such that a Wahl singularity \([a_{1},\cdots,a_{r}]\) lies on a curve \(C\) and a curve \(D\) intersects \(C\) at a point different from the singularity. After the flip, the image \(C^{+}\) of \(C\) is the curve \(A_{1}\), and the curve \(D\) degenerates to \(D^{\prime}+A_{1}\). We use the following dual resolution graph notation.

A divisorial contraction is just a blow-down of a \((-1)\)-curve in the special and general fibers of \(\mathscr{Z}\to\Delta\).

**Proposition 3.22** (Urzua [18]).: _If an \(mk1A\) or \(mk2A\) is divisorial, then \((Q\in Y)\) is a Wahl singularity. In addition, the divisorial contraction \(F:\mathscr{Z}\to\mathscr{Y}\) induces the blowing-down of a \((-1)\)-curve between the smooth fibers of \(\mathscr{Z}\to\Delta\) and \(\mathscr{Y}\to\Delta\)._

An incidence matrix encodes the intersection relations of \((C_{i,t},l_{t})\). After blowing up at these points, the resulting object is a complement of the Milnor fiber, i.e., the general fiber of \(\mathscr{Z}\to\Delta\). Therefore, if we can locate the \((-1)\)-curves in the general fiber, then we can read off the corresponding incidence matrix. For this, we use the following results.

**Definition 3.23** (Urzua [17, Definition 2.1]).: A _W-surface_ is a normal projective surface \(S\) with a proper deformation \(\mathscr{S}\to\Delta\) such that

1. \(S\) has at most singularities of class \(T_{0}\),
2. \(\mathscr{S}\) is a normal complex \(3\)-fold whose canonical divisor \(K_{\mathscr{S}}\) is \(\mathbb{Q}\)-Cartier,
3. the fiber \(S_{0}\) is reduced and isomorphic to \(S\),
4. the fiber \(S_{t}\) is nonsingular for \(t\neq 0\).

**Proposition 3.24** (Urzua [17, Corollary 3.5]).: _If \(S_{0}\) is birational to \(S_{t}\) for \(t\neq 0\), then the smoothing \(\mathscr{S}\to\Delta\) can be reduced to a deformation \(\mathscr{S}^{\prime}\to\Delta\) whose central fiber \(S_{0}^{\prime}\) is smooth by applying a finite number of divisorial contractions and flips._

Assume that a compactified decorated curve \((D,l)\), its corresponding singularity \(Y(D,l)\) and an \(M\)-resolution \(Z\to Y(D,l)\) are given. The singularity \(Z\) is a \(W\)-surface with its smoothing \(\mathscr{Z}\to\Delta\). Since \(Z_{0}\) and \(Z_{t}\) have a \((+1)\)-curve, they are birational to \(\mathbb{C}^{2}\). Therefore we can apply Proposition 3.24 to the smoothing \(\mathscr{Z}\to\Delta\).

**Proposition 3.25** (Park-Shin [14, Proposition 6.2]).: _By applying divisorial contractions and flips to \((-1)\)-curves on the central fiber \(Z_{0}\) of \(\mathscr{Z}\to\Delta\), one can run the MMP on \(\mathscr{Z}\to\Delta\) until one obtains a deformation \(\mathscr{Z}^{\prime}\to\Delta\) whose central fiber \(Z_{0}^{\prime}\) is smooth._

The steps to make the central fiber smooth are as follows. For a \((-1)\)-curve in \(Z_{0}\):

1. If a Wahl singularity is not on the \((-1)\)-curve, then contract it (divisorial contraction).
2. If a Wahl singularity is on the \((-1)\)-curve, then apply the flip.
3. If a Wahl singularity still remains (in fact, a new Wahl singularity), there must be a new \((-1)\)-curve passing through it. Apply the flip again.
4. We can apply flips until the Wahl singularity disappears.

Flips do not affect the general fiber, but a divisorial contraction is just a blow-down of a \((-1)\)-curve on each fiber. 
Therefore, **Corollary 3.26** (Park-Shin [14, Corollary 6.3]).: _In the previous proposition, a general fiber \(Z_{t}\) of \(\mathscr{Z}\to\Delta\) is obtained by blowing up several times a general fiber \(Z_{t}^{\prime}\) of the smoothing \(\mathscr{Z}^{\prime}\to\Delta\) of \(Z_{0}^{\prime}\)._

Park-Shin prove that a general fiber obtained as in the previous corollary is the same as the general fiber of a generic smoothing of \(X(C,l)\) that comes from a picture deformation \((\mathscr{C},\mathscr{L})\) by blowing up.

**Theorem 3.27** (Park-Shin [14, Theorem 6.4]).: _One can run the semi-stable MMP on \(\mathscr{Z}\to\Delta\) until one obtains the corresponding picture deformation \((\mathscr{D},\mathscr{L})\) of the compactified decorated curve \((D,l)\)._

**Example 3.28** (Continued from 3.5).: We find incidence matrices of the CQSS of \(\frac{1}{19}(1,11)\) under the usual sandwiched structure. The left side is a \(P\)-resolution, and the right side is the general fiber. A divisorial contraction is abbreviated as d.c.

(1) The minimal resolution. We apply only divisorial contractions. It corresponds to the incidence matrix
\[\begin{bmatrix}C_{1}&1&1&1\\ C_{2}&1&1&1\\ C_{3}&1&1&&1&1\\ C_{4}&1&1&&1&1\end{bmatrix}\]

(2) The \(P\)-resolution with the Wahl singularity \([4]\). It corresponds to the incidence matrix
\[\begin{bmatrix}C_{1}&1&1&1&&\\ C_{2}&1&1&&1&\\ C_{3}&1&&1&1&1\\ C_{4}&1&&1&1&1\end{bmatrix}\]

(3) The \(P\)-resolution with two Wahl singularities \([2,5]\) and \([4]\). It corresponds to the incidence matrix
\[\begin{bmatrix}C_{1}&1&1&1&\\ C_{2}&1&1&&1&\\ C_{3}&1&&1&1&1\\ C_{4}&&1&1&1&1\end{bmatrix}\]

Let \([a_{1},\cdots,a_{r}]\) be a cyclic quotient surface singularity with the usual sandwiched structure (cf. Figure 2). For future reference, we present two lemmas that deal with specific situations.

**Lemma 3.29**.: _Assume that \(a_{i}\geq 3\). Then there exists at least one decorated curve \(C_{i}\) connected to the exceptional curve \(A_{i}\) through a \((-1)\)-curve \(E\). If \(C_{i}\) has no free points in the picture deformation, then the curve \(A_{r}\) is an exceptional curve of a Wahl singularity in the corresponding \(P\)-resolution._

Proof.: If the decorated curve \(C_{i}\) does not have a free point, then whenever a \((-1)\)-curve appearing in the process of the flips and divisorial contractions is connected to \(C_{i}\), it must also be connected to another decorated curve. In particular, the first \((-1)\)-curve connects the curve \(C_{i}\) and another curve. In that case, the other decorated curve must degenerate to the exceptional curve \(A_{i}\). Therefore \(A_{i}\) is an exceptional curve of a Wahl singularity.

**Lemma 3.30**.: _If an incidence matrix has a column whose entries are all \(1\), then the curve \(A_{r}\) is not an exceptional curve of a Wahl singularity in the corresponding \(P\)-resolution._

Proof.: Suppose that the curve \(A_{r}\) is an exceptional curve of a Wahl singularity. After applying the flips until the Wahl singularity disappears, we arrive at the following step. We assume that \(A_{p}\) is the initial curve of the Wahl singularity and that the decorated curves \(C_{i}\) and \(C_{j}\) are each connected to \(A_{p}\) through a \((-1)\)-curve. Since \(A_{p}\) is the initial curve, \(a_{p}\) is greater than or equal to \(4\), so we may assume the two decorated curves exist. We follow the decorated curves \(C_{i}\) and \(C_{j}\). In this step, \(C_{i}\) and \(C_{j}\) are not connected. After divisorial contractions, the \((-1)\)-curve does not connect \(C_{i}\) and the decorated curve that degenerates to the curve \(A_{p}\). Note that during the divisorial contractions, no \((-1)\)-curve connects \(C_{i}\) with a decorated curve that degenerates. Therefore, there is no \((-1)\)-curve that connects all decorated curves. In terms of the incidence matrix, this means that there does not exist a column whose entries are all \(1\).

## 4. 
Incidence matrices under the different sandwiched structure In this section, we figure out incidence matrices of cyclic quotient surface singularities under a different sandwiched structure. Let \((X,0)\) be a cyclic quotient surface singularity \(\frac{1}{n}(1,q)\) where \(n/q=[a_{1,n_{1}},\ldots,a_{1,1},d,a_{2,1},\ldots,a_{2,n_{2}}]\) and assume that \(d\geq 4\). Then the dual resolution graph is as shown in Figure 3. We denote exceptional \((-a_{i,j})\)-curves as capital letters \(A_{i,j}\). We say that the curves \(A_{1,j}\) are in the first branch and \(A_{2,j}\) are in the second branch. We call the curve of degree \(-d\) as the central curve \(A_{c}\). We use these notations to broaden our discussion to weighted homogeneous surface singularities. If we attach \((a_{i,n_{i}}-1)\)\((-1)\)-curves on \(A_{i,n_{i}}\) for \(i=1,2\), \((a_{1,j}-2)\)\((-1)\)-curves on \(A_{1,j}\) for \(j<n_{1}\), \((a_{2,j}-2)\)\((-1)\)-curves on \(A_{2,j}\) for \(j<n_{2}\) and \((d-3)\)\((-1)\)-curves on the central curve, the graph(Figure 3) is contracted to the central curve and finally a smooth point. The graph is therefore sandwiched. By attaching a decorated curve on each \((-1)\)-curve, we obtain a sandwiched structure of \(X\). We denote the decorated curves on the first branch as \(C_{1,j}\), second branch as \(C_{2,j}\) and the central curve as \(D_{k}\). The second subscript is ordered inside out. The number of decorated curves on the first branch is the length of the dual Hirzebruch-Jung continued fraction of \([a_{1,1},\ldots,a_{1,n_{1}}]\), denoted as \(m_{1}\). Similarly the number of decorated curves on the second branch is the length of the dual Hirzebruch-Jung continued fraction of \([a_{2,1},\ldots,a_{2,n_{2}}]\), denoted as \(m_{2}\). The number of decorated curves on the central curve is \(d-3\). See Figure 4. We want to classify the configurations of the incidence matrices of cyclic quotient surface singularities by analyzing the MMP-algorithm. We start from two types of P-resolutions(more precisely, M-resolutions). The first type is that the central curve is not an exceptional curve of a Wahl singularity. The second type is that the central curve is an exceptional curve of a Wahl singularity. Let be the Wahl singularity with \(n_{1}^{\prime}<n_{1}\) and \(n_{2}^{\prime}<n_{2}\). If Figure 4. Sandwiched structure of \(X\) Figure 3. dual resolution graph of \(X\) \([a_{1,n^{\prime}_{1}},\ldots,a_{1,1},d,a_{2,1},\cdots,a_{c}]\) is a Wahl singularity, then we say \(n^{\prime}_{2}=0\). We may assume that the initial curve is in the first branch or the central curve. **Definition 4.1** (Incidence matrix of type 1 and 2).: Let \(X\) be a cyclic quotient surface singularity with the sandwiched structure as in Figure 4. We call an incidence matrix of \(X\) is _type 1_ if it is induced from a P-resolution that the central curve of the minimal resolution is not an exceptional curve of a Wahl singularity. Otherwise, we call it _type 2-1_ if \(n^{\prime}_{2}>0\) and _type 2-2_ if \(n^{\prime}_{2}=0\). **Example 4.2** (Continued from 3.1).: Consider a sandwiched structure on the CQSS \(\frac{1}{19}(1,11)\) as Figure 5. Incidence matrices under the sandwiched structure are : \[\begin{bmatrix}C_{1}&1\\ C_{2}&1\\ C_{3}\\ C_{4}\end{bmatrix}\begin{bmatrix}C_{1}&1\\ 1\\ 1&1&1\\ 1&1&1\end{bmatrix}\begin{bmatrix}C_{1}&1&1\\ C_{2}&1&1\\ C_{3}&1&&1&1\\ C_{4}&&1&1&1\end{bmatrix}\begin{bmatrix}C_{1}&1&1&1\\ C_{2}&1&&1\\ C_{3}&1&1&&1\\ C_{4}&&1&1&1\end{bmatrix}\] The first one is of type 1. 
The second and third ones are of type 2-2. We investigate incidence matrices of each type.

First, we assume that the central curve is not an exceptional curve of a Wahl singularity. Then, in the procedure of the MMP algorithm, no decorated curve degenerates to the central curve. Hence the \((-1)\)-curve attached to the decorated curve \(D_{k}\) cannot be connected to other decorated curves. Therefore the \((-1)\)-curve corresponds to a free point \(p_{k}\) on \(D_{k}\). After flips and divisorial contractions, the central curve becomes a \((-1)\)-curve and all decorated curves are attached to this \((-1)\)-curve. This \((-1)\)-curve corresponds to a point \(p_{0}\) that all decorated curves pass through. See Figure 6. Moreover, the \((-1)\)-curves that appear in each branch do not connect two decorated curves in different branches. This means that any two decorated curves in different branches do not intersect except at \(p_{0}\). From the discussion so far, we obtain an incidence matrix as in Figure 7. In Figure 7, blank entries mean \(0\) entries, and \(*\) entries mean that the entries are \(0\) or \(1\), but for each column consisting of \(*\) entries, at least one of the \(*\) entries in the column is \(1\).

**Lemma 4.3** (Incidence matrix of type 1).: _An incidence matrix of type 1 is of the form shown in Figure 7._

Second, we assume that the central curve is an exceptional curve of a Wahl singularity. Then some decorated curves degenerate to the central curve. By the following lemma, we can assume that only decorated curves in the first branch degenerate to the central curve.

Figure 6. Divisorial contractions in the type 1

**Lemma 4.4** (Park-Shin [14, Lemma 5.16]).: _Let the chain \(A_{1,n^{\prime}_{1}},\ldots,A_{1,p},\ldots,A_{1,1},A_{c},A_{2,1},\ldots,A_{2,n^{\prime}_{2}}\) be the dual resolution graph of a Wahl singularity, with \(A_{1,p}\) its initial curve and \(A_{c}\) the central curve. Let \([a_{1,n^{\prime}_{1}},\dots,a_{2,n^{\prime}_{2}}]\) be its Hirzebruch-Jung continued fraction. We consider the sandwiched structure in which \((a_{i,n^{\prime}_{i}}-1)\) \((-1)\)-curves are attached to \(A_{i,n^{\prime}_{i}}\) for \(i=1,2\); \((a_{i,j}-2)\) \((-1)\)-curves are attached to \(A_{i,j}\) for \(i=1,2\) and \(1\leq j\leq n^{\prime}_{i}\); and \((a_{1,p}-3)\) \((-1)\)-curves are attached to \(A_{1,p}\). Let \(\mathfrak{L}=[a_{1,n^{\prime}_{1}},\dots,a_{2,n^{\prime}_{2}}]\) be the extremal neighborhood with the \((-1)\)-curves. Then we can apply the usual flips to \(\mathfrak{L}\) successively, starting from the \((-1)\)-curves intersecting \(A_{1,n^{\prime}_{1}}\) and proceeding to the \((-1)\)-curves intersecting \(A_{1,p}\), until we obtain_

Proof.: The proof is similar to Lemma 5.16 of Park-Shin ([14]). We follow the MMP algorithm on the graph above precisely. 
A \((-1)\)-curve passes through the singularity and a decorated curve \(C_{1,m_{1}}\) intersects the \((-1)\)-curve. We apply the flip to the \((-1)\)-curve. If \(a_{1,n^{\prime}_{1}}>2\), then we obtain a configuration with a degeneration \(C_{1,m_{1}-1}^{+}=C_{1,m_{1}-1}+A_{2,n_{2}^{\prime}-1}\). We continue until we obtain the chain formed by \(A_{1,p}\), the central curve \(A_{c}\) and the second branch.

In the aspect of a combinatorial incidence matrix, we know that there are points realizing the intersection relations \(C_{1,j^{\prime}}.C_{2,j^{\prime\prime}}=1\). We denote these points by \(q_{1},\cdots,q_{g}\). The index \(g\) depends on the \(P\)-resolution. In principle, a decorated curve \(C_{1,j}\) on the first branch and a decorated curve \(C_{2,j^{\prime}}\) on the second branch cannot be connected through a \((-1)\)-curve except the \((-1)\)-curve that comes from the central curve. The other possible case is that \(C_{1,j}\) degenerates to a curve of the second branch. In our case, the decorated curves \(C_{1,m_{1}},\ldots,C_{1,e}\) are of this kind. Let the dual resolution graph of a Wahl singularity with \(n_{1}^{\prime}<n_{1}\) and \(n_{2}^{\prime}<n_{2}\) be as in Lemma 4.4. If \(n_{2}^{\prime}\geq 1\), then we obtain Figure 10 during the MMP algorithm. If the decorated curves \(C_{1,m_{1}},\cdots,C_{1,e_{1}}\) (\(e\leq e_{1}\leq m_{1}\)) degenerate to \(A_{2,n_{2}^{\prime}}\), then the \((-1)\)-curve on the left in Figure 10 connects all of \(C_{1,m_{1}},\cdots,C_{1,e_{1}}\) and a decorated curve in the second branch. After the divisorial contraction, if the decorated curves \(C_{1,e_{1}-1},\cdots,C_{1,e_{2}}\) (\(e\leq e_{2}\leq e_{1}\)) degenerate to \(C_{2,n_{2}^{\prime\prime}}\), then the \((-1)\)-curve on the right connects the decorated curves \(C_{1,e_{1}-1},\cdots,C_{1,e_{2}}\) and the decorated curve in the second branch. This \((-1)\)-curve is not connected to any of the decorated curves \(C_{1,m_{1}},\cdots,C_{1,e_{1}}\). This process continues until the central curve becomes a \((-1)\)-curve. Therefore, if we let \(\mathcal{C}_{j}\) be the set of decorated curves that degenerate to \(A_{2,j}\) but not to \(A_{2,j+1}\), for \(j=1,\cdots,n_{2}^{\prime}\), then \(\{\mathcal{C}_{j}\}_{j=1}^{g^{\prime}}\) is a partition of \(\{C_{1,m_{1}},\cdots,C_{1,e}\}\). By the above observation, there are \(g^{\prime}\) points \(q_{1},\cdots,q_{g^{\prime}}\) such that all decorated curves in \(\mathcal{C}_{j}\) pass through only \(q_{j}\). 
In the aspect of an incidence matrix, there are stair-shaped sub-matrix of the Figure 11 in the columns \(q_{1},\cdots,q_{g}\). If \(n_{2}^{\prime}=0\), after the MMP algorithm on the second branch, we obtain Figure 12. From the \((-1)\)-curve, we know that the decorated curves \(C_{1,m_{1}},\ldots,C_{1,e}\) and \(C_{2,1},\ldots,C_{2,m_{2}}\) pass through a point \(q_{1}\). Therefore the column \(q_{1}\) consists of \(1\) for rows \(C_{1,m_{1}},\ldots,C_{1,e}\) and \(C_{2,1},\ldots,C_{2,m_{2}}\). That is, every \((-1)\)-curve that con Figure 11. Stair-shaped sub-matrix nected to the central curve connects all decorated curves \(C_{1,m_{1}},\cdots,C_{1,e}\). Therefore the entries of the columns \(q_{1},\cdots,q_{g}\) are all 1. **Lemma 4.5** (Incidence matrix of type 2-1 and 2-2).: _Incidence matrices of type 2-1 and 2-2 are of the form shown in Figure 9. In addition, type 2-1 contains a stair-shaped sub-matrix in the columns \(q_{1},\ldots,q_{g^{\prime}}\). For type 2-2, the column \(q_{1}\) consists of \(1\)_ For the type 2-1, we can show that the stair-shaped sub-matrix is unique in the given incidence matrix. We consider the sub-matrix \([D_{1},C_{1}]\)(Figure 13). In Figure 4, if we only contract the \((-1)\)-curves on the first branch and the central curve, then we know that the first branch is contracted and the central curve becomes a \((-d+2)\)-curve. Therefore if we ignore the second branch and if \(d=3\), then only by the \((-1)\)-curves that we mentioned now, the graph will be contracted to a smooth point. See the Figure 14. We consider \([D_{1},C_{1}]\) as an incidence matrix of Figure 14. Let \([2,b_{1,1},\cdots,b_{1,m_{1}}]\) be the dual H-J continued fraction of \([3,a_{1,1},\cdots,a_{1,n_{1}}]\). By Proposition 3.12, there is an integer sequence \(\underline{k}\) and triangulation \(\theta\) that generate the incidence matrix \([D_{1},C_{1}]\). Here, we assign the vertices of the convex \((m_{1}+2)\)-gon to \(d,C_{1,1},\ldots,C_{1,m_{1}},N\) counterclockwise. Since the matrix in Figure 13 contains two columns, \(p_{0}\) and \(p_{1}\), we can deduce the existence of two triangles, namely \(\triangle(d,C_{1,1},C_{1,e})\) and \(\triangle(d,C_{1,e},N)\), as shown in Figure 15. Due to the diagonal \(\overline{C_{1,e},N}\), there exist triangles \(\triangle(N,C_{1,e},C_{1,e_{1}}),\triangle(N,C_{1,e_{1}},C_{1,e_{2}}),\ldots, \triangle(N,C_{1,e_{e}})\) for some \(e<e_{1}<\cdots<e_{o}\leq m_{1}\). Two consecutive triangles \(\triangle(N,C_{1,e_{j}},N,C_{1,e_{j+1}})\) and \(\triangle(N,C_{1,e_{j+1}},N,C_{1,e_{j+2}})\) form a'stair' pattern, as depicted in Figure 16. Therefore we find the stair-shaped sub-matrix in the triangulation method that we observed Figure 14. Resolution graph of \([D_{1},C_{1}]\) Figure 13. \([D_{1},C_{1}]\) in Lemma 4.5. Furthermore, we see that the set of triangles that make the stair-shaped sub-matrix is the only one we found. ## 5. Deformations of weighted homogeneous surface singularities with big central node In this section, we introduce combinatorial incidence matrices, which are denoted as cases \(A\) and \(B\), of a weighted homogeneous surface singularity. Then, we prove that every combinatorial incidence matrix of a weighted homogeneous surface singularity with \(d\geq t+3\) is only one of the cases. And we construct \(P\)-resolutions only from the combinatorial information of the cases. Finally, we show that the constructed \(P\)-resolutions actually induce the given combinatorial incidence matrices. 
### Weighted homogeneous surface singularities In this section, \((X,0)\) is a weighted homogeneous surface singularity. The singularity \((X,0)\) is a two dimensional singularity with a good \(\mathbb{C}^{*}\)-action(Orlik-Wagreich [12]). The dual resolution graph of the singularity \((X,0)\) is star-shaped. That is, there exist a central node of degree \(-d\) and \(t\)-branches. Each branch is the dual resolution Figure 16. Two triangles induce a ’stair’ Figure 15. triangles that induce columns \(p_{0}\) and \(p_{1}\) graph of a cyclic quotient surface singularity. Therefore we assign the singularity \((X,0)\) to \((d,(n_{1},q_{1}),\ldots,(n_{t},q_{t}))\) with \(n_{i}/q_{i}=[a_{i,1},\ldots,a_{i,n_{i}}]\). We assume that the \(a_{i,1}\)-curve is connected to the central curve. Assume that \(d\geq t+1\). If we attach \((a_{i,n_{i}}-1)\) (\(-1\))-curve to \((a_{i,n_{i}})\)-curve, \((a_{i,j}-2)\) (\(-1\))-curve to \(a_{i,j}\)-curve for \(j<n_{i}\) and \((d-t-1)\) (\(-1\))-curve to the central curve, then the graph contracts to a smooth point(Refer Figure 17). Therefore the graph is sandwiched and we obtain a sandwiched structure of the singularity by attaching decorated curves to the (\(-1\))-curves. Decorated curves connected to a curve of \(i\)-th branch through the (\(-1\))-curve is labeled by \(C_{i,j}\). The second sub script is labeled as in cyclic quotient surface singularities. We frequently examine sub-matrices of a (combinatorial) incidence matrix \(M\) that are composed of certain rows representing decorated curves. We indicate the sub-matrix that consists of decorated curves on the \(i\)-th branch as \(M_{i}\). Furthermore, we use the notation \([M_{i},M_{j}]\) for the sub-matrix that comprises decorated curves on both the \(i\)-th and \(j\)-th branches, despite it resembling the parallel sum of two matrices. Let the combinatorial equations of the cyclic quotient surface singularity of \(\frac{1}{n_{i}}(1,q_{i})\) be \(l(C_{i,j})=a\) and \(C_{i,j}.C_{i,j^{\prime}}=b\). Then the combinatorial equations of the sandwiched structure of a weighted homogeneous surface singularity is \[l(C_{i,j})=a+1 \tag{5.1}\] \[l(D_{k})=2\] \[C_{i,j}.C_{i,j^{\prime}}=b+1\] \[C_{i,j}.C_{i^{\prime},j^{\prime}}=1\] \[C_{i,j}.D_{k}=1\] for \(i,i^{\prime}=1,\ldots,m_{i}\), \(j,j^{\prime}=1,\ldots,t\), \(k=1,\ldots,d-t-1\), \(i\neq i^{\prime}\), \(j\neq j^{\prime}\). The difference comes from the central curve. From this observation, we expect that a combinatorial incidence matrix of \((X,0)\) contains an incidence matrix of a cyclic quotient surface singularity. ### Incidence matrices of weighted homogeneous surface singularities In this subsection, we classify the combinatorial incidence matrices of \(X\) based on its sandwiched structure, as illustrated in Figure 17. Let \(M\) be a combinatorial incidence matrix of the singularity \(X\). We define the sub-matrices of \(M\) as \(M_{i}\), which consists of the rows \(C_{i,1},\ldots,C_{i,m_{i}}\) for each \(i=1,\ldots,t\), and \(D\), which consists of \(D_{1},\ldots,D_{d-t-1}\). We give a lemma about the sub-matrices. Figure 17. Sandwiched structure of \(X\) **Lemma 5.1**.: _Let \(M\) be a combinatorial incidence matrix of a singularity \((X,0)\) with the sandwiched structure as Figure 17. Let \([M_{i},M_{j},D]\) be the sub-matrix of \(M\) consisting of \(M_{i}\), \(M_{j}\) and \(D\) for \(i\neq j\). 
Then the sub-matrix \([M_{i},M_{j},D]\) is an incidence matrix of the cyclic quotient surface singularity \([a_{i,n_{j}},\ldots,a_{i,1},d-t+2,a_{j,1},\ldots,a_{j,m_{j}}]\) with the sandwiched structure as in Figure 18._ Proof.: Consider the combinatorial equations of \(C_{i,1},\cdots,C_{i,m_{i}},C_{j,1},\cdots,C_{j,m_{j}},D_{1},\cdots,D_{d-t-1}\) that obtained from the sandwiched structure of Figure 18. It is actually the same with the equations that obtained from Figure 17. Therefore the sub-matrix \([M_{i},M_{j},D]\) satisfies the equations. **Lemma 5.2**.: _The sub matrix \(D\) is a \((d-t-1)\times(d-t)\) matrix_ \[\begin{bmatrix}1&1&&\\ \vdots&&\ddots&\\ 1&&&1\end{bmatrix}\] _where entries of the first column are all \(1\) and the rest is the \((d-t-1)\times(d-t-1)\) identity matrix._ Proof.: Decorated curves \(D_{1},\ldots,D_{d-t-1}\) satisfy the following equations. \[l(D_{k}) =2 \tag{5.2}\] \[D_{k}.D_{l} =1\] for all \(k,l=1,\ldots,d-t-1\) and \(k\neq l\). Since \(d\geq t+3\), we have at least two decorated curves denoted by \(D_{k}\). If \(d-t-1=2\), then we have two decorated curves \(D_{1}\) and \(D_{2}\). There exists only one matrix that satisfies Equation 5.2. \[\begin{bmatrix}&p_{0}&p_{1}&p_{2}\\ \hline D_{1}&1&1&\\ D_{2}&1&&1\end{bmatrix} \tag{5.3}\] If \(d-t-1=3\), we have three decorated curves \(D_{1}\), \(D_{2}\) and \(D_{3}\). We can find matrices satisfying Equations 5.2 by adding the decorated curve \(D_{3}\) to the matrix 5.3 to satisfy Equation 5.2. We have two such matrices. \[\begin{bmatrix}&p_{0}&p_{1}&p_{3}&p_{4}\\ \hline D_{1}&1&1&\\ D_{2}&1&&1\\ D_{3}&1&&&1\end{bmatrix}\ \begin{bmatrix}&p_{0}&p_{1}&p_{2}\\ \hline D_{1}&1&1&\\ D_{2}&1&&1\\ D_{3}&&1&1\end{bmatrix} \tag{5.4}\] We will now show that the right sub-matrix cannot be a valid sub-matrix of \(M\). Consider the decorated curve \(C_{1,1}\) in the first branch. From Equation 5.1, we have the intersection relation \(C_{1,1}.D_{i}=1\) for \(i=1,2,3\). Suppose \(C_{1,1}\) intersects \(D_{1}\) at \(p_{0}\). Then \(C_{1,1}\) must also intersect \(D_{2}\) at the same point. To satisfy the intersection relation, \(C_{1,1}\) must intersect \(D_{3}\) at \(p_{1}\) or \(p_{2}\). However, this causes \(C_{1,1}\) to intersect Figure 18. Sandwiched structure on \([M_{i},M_{j},D]\) \(D_{2}\) one more time than it should, violating the intersection relation. Therefore, this sub-matrix cannot appear in any combinatorial incidence matrix. If \(d-t-1\geq 4\), we have only one choice from the left one of matrices 5.4 \[\begin{bmatrix}&p_{0}&p_{1}&\cdots&p_{d-t-1}\\ \hline D_{1}&1&1&\\ \vdots&\vdots&&\ddots&&\\ D_{d-t-1}&1&&&1\end{bmatrix} \tag{5.5}\] that the lemma claimed. Denote the intersection point of all \(D_{i}\)(the first column of 5.5 ) as \(p_{0}\) and the others as \(p_{1},\cdots,p_{d-t-1}\) in the matrix \(M\). **Theorem 5.3**.: _Every combinatorial incidence matrix of \(X\) can be classified into two cases._ **Case A**_. All entries of the column \(p_{0}\) are \(1\). 
The rest consists of block sub-matrices as follows._ \[\begin{array}{|c|ccccccccc|}&p_{0}&p_{1}&\cdots&p_{d-t-1}&&&&&&&&\\ \hline C_{1,1}&1&&&&*&\cdots&*&&&&\\ \vdots&&&&\vdots&\ddots&\vdots&-M^{\prime}_{1}&&&&\\ C_{1,m_{1}}&1&&&&*&\cdots&*&&&&\\ \hline\vdots&\vdots&&&&&&&&\ddots&\ddots&\ddots&&&&\\ \vdots&\vdots&&&&&&&&\ddots&\ddots&\ddots&&&&\\ \hline C_{t,1}&1&&&&&&&&*&\cdots&*&&&&\\ \vdots&1&&&&&&&&\vdots&\ddots&\vdots&-M^{\prime}_{t}\\ C_{t,m_{t}}&1&&&&&&&&*&\cdots&*&&&&\\ \hline D_{1}&1&1&&&&&&&&&&\\ \vdots&\vdots&&\ddots&&-D^{\prime}&&&&&&&&\\ D_{d-t-1}&1&&&&1&&&&&&&&\\ \end{array} \tag{5.6}\] \(M^{\prime}_{i}\) _means the corresponding block sub-matrices_ **Case B**_. Some entries are \(0\) in the column \(p_{0}\). Rows containing \(0\)-entries in the column \(p_{0}\) come from only one branch. We may assume that the branch is the first branch and the rows are \(C_{1,e},\ldots,C_{1,m_{1}}\) for \(1\leq e\leq m_{1}\). Moreover, there is at most one sub-matrix \(M_{i}\) such that \([M_{1},M_{i},D]\) is type 2-1 in definition 4.1. We may assume that \(i=2\). Then each sub-matrix \([C_{1},C_{j},D]\) is type 2-2 for \(j=3,\ldots,t\)._ _If every sub-matrix \([M_{1},M_{i},D]\) is type 2-2 for all \(i=2,\cdots,t\), then the combinatorial incidence matrix is of the following form._ \[\begin{array}{|l|cccccccccccccccc|}&p_{0}&p_{1}&\cdots&p_{d-t-1}&q_{2,1}& \cdots&q_{2,g_{2}}&q_{3,1}&\cdots&q_{3,g_{2}}&q_{4,1}&\cdots&q_{d,g_{t}}&\cdots\\ \hline C_{1,1}&1&&&&&&&&&&&&&&&&\\ C_{1,e-1}&1&&&&&&&&&&&&&&&&\\ \hline C_{1,e}&1&\cdots&1&1&\cdots&1&1&\cdots&1&1&\cdots&1&\ast&\cdots&\ast\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ C_{1,m_{1}}&1&\cdots&1&1&\cdots&1&1&\cdots&1&\ast&\cdots&\ast\\ \hline C_{2,1}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \vdots&\vdots&&&&\vdots&\vdots&&&&&&&&&&&&\\ C_{2,m_{2}}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \hline C_{3,1}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \vdots&\vdots&&&&\vdots&\vdots&&&&\vdots&\vdots&&&&\\ C_{3,m_{3}}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \hline\vdots&\vdots&&&&&&&&&&&&\ast&\cdots&\ast&&&&\\ \vdots&\vdots&&&&&&&&&&\vdots&\vdots&&\\ \vdots&\vdots&&&&&&&&&&\ast&\cdots&\ast&&&&\\ \hline D_{1}&1&1&0&0&&&&&&&&&&&&\\ D_{d-t-1}&1&0&0&1&&&&&&&&\end{array} \tag{5.7}\] _If the sub-matrix \([M_{1},M_{2},D]\) is type 2-1, then the combinatorial incidence matrix is of the following form._ \[\begin{array}{|l|cccccccccccc|}&p_{0}&p_{1}&\cdots&p_{d-t-1}&q_{2,1}&\cdots&q _{2,g^{\prime}}&q_{3,1}&\cdots&q_{3,g_{3}}&&q_{4,1}&\cdots&q_{4,g_{t}}&\cdots\\ \hline C_{1,1}&1&&&&&&&&&&&&&&&&\\ C_{1,e-1}&1&&&&&&&&&&&&&&&&\\ \hline C_{1,e}&1&\cdots&1&\ast&\cdots&\ast&1&\cdots&1&&1&\cdots&1\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\vdots&\vdots\\ C_{1,m_{1}}&1&\cdots&1&\ast&\cdots&\ast&1&\cdots&1&&1&\cdots&1\\ \hline C_{2,1}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \vdots&\vdots&&&&\vdots&\ddots&\vdots&&&&&&&&\\ C_{2,m_{2}}&1&&&&\ast&\cdots&\ast&&&&&&&&\\ \hline C_{3,1}&1&&&&\ast&\cdots&\ast&\ast&\cdots&\ast&&&&\\ \vdots&\vdots&&&&&&\vdots&\vdots&\vdots&\vdots&\ddots&\ast&&&&\\ C_{3,m_{3}}&1&&&&\ast&\cdots&\ast&\cdots&\ast&&&&\\ \hline\vdots&\vdots&&&&&&&&&&&&\ast&\cdots&\ast\\ \vdots&\vdots&&&&&&&&&&&&\ast&\cdots&\ast\\ \vdots&\vdots&&&&&&&&&&&&\ast&\cdots&\ast\\ \hline D_{1}&1&0&0&&&&&&&&&&&&\\ D_{d-t-1}&1&0&0&1&&&&&&&&\\ \end{array} \tag{5.8}\] _The columns \(q_{2,1},\ldots,q_{2,g^{\prime}}\) are columns of type 2-1 containing a stair-shaped submatrix that we mentioned in Lemma 4.5. 
The column \(q_{i,g_{i}}(i=3,\ldots,t)\) is the column of type 2-2 that we mentioned in the same lemma._ **Lemma 5.4**.: _The block sub-matrix \(M_{i}^{\prime}\) of the matrix 5.6 is an incidence matrix of a cyclic quotient surface singularity \([a_{i,n_{i}},\ldots,a_{i,1}]\)._ Proof.: Similar to the proof of lemma 5.1. proof of theorem 5.3.: **Case A**. Assume that all entries of the column \(p_{0}\) of the matrix \(M\) are 1. Then the intersection relations \(C_{i,j}.C_{i^{\prime},j^{\prime}}=1\) where \(i\neq i^{\prime}\) in Equation 5.1 are satisfied at \(p_{0}\). Therefore there are no additional intersection points between any two decorated curves \(C_{i,j}\) and \(C_{i^{\prime},j^{\prime}}\). In the aspect of combinatorial incidence matrices, there are no columns \(p\) such that \(M(C_{i,j},p)=M(C_{i^{\prime},j^{\prime}},p)=1\) for \(i\neq i^{\prime}\) except the column \(p_{0}\). Therefore by proper column exchanging, we can make the block sub-matrices \(M^{\prime}_{1},\ldots,M^{\prime}_{t}\) as in the matrix 5.6. **Case B**. Assume that some entries of the column \(p_{0}\) are \(0\). Let \(C_{i,j}\) be such rows that \(M(C_{i,j},p_{0})=0\). It is equivalent to that every decorated curve passes through \(p_{0}\) except curves \(C_{i,j}\). To satisfy the intersection relation \(C_{i,j}.D_{k}=1\) for \(k=1,\ldots,d-t-1\), the curves \(C_{i,j}\) must pass through \(p_{1},\ldots,p_{d-t-1}\). It induces the columns \(p_{1},\ldots,p_{d-t-1}\) of the matrices 5.7 and 5.8. To show that the rows containing \(0\)-entries of the column \(p_{0}\) come from only one branch, assume that \(M(C_{1,1},p_{0})=M(C_{2,1},p_{0})=0\). This is equivalent to that \(C_{1,1}\) and \(C_{2,1}\) does not pass through \(p_{0}\). Since \(d\geq t+3\), there are at least two points \(p_{1}\) and \(p_{2}\). To satisfy the intersection relation \(C_{1,1}.D_{k}=1\) and \(C_{2,1}.D_{k}=1\) for \(k=1,2\), the curves \(C_{1.1}\) and \(C_{2,1}\) pass through \(p_{1}\) and \(p_{2}\). Then \(C_{1,1}.C_{2,1}\geq 2\). But \(C_{1,1}.C_{2,1}=1\) by the intersection relation. Therefore decorated curves that do not pass through \(p_{0}\) come from only one branch. We may assume that this branch is the first branch. To show the rest of the lemma, we suppose that two sub-matrices \([M_{1},M_{2},D]\) and \([M_{1},M_{3},D]\) are the type 2-1. By Lemma 5.1, we consider \([M_{1},M_{2},D]\) and \([M_{1},M_{3},D]\) are incidence matrices of cyclic quotient surface singularities. By Lemma 4.5, the stair-shaped sub-matrix exists only one in \(M_{1}\). Therefore some decorated curves in \(M_{2}\) and \(M_{3}\) intersect at some \(q_{j}\). But the intersection relations \(C_{2,j}.C_{3,j^{\prime}}=1\) are already satisfied at the column \(p_{0}\). This contradiction means that there is at most one sub-matrix of type 2-1. **Theorem 5.5**.: _The map \(\phi_{\text{PI}}:\mathscr{P}(X)\to C\mathscr{I}(X)\) from the set of \(P\)-resolutions of \(X\) to the set of combinatorial incidence matrices of \(X\) is surjective._ Proof.: We construct a P-resolution of \(X\) from a given combinatorial incidence matrix \(M\) of \(X\) for each case in theorem 5.3. We then show that the constructed P-resolution induces the given combinatorial incidence matrix by using the MMP algorithm. **Case A** All entries of the column \(p_{0}\) are \(1.\)(Matrix 5.6). By eliminating the column \(p_{0}\) of the matrix \(M\), we obtain block sub-matrices \(M^{\prime}_{1},\cdots,M^{\prime}_{t}\) and \(D^{\prime}=I_{(d-t-1)\times(d-t-1)}\). 
By lemma 5.4, we consider the sub-matrices \(M_{i}\) as incidence matrices of cyclic quotient surface singularities \([a_{i,n_{i}},\ldots,a_{i,1}]\). We can find the P-resolution for each cyclic quotient surface singularity that induces the incidence matrix \(M^{\prime}_{i}\) respectively. This means that we know where the T-singularities are located, that is, which exceptional curves in the minimal resolution are contracted to be a T-singularity. Since each dual resolution graph of the cyclic quotient surface singularity is an branch of the dual graph of \(X\), we can locate T-singularities on each branch. Therefore we obtain a P-resolution of \(X\). We apply the MMP algorithm to the P-resolution that we construct now(refer Figure 19). Note that the MMP algorithm on each P-resolution of \([a_{i,n_{i}},\ldots,a_{i,1}]\) is the same with the MMP algorithm on each branch of the P-resolution of \(X\). Therefore the MMP algorithm induces the same matrices \(M^{\prime}_{i}\). The \((-1)\)-curves that appears on each branch do not connect two decorated curves in different branches. A \((-1)\)-curve connecting them is only the central curve. The column \(p_{0}\) comes from this \((-1)\)-curve. The sub-matrix \(D^{\prime}\) comes from the \((-1)\)-curves attached to the decorated curves \(D_{i}\) respectively. **Case B**\(M(C_{1,e},p_{0})=\cdots=M(C_{1,m_{1}},p_{0})=0\). **Case B**\(-\,\mathbf{1}\)) We assume that the sub-matrix \([M_{1},M_{i},D]\) is type 2-2 for all \(i=2,\cdots,t\)(Matrix 5.7). We define \(t-1+s=\sum\limits_{i=2}^{t}g_{i}\). That is, \(s\) is the sum of \(g_{i}\) such that \(g_{i}\geq 2\). We consecutively blow up the intersection of the central curve and \(A_{i,1}\) in the minimal resolution to make \((g_{i}-2)\)\((-2)\)-curves for \(i=2,\cdots,t\). We will locate T-singularities on \([a_{1,n_{1}},\ldots,a_{1,1},d+s]\) and \([a_{i,n_{i}},\ldots,a_{i,1}+1,2,\ldots,2]\) for \(i=2,\cdots,t\). (1) Consider a sub-matrix \([M_{1},D]\)(Figure 20). We add decorated curves \(E_{1},\ldots,E_{t-1+s}\) Figure 19. MMP on **CaseA** such that combinatorial equations are \[l(E_{i}) =2\] \[E_{i}.E_{i^{\prime}} =1\] \[C_{1,j}.E_{i} =1\] for \(i,i^{\prime}=1,\ldots,t-1+s\)(Figure 21) The combinatorial equations of \(E_{i}\) are the same with those of \(D_{k}\). Therefore we can consider this matrix as an incidence matrix of a cyclic quotient surface singularity \([a_{1,n_{1}},\ldots,a_{1,1},d+s]\)(Ref Lemma 5.1). We can find the \(P\)-resolution of the cyclic quotient surface singularity that induces the matrix \([C_{1},D,E]\). Since every decorated curve \(D_{k}\) and \(E_{l}\) has no free point, we know that the \((-d-s)\)-curve is an exceptional curve of a Wahl singularity by Lemma 3.29. Moreover, the decorated curves \(C_{1,e},\cdots,C_{1,m_{1}}\) degenerate to the \((-d-s)\)-curve after flips because of the columns \(p_{1},\cdots,p_{d-k}\). Let \(M_{1}^{\prime}\) be the matrix obtained from \(M_{1}\) by deleting the columns \(q_{2,1},\ldots,q_{t,g_{t}}\). If we progress the MMP algorithm until the last \(-(d+s)\)-curve becomes \(-(t+s)\)-curve, then we obtain the matrix \([M_{1}^{\prime},D]\)(See Figure 22). In fact, the additional curves \(E_{1},\ldots,E_{t-1+s}\) correspond to the \((t-1)\) branches. (2) Consider the sub-matrix \(M_{i}\) for \(i=2,\cdots,t\). Figure 21. \([M_{1},D,E]\) Figure 20. 
\([M_{1},D]\) \[M_{i}=\begin{bmatrix}&p_{0}&q_{i,1}&\cdots&q_{i,g_{i}}\\ \hline C_{i,1}&1&*&\cdots&*&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&1&*&\cdots&*&*&\cdots&*\end{bmatrix}\] We consider three cases whether \(g_{i}=1\), \(g_{i}=2\) or \(g_{i}\geq 3\). Assume \(g_{i}=1\). Then \[M_{i}=\begin{bmatrix}&p_{0}&q_{i,1}\\ \hline C_{i,1}&1&1&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&1&1&*&\cdots&*\end{bmatrix}\] We delete the column \(p_{0}\) and we denote this matrix as \(M_{i}^{\prime}\). \[M_{i}^{\prime}=\begin{bmatrix}&q_{i,1}\\ \hline C_{i,1}&1&*&\cdots&*\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&1&*&\cdots&*\end{bmatrix}\] This matrix satisfies the combinatorial constraints of the quotient surface singularity \([a_{i,n_{i}},\ldots,a_{i,1}]\). Therefore the matrix is an incidence matrix of the cyclic quotient surface singularity. We can find the corresponding P-resolution. The exceptional curve \(A_{i,1}\) is not an exceptional curve of a Wahl singularity because of the column \(q_{i,1}\)(Lemma 3.30). Assume that \(g_{i}=2\). Figure 22. partial MMP on \([M_{1},M_{2},D,E]\) Define a matrix \(M_{i}^{\prime}\) by deleting the column \(p_{0}\) from \(M_{i}\) and \(M_{i}^{\prime\prime}\) by deleting the columns \(q_{i,1}\) and \(q_{i,2}\) from \(M_{i}^{\prime}\). \[M_{i}^{\prime}=\left[\begin{array}{c|ccccc}&q_{i,1}&q_{i,2}&&&\\ \hline C_{i,1}&*&*&*&\cdots&*\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&*&\cdots&*\\ \end{array}\right],\,M_{i}^{\prime\prime}=\left[\begin{array}{c|ccccc}&&&\\ \hline C_{i,1}&*&\cdots&*\\ \vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&\cdots&*\\ \end{array}\right]\] We add a row \(F\) to \(M_{i}^{\prime}\): \[\left[\begin{array}{c|ccccc}&q_{i,1}&q_{i,2}&&&\\ \hline C_{i,1}&*&*&*&...&*\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&*&\ast&...&*\\ F&1&1&&&\\ \end{array}\right]\] Note that the intersection relation between \(F\) and \(C_{i,1},\cdots,C_{i,m_{i}}\) is the same with the relation between \(C_{1,e}\) and \(C_{i,1},\cdots,C_{i,m_{i}}\). That is, \(F.M_{i,j}=1\) for \(j=1,\cdots,m_{i}\) and \(l(F)=2\). Therefore this matrix satisfies the combinatorial equations of the following sandwiched structure. Since the curve \(F\) has no free point, the curve \(A_{i,1}\) is an exceptional curve of a Wahl singularity(Lemma 3.29). Note that if we continue the MMP-algorithm until the \((-a_{i,1}-1)\)-curve becomes a \((-2)\)-curve, then we obtain the matrix \(M_{i}^{\prime\prime}\). Assume that \(g_{i}\geq 3\). Define a matrix \(M_{i}^{\prime}\) by deleting the column \(p_{0}\) and \(M_{i}^{\prime\prime}\) by deleting the columns \(q_{i,1},\cdots,q_{i,g_{i}}\). \[M_{i}^{\prime}=\left[\begin{array}{c|ccccc}&q_{i,1}&\cdots&q_{i,g_{i}}&&&\\ \hline C_{i,1}&*&\cdots&*&*&\cdots&*\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&\cdots&*&\cdots&*\\ \end{array}\right],\,M_{i}^{\prime\prime}=\left[\begin{array}{c|ccccc}&&&\\ \hline C_{i,1}&*&\cdots&*\\ \vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&\cdots&*\\ \end{array}\right]\] This matrix satisfies the combinatorial equations of the quotient surface singularity \([a_{i,n_{i}},\ldots,a_{i,1}]\). 
Then we add a row \(F\) to \(M_{i}^{\prime}\): \[\left[\begin{array}{c|ccccc}&q_{i,1}&\cdots&q_{i,g_{i}}&&&\\ \hline C_{i,1}&*&\cdots&*&*&\cdots&*\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ C_{i,m_{i}}&*&\cdots&*&*&\cdots&*\\ \end{array}\right]\] Note that the intersection relation between \(F\) and \(C_{i,1},\cdots,C_{i,m_{i}}\) is the same with the relation between \(C_{1,e}\) and \(C_{i,1},\cdots,C_{i,m_{i}}\). That is, \(F.C_{i,j}=1\) for \(j=1,\cdots,m_{i}\) and \(l(F)=g_{i}\). Therefore this matrix satisfies the combinatorial equations of the following sandwiched structure where \((g_{i}-2)\) (\(-2\))-curves are added. The matrix is an incidence matrix of the cyclic quotient surface singularity with the sandwiched structure. Therefore we can find the corresponding P-resolution. Since the decorated curve \(F\) has no free points, the last \((-2)\)-curve is an exceptional curve of a Wahl singularity(Lemma 3.29). Note that if we progress the MMP-algorithm until the \((-a_{i,1}-1)\)-curve becomes \((-2)\)-curve, then we obtain the matrix \(M_{i}^{\prime\prime}\). We apply flips and divisorial contractions to the \(P\)-resolution of \(X\). We progress until the \((-d-s)\)-curve becomes \((-t-s)\)-curve, the \(-(a_{i,1})\)-curve becomes a \((-1)\)-curve if \(g_{i}=1\), \(-(a_{i,1})\)-curve becomes a \((-2)\)-curve if \(g_{i}\geq 1\), then we arrive at the followings. As we have noted at each step, we obtain the sub-matrices \([M_{1}^{\prime},D],M_{2}^{\prime\prime},\cdot\cdot,M_{t}^{\prime\prime}\). If \(g_{i}=1\), then all decorated curves \(C_{i,1},\cdot\cdot\cdot,C_{i,m_{i}}\) are connected to the central curve through the \((-1)\)-curve. Since the decorated curves \(C_{1,e},\ldots,C_{1,m_{1}}\) degenerate to the central curve, the \((-1)\)-curve connects the decorated curves \(C_{1,e},\cdot\cdot\cdot,C_{1,m_{1}}\) and \(C_{i,1},\cdot\cdot\cdot,C_{i,m_{i}}\). The \((-1)\)-curve corresponds to the column \(q_{i,1}\). If \(g_{i}\geq 2\), then we can consider that the additional curve \(F\) is replaced by the degenerated curves \(C_{1,e},\cdot\cdot\cdot,C_{1,m_{1}}\). This means that in the combinatorial incidence matrix, \(F\) is replaced by \(C_{1,e},\cdot\cdot\cdot,C_{1,m_{1}}\) and we obtain the columns \(q_{i,1},\cdot\cdot\cdot,q_{i,g_{i}}\). **Case \(\mathbf{B-2}\))** We assume that \([M_{1},M_{2},D]\) is the type 2-1 and \([M_{1},M_{i},D]\) is type 2-2 for \(i=3,\cdot\cdot,t\)(Matrix 5.8). We define \(t-2+s=\sum\limits_{i=3}^{t}g_{i}\). That is, \(s\) is the sum of \(g_{i}\) such that \(g_{i}\geq 2\). We consecutively blow up the intersection of the central curve and \(A_{i,1}\) in the minimal resolution of \((X,0)\) to make \((g_{i}-2)\)\((-2)\)-curves for \(i=3,\cdot\cdot,t\). We will locate T-singularities on \([a_{1,n_{1}},\ldots,a_{1,1},-d-s,a_{2,1},\ldots,a_{2,n_{2}}]\) and \([a_{i,n_{i}},\ldots,a_{i,1}+1,2,\ldots,2]\) for \(i=3,\cdot\cdot,t\). (1) Consider a matrix \([M_{1},M_{2},D]\)(Figure 23). We add decorated curves \(E_{1},\ldots,E_{t-2\ast s}\) such that combinatorial equations are \[l(E_{i})=2\] \[E_{i}.E_{i^{\prime}}=1\] \[C_{1,j}.E_{i}=1\] \[C_{2,j^{\prime}}.E_{i}=1\] for \(i,i^{\prime}=1,\ldots,t-2+s\). The combinatorial equations of \(E_{i}\) are the same with those of \(D_{k}\). Therefore we can consider this matrix as an incidence matrix of a cyclic quotient surface singularity \([a_{1,n_{1}},\ldots,a_{1,1},d+s,a_{2,1},\ldots,a_{2,n_{2}}]\)(Ref Lemma 5.1). 
We can find the \(P\)-resolution of the cyclic quotient surface singularity that induces the matrix \([M_{1},M_{2},D,E]\). Figure 24. \([M_{1},M_{2},D]\) Figure 23. \([M_{1},M_{2},D]\) Let \(M_{1}^{\prime}\) be a matrix obtained from \(M_{1}\) by deleting the columns \(q_{3,1},\cdots,q_{t,g_{t}}\). If we progress the MMP-algorithm until the central curve becomes \((t-2+s-1)\)-curve, then we obtain the matrix \([M_{1}^{\prime},M_{2},D]\)(See Figure 25). (2) For the sub-matrices \(M_{i}\) for \(i=3,\cdots,t\), we obtain \(P\)-resolutions as in the **Case \(\mathbf{B-1}\)**. (3) We found \(P\)-resolutions of \([a_{1,n_{1}},\ldots,a_{1,1},d+s,a_{2,1},\ldots,a_{2,n_{2}}]\) and \([a_{i,n_{i}},\ldots,a_{i,1}]\), we can locate \(T\)-singularities on the minimal resolution of \((X,0)\). (4) We run the MMP algorithm on the constructed \(P\)-resolution until the \((-d-s)\)-curve becomes a \((-t-s+1)\)-curve, the \(-a_{i,1}\)-curve becomes a \((-1)\)-curve when \(g_{i}=2\) and the \(-(a_{i,1}+1)\)-curve becomes a \((-2)\)-curve when \(g_{i}\geq 2\), then we obtain the sub-matrices \([M_{1}^{\prime},M_{2},D],M_{3}^{\prime\prime},\ldots,M_{t}^{\prime\prime}\). (4) We run the MMP algorithm on the constructed \(P\)-resolution until the \((-d-s)\)-curve becomes a \((-t-s+1)\)-curve, the \(-a_{i,1}\)-curve becomes a \((-1)\)-curve when \(g_{i}=2\) and the \(-(a_{i,1}+1)\)-curve becomes a \((-2)\)-curve when \(g_{i}\geq 2\), then we obtain the sub-matrices \([M_{1}^{\prime},M_{2},D],M_{3}^{\prime\prime},\ldots,M_{t}^{\prime\prime}\). Figure 25. partial MMP on \([M_{1},M_{2},D]\) Similar to the **Case B - 1**, we can check that the \(P\)-resolution induces the given combinatorial incidence matrix. In general, it is not guaranteed that the ampleness still holds after blow-ups. Therefore, in case B, we have to check the ampleness on the \((-1)\)-curves near the central curve. The ampleness is equivalent to that the sum of the discrepancies of two curves connected through a \((-1)\)-curve being equal to or less than \(-1\). Simplifying cases, we have (1) One blow up where \(a\geq 5\) and \(d\geq 3\). (2)Blow up \(n\) times where \(a\geq 4+n\). Let the cases above be case \(1,2,3\) and \(4\). We need some upper bounds for discrepancies of Wahl singularities. The following can be found in the appendix of [2]. We use the description of discrepancies of Urzua-Vilches([19]). Let \(Y=\frac{1}{n^{2}}(1,na-1)\) be a Wahl singularity and \(f:\widehat{Y}\to Y\) be the minimal resolution of \(Y\). Then the canonical divisor \(K_{\nabla}\) of \(\widehat{Y}\) is represented as \(K_{\widehat{Y}}=f^{*}K_{Y}+\sum m_{i}E_{i}\) for exceptional curves \(E_{i}\) of \(f\). The \(m_{i}\) is called the discrepancy of \(E_{i}\). It is well known that \(-1<m_{i}<0\) because \(Y\) is a terminal singularity. Let \([a_{1},\cdots,a_{r}]\) be the Hirzebruch-Jung continued fraction of a Wahl singularity(Wahl continued fraction, for short) and \(m_{i}\) be the discrepancy corresponding to \(a_{i}\). We define an integer sequence \(\delta_{1},\cdots,\delta_{r}\) in the following inductive way. For \(r=1\), that is, for [4], we assign an integer \(\delta_{1}=1\) to [4]. 
If an integer sequence \(\delta_{1},\cdots,\delta_{r}\) is assigned to a Wahl singularity \([a_{1},\cdots,a_{r}]\), then we assign \[\delta_{1},\cdots,\delta_{r},\delta_{1}+\delta_{r}\text{ to }[a_{1}+1,a_{2}, \cdots,a_{r},2],\] \[\delta_{1}+\delta_{r},\delta_{1},\cdots,\delta_{r}\text{ to }[2,a_{1},\cdots,a_{ r-1},a_{r}+1].\] Then the discrepancy \(m_{i}\) is \(\left(-1+\frac{\delta_{i}}{\delta_{1}+\delta_{r}}\right)\). **Lemma 5.6** (Urzua-Vilches [19, Lemma 4.4]).: _Let \([a_{1},\cdots,a_{t}]\) be a Wahl singularity, assume \(t\geq 2\) and \(a_{t}=2\), and let us denote its discrepancies by \(m_{1},\cdots,m_{t}\). Then we have the following bounds: (Type M) If \(a_{2}=a_{3}=\cdots=a_{t}\), then \(m_{1}=-1+1/(a_{1}-2)\) and \(m_{t}=-1/(a_{1}-2)\). (Type B) Otherwise, \(m_{1}=-1+\mu\) and \(m_{t}=-\mu\), where \(1/a_{1}<\mu<1/(a_{1}-1)\)._ **Lemma 5.7**.: _Let \([a_{1},\cdots,a_{r}]\) be a Wahl continued fraction with \(a_{1}\geq 3\). Then the discrepancy of \(a_{1}\) is less than \(\frac{2-a_{1}}{a_{1}-1}\)._ Proof.: The last number \(a_{r}\) must be \(2\) because of the inductive construction of Wahl singularities. Then it is direct from Lemma 5.6. In the case 1, let the discrepancies for the \((-a)\)-curve be \(m_{a}\) and the discrepancies for the \((-b)\)-curve be \(m_{b}\). Then \(m_{a}+m_{b}<\frac{2-a}{a_{1}+1}+\frac{2-b}{b-1}<-\frac{1}{2}-\frac{1}{2}=-1\). **Lemma 5.8**.: _Let \([a_{1},\cdots,a_{r}]\) be a Wahl continued fraction with \(a_{1}=a_{2}=\cdots=a_{n}=2\) and \(a_{n+1}\geq 3\) for \(1\leq n<r\). Then the discrepancy of \(a_{1}\) is less than \(-1/(n+2)\)._ Proof.: Consider the inverse of \([a_{1},\cdots,a_{r}]:[a_{r},\cdots,a_{1}]\). Then \(a_{r}\) must be \(n+2\) because of the inductive construction of Wahl singularities. Then it follows directly from Lemma 5.6. In the case \(3\), let the discrepancies for the \((-a)\)-curve be \(m_{a}\) and the discrepancies for the \((-b)\)-curve be \(m_{b}\). Then \(m_{a}+m_{b}<\frac{2-a}{a-1}-\frac{1}{n+1}<\frac{-n-2}{n+3}-\frac{1}{n+1}< \frac{-n^{2}-4n-5}{n^{2}+4n+3}=-1-\frac{2}{n^{2}+4n+3}<-1\). **Lemma 5.9**.: _Let \([a_{1},\cdots,a_{t},\cdots,a_{r}]\) be a Wahl continued fraction with \(a_{t}\geq 5\). Then the discrepancy \(m_{t}\) of \(a_{t}\) is less than or equal to \((-a_{t}+1)/a_{t}\)._ Proof.: Let \(Y\) be the Wahl singularity corresponding to the given fraction. Then \(K_{\overline{Y}}=f^{*}K_{Y}+\sum\limits_{i=1}^{r}m_{i}E_{i}\) where \(E_{i}^{2}=-a_{i}\). By multiplying \(E_{t}\), we obtain \(-2+a_{t}=m_{t-1}+m_{t+1}-m_{t}a_{t}\). Therefore, \(m_{t}=(2-a_{t}+m_{t-1}+m_{t+1})/a_{t}\). If we show that \(m_{t-1}+m_{t+1}\leq-1\), then we conclude that \(m_{t}\leq(-a_{t}+1)/a_{t}\). We consider two cases. First, assume that \(E_{t}\) is the initial curve of \(Y\). Note that \([a_{1},\cdots,a_{t},\cdots,a_{r}]\) must be constructed from \([3,5,2]\) or \([2,5,3]\). Without loss of generality, assume that it is constructed from \([3,5,2]\). Then the \(\delta\) sequence assigned to \([3,5,2]\) is \((2,1,3)\). If the sequence \((\delta_{1},\cdots,\delta_{r})\) is assigned to \([a_{1},\cdots,a_{t},\cdots,a_{r}]\), then \(\delta_{t-1}=2\) and \(\delta_{t+1}=3\). Note also that \(\delta_{1}+\delta_{r}\geq 2+3=5\). From the \(\delta\) sequence, we obtain a bound \(m_{t-1}+m_{t+1}=\left(-1+\frac{\delta_{t-1}}{\delta_{1}+\delta_{r}}\right)+ \left(-1+\frac{\delta_{t+1}}{\delta_{1}+\delta_{r}}\right)=\left(-2+\frac{ \delta_{t+1}+\delta_{t+1}}{\delta_{1}+\delta_{r}}\right)\leq-2+\frac{5}{5}=-1\). 
Second, assume that \(E_{t}\) is not the initial curve and that the initial curve is left side of \(E_{t}\). We track the inductive process to obtain \([a_{1},\cdots,a_{r}]\). Starting from \[[4]\leftrightarrow(1),\] we obtain \[[a_{s}-1,\cdots,a_{t-1}]\leftrightarrow(\delta_{s},\cdots,\delta_{t-1}).\] By adding a \(2\) to the right side, we obtain \[[a_{s},\cdots,a_{t-1},2]\leftrightarrow(\delta_{s},\cdots,\delta_{t-1},\delta _{s}+\delta_{t-1}).\] To make the number \(2\) to be \(a_{t}\), we add \((a_{t}-2)\)\(2\) to the left and we get \[[2,\cdots,2,\cdots,a_{t}]\leftrightarrow((a_{t}-1)\delta_{s}+(a_{t}-2)\delta _{t-1},\cdots,2\delta_{u}+\delta_{t-1},\delta_{s},\cdots,\delta_{t-1},\delta _{s}+\delta_{t-1}).\] To fix the number \(a_{t}\), we must add a \(2\) to the right and we get \[[3,\cdots,2,\cdots,a_{t},2]\leftrightarrow((a_{t}-1)\delta_{s}+(a_{t}-2) \delta_{t-1},\cdots,2\delta_{u}+\delta_{t-1},\delta_{s},\cdots,\delta_{t-1}, \delta_{s}+\delta_{t-1},a_{t}\delta_{s}+(a_{t}-1)\delta_{t-1}).\] Finally, we obtain \[[a_{1},\cdots,a_{r}]\leftrightarrow(\delta_{1},\cdots,\delta_{r}).\] Therefore we have \[m_{t-1}+m_{t+1} =\left(-1+\frac{\delta_{t-1}}{\delta_{1}+\delta_{r}}\right)+ \left(-1+\frac{a_{t}\delta_{s}+(a_{t}-1)\delta_{t-1}}{\delta_{1}+\delta_{r}}\right)\] \[=\left(-2+\frac{a_{t}\delta_{s}+a_{t}\delta_{t-1}}{\delta_{1}+ \delta_{t}}\right)\] \[<\left(-2+\frac{a_{t}\delta_{s}+a_{t}\delta_{t-1}}{(a_{t}-1) \delta_{s}+(a_{t}-2)\delta_{t-1}+a_{t}\delta_{s}+(a_{t}-1)\delta_{t-1})}\right)\] \[=\left(-2+\frac{a_{t}\delta_{s}+a_{t}\delta_{t-1}}{(2a_{t}-1) \delta_{s}+(2a_{t}-3)\delta_{t-1}}\right)\] \[<-1\] In the cases 2 and 4, let the discrepancies for the \((-a)\)-curve be \(m_{a}\), the discrepancies for the \((-d)\)-curve be \(m_{d}\) and the discrepancies for the \((-2)\)-curve be \(m_{2}\). Then \(m_{a}+m_{d}<\frac{1-a_{a}}{a_{t}}+\frac{2-d}{d-1}<-\frac{4}{5}-\frac{1}{2}<-1\). And \(m_{a}+m_{2}<\frac{1-a_{t}}{a_{t}}-\frac{1}{n+1}<-\frac{3-n}{4+n}-\frac{1}{n+1}= \frac{-n^{2}-5n-7}{n^{2}+5n+4}=-1-\frac{3}{n^{2}+5n+4}<-1\). The ampleness of each case is confirmed. **Remark 5.10**.: In the definition of case B, the condition 'decorated curves that degenerate to the central curve come from only one branch'is essential for finding the corresponding \(P\)-resolutions. Even if \(d=t+2\), if the condition is still satisfied, then we can construct \(P\)-resolutions in similar way. But there exist combinatorial incidence matrices that the condition is not satisfied. If \(d=t+2\), then there are incidence matrices that do not correspond to P-resolutions. For example, for a weighted homogeneous surface singularity of type \((6,(2,1),(2,1),(2,1),(2,1))\), we have the following combinatorial incidence matrix. \[\begin{bmatrix}C_{1}&1&1&1&&\\ C_{2}&1&&&1&1\\ C_{3}&&1&1&&1\\ C_{4}&&1&&1&&\\ D_{1}&1&1&&&1\end{bmatrix}\] We expect that it is a non-cyclic normal singularity admitting a \(\mathbb{Q}\)-Gorenstein smoothing. ### An example We consider a weighted homogeneous surface singularity of type \((6,(3,5),(9,13),(7,10))\). Then its dual resolution graph is Figure 23. 
**Case A)** We have a following combinatorial incidence matrix of **Case A** \[\begin{bmatrix}&p_{0}&&&&&&&&\\ \hline C_{1,1}&1&1&1&0&&&&\\ C_{1,2}&1&1&1&0&1&&&&\\ \hline C_{2,1}&1&&&&1&1&1&0&&&&\\ C_{2,2}&1&&&&1&1&0&1&&&&\\ C_{2,3}&1&&&&1&0&1&1&&&&\\ C_{2,4}&1&&&&1&0&1&1&&&&\\ \hline C_{3,1}&1&&&&&&&&1&1&1&0&\\ C_{3,2}&1&&&&&&&&1&1&0&1&\\ C_{3,3}&1&&&&&&&&1&0&1&1&\\ \hline D_{1}&1&&&&&&&&&&&&1&0\\ D_{2}&1&&&&&&&&&&&&0&1\end{bmatrix}\] Figure 26. Dual resolution graph of a WHSS of type \((6,(3,5),(9,13),(7,10))\) Then we obtain three sub-matrices \[\begin{bmatrix}C_{1,1}&1&1&1&0\\ C_{1,2}&1&1&0&1\end{bmatrix}\begin{bmatrix}C_{2,1}&1&1&1&1&0\\ C_{2,2}&1&1&1&0&1\\ C_{2,3}&1&1&0&1&1\\ C_{2,4}&1&0&1&1&1\end{bmatrix}\begin{bmatrix}C_{3,1}&1&1&1&1&0\\ C_{3,2}&1&1&1&0&1\\ C_{3,3}&1&1&0&1&1\end{bmatrix}\] We can find corresponding \(P\)-resolutions. From these \(P\)-resolutions, we get a \(P\)-resolution of \(X\). **Case B)** We have an incidence matrix of case B. \[\begin{bmatrix}C_{1,1}&1&1&1&0\\ C_{1,2}&1&1&1&0&1\\ C_{2,1}&1&&&1&1&1&0\\ C_{2,2}&1&&&1&&1&0&1\\ C_{2,3}&1&&&1&&&0&1&1\\ C_{2,4}&1&&&1&&&0&1&1\\ C_{3,1}&1&&&1&1&&&1&0\\ C_{3,2}&1&&&1&1&&&1&0&1\\ C_{3,3}&1&&&1&1&&&0&1&1\\ D_{1}&1&1&0&&&&&&\\ D_{2}&1&0&1&&&&&&\\ \end{bmatrix}\] Then \([M_{1},M_{3},D]\) is of type 2-1 and \(g^{\prime}=2\). And \([M_{1},M_{2},D]\) is of type 2-2 and \(g_{2}=1\). Therefore we obtain two sub-matrices. \[\begin{bmatrix}C_{1,1}&1&1&1&0\\ C_{1,2}&1&1&1&0&1\\ C_{3,1}&1&&&1&1&1&0\\ C_{3,2}&1&&&1&1&0&1\\ C_{3,3}&1&&&1&1&0&1\\ D_{1}&1&1&0&0&&&\\ D_{2}&1&0&1&0&&&\\ F_{1}&1&0&0&1&&&\\ \end{bmatrix}\begin{bmatrix}C_{2,1}&1&1&1&1&0\\ C_{2,2}&1&1&1&0&1\\ C_{2,3}&1&1&0&1&1\\ C_{2,4}&1&0&1&1&1\end{bmatrix}\] From these incidence matrices, we obtain corresponding \(P\)-resolutions. By combining them, we get a \(P\)-resolution of \(X\).
2303.01905
Classification of five-dimensional symmetric Leibniz algebras
In this paper we give the complete classification of $5$-dimensional complex solvable symmetric Leibniz algebras.
Iroda Choriyeva, Abror Khudoyberdiyev
2023-03-03T13:05:43Z
http://arxiv.org/abs/2303.01905v1
# Classification of five-dimensional symmetric Leibniz algebras ###### Abstract In this paper we give the complete classification of 5-dimensional complex solvable symmetric Leibniz algebras. **Key words**: symmetric Leibniz algebra, solvable algebra, automorphism.. **Mathematics Subject Classification**: 17A30, 17A32 ## Introduction A left (resp. right) Leibniz algebra is a nonassociative algebra where the left (resp. right) multiplications are derivations. Note that left (right) Leibniz algebras introduced by Bloh in 1965 under the name of D-algebras [10]. In 1993 the Leibniz algebras were rediscovered by Jean-Louis Loday [24], as a generalization of Lie algebras with no symmetry requirements. If a nonassociative algebra is both a left Leibniz algebra and a right Leibniz algebra, it is called a symmetric Leibniz algebra [25]. These latter algebras had been considered in [9], appearing in the study of some bi-invariant connections on Lie groups. In recent years, the theory of Leibniz algebras has been intensively studied. During the last 30 years the theory of Leibniz algebras has been actively studied and many results on Lie algebras have been extended to Leibniz algebras (see for example [5, 6, 16, 17, 23, 28, 29]). Recently, S. Benayadi and S. Hidri in [9] investigated the structure of left (resp. right) Leibniz algebras endowed with invariant, non-degenerate and symmetric bilinear forms, which are called quadratic left (resp. right) Leibniz algebras. In particular, they prove that a quadratic left (or right) Leibniz algebra is a symmetric Leibniz algebra. At the same time, the variety of symmetric Leibniz algebras plays an important role in one-sided Leibniz algebras (more, about symmetric Leibniz algebras see, [7, 8, 9, 20, 21, 26]). Symmetric Leibniz algebras are related to Lie racks [1] and every symmetric Leibniz algebra is flexible, power-associative and nil-algebra with nilindex 3 [15]. A symmetric Leibniz algebra under commutator and anticommutator multiplications gives a Poisson algebra [3]. The classification, up to isomorphism, of any class of algebras is a fundamental and very difficult problem. It is one of the first problem that one encounters when trying to understand the structure of a member of this class of algebras. There are many results related to the algebraic classification of small-dimensional algebras in the varieties of Lie, Leibniz, Jordan, Zinbiel and many other algebras. In particular, 5-dimensional nilpotent, restricted Lie algebras [14], 6-dimensional nilpotent Lie algebras [13, 18], \(6\)-dimensional solvable Lie algebras [30], \(4\)-dimensional solvable Leibniz algebras [11], \(5\)-dimensional solvable Leibniz algebras with three dimensional nilradicals [22], and some others have been described. The list of all real and complex Lie algebras up to dimension six can be found from the L. Snobl and P. Winternitz's monograph [27]. The purpose of the present work is to continue the study of symmetric Leibniz algebras. Since the algebraic classification of all \(5\)-dimensional nilpotent symmetric Leibniz algebras is given in [4], we reduce our attention to the classification of \(5\)-dimensional solvable symmetric Leibniz algebras. In [7], a useful characterization of symmetric Leibniz algebras is given. Using these characterization in [1] it was given natural method for classification symmetric Leibniz algebras. It should be noted that in this method plays an important role the center of the underlying Lie algebra. 
Using this method we give the description of five-dimensional solvable symmetric Leibniz algebras. For this purpose we consider all five-dimensional solvable Lie algebras with non-zero center (even split algebras) and obtain the complete list of five-dimensional solvable symmetric Leibniz algebras. Our main result related to the algebraic classification of variety of solvable symmetric Leibniz algebras are summarized below. **Main Theorem**.: _Up to isomorphism, there are infinitely many isomorphism classes of complex \(5\)-dimensional solvable (non-split, non-nilpotent, non Lie) symmetric Leibniz algebras, described explicitly in Section 2 (see Theorems 2.1, 2.2, 2.3 and 2.4) in terms of \(24\) one-parameter families, \(6\) two-parameter families, \(1\) three-parameter family and \(37\) additional isomorphism classes._ **Remark**.: _Since there is no non-solvable, non-split, non Lie \(5\)-dimensional symmetric Leibniz algebras, previous Main Theorem give us the complete classification of \(5\)-dimensional complex symmetric Leibniz algebras._ Throughout the paper all the algebras (vector spaces) considered are finite-dimensional and over the field of complex numbers. Also in tables of multiplications of algebras we give nontrivial products only. ## 1 Preliminaries In this section we give necessary definitions and preliminary results. **Definition 1.1**.: _An algebra \((\mathcal{L},[-,-])\) over a field \(F\) is called Lie algebra if for any \(x,y,z\in\mathcal{L}\) the following identities:_ \[[x,y]=-[y,x],\quad[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0\] _hold._ **Definition 1.2**.: _An algebra \((\mathcal{L},\cdot)\) is said to be a symmetric Leibniz algebra, if for any \(x,y,z\in\mathcal{L}\) it satisfies the following identities:_ \[x\cdot(y\cdot z)=(x\cdot y)\cdot z+y\cdot(x\cdot z),\quad(x\cdot y)\cdot z=x \cdot(y\cdot z)+(x\cdot z)\cdot y.\] Any Lie algebra is a symmetric Leibniz algebra. However, the class of symmetric Leibniz algebras is far more bigger than the class of Lie algebras. Let \(\mathcal{L}\) be a vector space equipped with a billinear map \(\cdot:\mathcal{L}\times\mathcal{L}\to\mathcal{L}\). For all \(x,y\in\mathcal{L}\), we define [-,-] and \(\circ\) as follows \[[x,y]=\frac{1}{2}(x\cdot y-y\cdot x)\qquad x\circ y=\frac{1}{2}(x\cdot y+y \cdot x).\] **Proposition 1.3**.: _[_7_]_ _Let \((\mathcal{L},\cdot)\) be an algebra. The following assertions are equivalent:_ 1. \((\mathcal{L},\cdot)\) _is a symmetric Leibniz algebra._ 2. _The following conditions hold:_ 1. \((\mathcal{L},[-,-])\) _is a Lie algebra._ 2. _For any_ \(u,v\in\mathcal{L}\)_,_ \(u\circ v\) _belongs to the center of_ \((\mathcal{L},[-,-])\)_._ 3. _For any_ \(u,v\in\mathcal{L}\)_,_ \(([u,v])\circ w=0\) _and_ \((u\circ v)\circ w=0\)_._ According to this proposition, any symmetric Leibniz algebra is given by a Lie algebra \((\mathcal{L},[-,-])\) and a bilinear symmetric form \(\omega:\mathcal{L}\times\mathcal{L}\to Z(\mathcal{L})\) where \(Z(\mathcal{L})\) is the center of the Lie algebra, such that for any \(x,y,z\in\mathcal{L}\) \[\omega([x,y],z)=\omega(\omega(x,y),z)=0. \tag{1.1}\] Then the product of the symmetric Leibniz algebra is given by \[u\cdot v=[u,v]+u\circ v.\] **Proposition 1.4**.: _[_1_]_ _Let \((\mathcal{G},[-,-])\) be a Lie algebra and \(\omega\) and \(\mu\) two solutions of (1.1). 
Then \((\mathcal{G},\cdot_{\omega})\) is isomorphic to \((\mathcal{G},\cdot_{\mu})\) if and only if there exists an automorphism \(A\) of \((\mathcal{G},[-,-])\) such that_ \[\mu(u,v)=A^{(-1)}\omega(Au,Av).\] For an arbitrary symmetric Leibniz algebra \((L,\cdot)\) we define the _derived_ and _central series_ as follows: \[L^{[1]}=L,\ L^{[s+1]}=L^{[s]}\cdot L^{[s]},\quad s\geq 1,\] \[L^{1}=L,\ L^{k+1}=L^{k}\cdot L,\quad k\geq 1.\] **Definition 1.5**.: _An \(n\)-dimensional symmetric Leibniz algebra \(L\) is called solvable (nilpotent) if there exists \(s\in N\) (\(k\in N\)) such that \(L^{[s]}=0\) (\(L^{k}=0\)). Such minimal number is called index of solvability (nilpotency)._ ### Classification of symmetric Leibniz algebras up to dimension four The classification of Leibniz algebras were obtained up to dimension four in the works [2, 11, 12, 19]. From the list of these classification we can obtain the list of symmetric Leibniz algebras in low dimensional. First we give the list of two and three dimensional non-Lie non split symmetric Leibniz algebras. \begin{tabular}{l l l l} \hline \(\lambda_{2}\) & : & \(e_{1}\cdot e_{1}=e_{2}\) & \\ \hline \(\mathcal{N}_{1}\) & : & \(e_{1}\cdot e_{2}=e_{3}\), & \(e_{2}\cdot e_{1}=e_{3}\) & \\ \(\mathcal{N}_{2}^{\alpha}\) & : & \(e_{1}\cdot e_{1}=\alpha e_{3}\), & \(e_{2}\cdot e_{1}=e_{3}\), & \(e_{2}\cdot e_{2}=e_{3}\) \\ \(\mathcal{R}_{1}\) & : & \(e_{1}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{2}=-e_{1}\), & \(e_{2}\cdot e_{2}=e_{3}\) \\ \hline \end{tabular} Note that, the algebras \(\lambda_{2}\), \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}^{\alpha}\) are nilpotent and \(\mathcal{R}_{1}\) is non-nilpotent solvable algebra. In the following table we give the list of 4-dimensional non-Lie solvable symmetric Leibniz algebras. 
\begin{tabular}{c c c c c c} \(S_{1}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{1}\cdot e_{2}=e_{3}\), & \(e_{2}\cdot e_{1}=-e_{3}\), & \(e_{2}\cdot e_{2}=e_{4}\), \\ & & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) & \\ \(S_{2}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{1}\cdot e_{2}=e_{3}\), & \(e_{2}\cdot e_{1}=-e_{3}\) & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) \\ \(S_{3}\) & : & \(e_{1}\cdot e_{2}=e_{3}+e_{4}\), & \(e_{2}\cdot e_{1}=-e_{3}\), & \(e_{2}\cdot e_{2}=e_{4}\), & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) \\ \(S_{4}\) & : & \(e_{1}\cdot e_{2}=e_{3}\), & \(e_{2}\cdot e_{1}=-e_{3}\), & \(e_{2}\cdot e_{2}=e_{4}\), & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) \\ \(S_{5}\) & : & \(e_{1}\cdot e_{2}=e_{3}\), & \(e_{2}\cdot e_{1}=-e_{3}\), & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) & \\ \hline \(L_{1}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{1}\cdot e_{2}=-e_{2}\), & \(e_{2}\cdot e_{1}=e_{2}\) & \\ & & \(e_{1}\cdot e_{3}=e_{3}\), & \(e_{3}\cdot e_{1}=-e_{3}\), & \(e_{2}\cdot e_{3}=e_{4}\), & \(e_{3}\cdot e_{2}=-e_{4}\) & \\ \(L_{2}^{\alpha}\) & : & \(e_{1}\cdot e_{1}=\alpha e_{4}\), & \(e_{1}\cdot e_{2}=e_{4}\), & \(e_{2}\cdot e_{2}=e_{4}\), & \(e_{1}\cdot e_{3}=-e_{3}\), & \(e_{3}\cdot e_{1}=e_{3}\) \\ \(L_{3}\) & : & \(e_{1}\cdot e_{1}=2e_{4}\), & \(e_{2}\cdot e_{2}=e_{4}\), & \(e_{1}\cdot e_{3}=-e_{3}\), & \(e_{3}\cdot e_{1}=e_{3}\) \\ \(L_{4}^{\alpha}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{1}\cdot e_{2}=-e_{2}\), & \(e_{2}\cdot e_{1}=e_{2}\), & \(e_{1}\cdot e_{3}=-xe_{3}\), & \(e_{3}\cdot e_{1}=e_{3}\) \\ \(L_{5}^{\alpha}\) & : & \(e_{1}\cdot e_{2}=-e_{2}\), & \(e_{2}\cdot e_{1}=e_{2}\), & \(e_{1}\cdot e_{3}=(\alpha-1)e_{4}\), & \(e_{3}\cdot e_{1}=(\alpha+1)e_{4}\), \\ \(L_{6}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{2}\cdot e_{1}=e_{2}+e_{3}\), & \(e_{1}\cdot e_{2}=-e_{2}-e_{3}\), & \(e_{1}\cdot e_{3}=-e_{3}\), & \(e_{3}\cdot e_{1}=e_{3}\) \\ \(L_{7}\) & : & \(e_{1}\cdot e_{1}=e_{4}\), & \(e_{1}\cdot e_{2}=-e_{2}\), & \(e_{2}\cdot e_{1}=e_{2}\), & \(e_{1}\cdot e_{3}=-e_{4}\), & \(e_{3}\cdot e_{1}=e_{4}\) \\ \hline \end{tabular} The algebras \(S_{i}\) are nilpotent and \(L_{j}\) are non-nilpotent solvable. It should be noted that in [4] (Theorem B) the list of 4-dimensional non-nilpotent, non-Lie symmetric Leibniz algebras are given. The algebras \(\mathcal{L}_{35}^{\alpha\neq-1}\) and \(\mathcal{L}_{27}\) in [4] are written as \(L_{5}^{\alpha}\) in our list and since the algebra \(\mathcal{L}_{14}\) is split we omit it here. ## 2 Classification of five-dimensional solvable symmetric Leibniz algebras Now we give the classification of five-dimensional solvable symmetric Leibniz algebras. Since the list of 5-dimensional nilpotent symmetric Leibniz algebras is given in [4], we reduce our attention to the classification of 5-dimensional non-nilpotent solvable symmetric Leibniz algebras. For this purpose we use the fact that any symmetric Leibniz algebras can be constructed by Lie algebras with non-zero center. In this section we use the term just solvable symmetric Leibniz algebra, instead of non-nilpotent, non-Lie solvable symmetric Leibniz algebra. ### Five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is non-split First we give the description of all five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is non-split. 
Below, we give the list of non-split 5-dimensional complex solvable Lie algebras with non zero center [27]: \begin{tabular}{l l l l l l l} \(\mathcal{S}_{5.1}\): & \([e_{2},e_{5}]=e_{1}\), & \([e_{3},e_{5}]=e_{2}\), & \([e_{4},e_{5}]=e_{4}\). \\ \(\mathcal{S}_{5.2}\): & \([e_{2},e_{5}]=e_{1}\), & \([e_{3},e_{5}]=e_{3}\), & \([e_{4},e_{5}]=e_{3}+e_{4}\). \\ \(\mathcal{S}_{5.3}\): & \([e_{2},e_{5}]=e_{1}\), & \([e_{3},e_{5}]=e_{3}\), & \([e_{4},e_{5}]=\lambda e_{4}\). \\ \(\mathcal{S}_{5.14}\): & \([e_{3},e_{2}]=e_{1}\), & \([e_{3},e_{5}]=e_{2}\), & \([e_{4},e_{5}]=e_{4}\). \\ \(\mathcal{S}_{5.15}\): & \([e_{3},e_{2}]=e_{1}\), & \([e_{2},e_{5}]=e_{2}\), & \([e_{3},e_{5}]=-e_{3}\), & \([e_{4},e_{5}]=e_{1}\). \\ \(\mathcal{S}_{5.17}\): & \([e_{3},e_{2}]=e_{1}\), & \([e_{2},e_{5}]=e_{2}\), & \([e_{3},e_{5}]=-e_{3}\), & \([e_{4},e_{5}]=\lambda e_{4}\). \\ \(\mathcal{S}_{5.18}\): & \([e_{3},e_{2}]=e_{1}\), & \([e_{2},e_{5}]=-e_{2}\), & \([e_{3},e_{5}]=e_{3}+e_{4}\), & \([e_{4},e_{5}]=e_{4}\). \\ \(\mathcal{S}_{5.20}\): & \([e_{3},e_{2}]=e_{1}\), & \([e_{1},e_{5}]=e_{1}\), & \([e_{2},e_{5}]=e_{2}\), & \([e_{3},e_{5}]=e_{4}\). \\ \end{tabular} \[\begin{array}{ \begin{tabular}{l l l l l} & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{3}=-e_{4}\), \\ & & \(e_{5}\cdot e_{5}=e_{4}\) & & \\ \hline \(\mathcal{L}_{19}\) & : & \(e_{3}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{3}\cdot e_{5}=2e_{4}\), \\ & & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\) & \\ \hline \(\mathcal{L}_{20}\) & : & \(e_{3}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{3}\cdot e_{5}=2e_{4}\), \\ & & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{5}=e_{4}\) \\ \hline \(\mathcal{L}_{21}\) & : & \(e_{3}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{3}\cdot e_{5}=e_{4}\), \\ & & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{3}=-e_{4}\), \\ & & \(e_{3}\cdot e_{3}=e_{4}\) & & \\ \hline \(\mathcal{L}_{22}\) & : & \(e_{3}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{3}\cdot e_{5}=e_{4}\), \\ & & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{3}=-e_{4}\), \\ & & \(e_{3}\cdot e_{3}=e_{4}\) & & \\ \hline \(\mathcal{L}_{23}\) & : & \(e_{3}\cdot e_{2}=e_{1}\), & \(e_{1}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{3}\cdot e_{5}=2e_{4}\), \\ & & \(e_{2}\cdot e_{3}=-e_{1}\), & \(e_{5}\cdot e_{1}=-e_{1}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{5}=\alpha e_{4}\), \\ & & \(e_{3}\cdot e_{3}=e_{4}\) & & \\ \hline \(\mathcal{L}_{24}\) & : & \(e_{4}\cdot e_{2}=e_{1}\), & \(e_{2}\cdot e_{4}=-e_{1}\), & \(e_{4}\cdot e_{3}=e_{2}\), & \(e_{3}\cdot e_{4}=-e_{2}\), \\ & & \(e_{2}\cdot e_{5}=e_{2}\), & \(e_{5}\cdot e_{2}=-e_{2}\), & \(e_{3}\cdot e_{5}=-2e_{3}\), & \(e_{5}\cdot e_{3}=2e_{3}\), \\ & & \(e_{4}\cdot e_{5}=e_{4}\), & \(e_{5}\cdot e_{4}=-e_{4}\), & \(e_{5}\cdot e_{5}=e_{1}\) & \\ \hline \(\mathcal{L}_{25}^{\alpha,\beta,7}\) & : & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{5}=e_{3}\), & \(e_{4}\cdot e_{5}=(\beta+1)e_{1}\), & \(e_{4}\cdot e_{4}=\alpha e_{1}\), \\ & & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{5}\cdot e_{3}=-e_{3}\), & \(e_{5}\cdot e_{4}=(\beta-1)e_{1}\), & 
\(e_{5}\cdot e_{5}=\gamma e_{1}\) \\ \hline \end{tabular} Proof.: **Case 1.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.1}:\ \ [e_{2},e_{5}]=e_{1},\ \ \ [e_{3},e_{5}]=e_{2},\ \ \ [e_{4},e_{5}]=e_{4}.\] Since \(Z(\mathcal{S}_{5.1})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.1}\times\mathcal{S}_{5.1}\to Z(\mathcal{S}_{5.1})\) satisfying the equation (1.1) is \[\omega(e_{3},e_{3})=\alpha e_{1},\ \ \ \omega(e_{3},e_{5})=\beta e_{1},\ \ \ \omega(e_{5},e_{5})=\gamma e_{1},\ \ \ (\alpha,\beta,\gamma)\neq(0,0,0).\] Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{llll}\mathcal{L}_{\omega}&:&e_{2}\cdot e_{5}=e_{1},&\ \ \ \ e_{3}\cdot e_{5}=e_{2}+\beta e_{1},&\ \ \ \ e_{4}\cdot e_{5}=e_{4},&\ \ \ \ e_{3}\cdot e_{3}=\alpha e_{1},\\ &e_{5}\cdot e_{2}=-e_{1},&\ \ \ e_{5}\cdot e_{3}=-e_{2}+\beta e_{1},&\ \ \ e_{5}\cdot e_{4}=-e_{4},&\ \ \ e_{5}\cdot e_{5}=\gamma e_{1}.\end{array}\] By Proposition 1.4, we have that, two symmetric Leibniz algebras \(\mathcal{L}_{\omega}\) and \(\mathcal{L}_{\mu}\) of this class are isomorphic if and only if there exists an automorphism \(T\) of the Lie algebra \(\mathcal{S}_{5.1}\), such that \(\mu(u,v)=T^{-1}\omega(Tu,Tv)\). Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{5.1}\) is \[T=\left(\begin{array}{cccc}a_{1}&a_{2}&a_{3}&0&a_{4}\\ 0&a_{1}&a_{2}&0&a_{5}\\ 0&0&a_{1}&0&a_{6}\\ 0&0&0&a_{7}&a_{8}\\ 0&0&0&0&1\end{array}\right),\] we have the restriction \[\mu(e_{3},e_{3})=\alpha a_{1}e_{1},\quad\mu(e_{3},e_{5})=(\alpha a_{6}+\beta)e_{1 },\quad\mu(e_{5},e_{5})=\frac{\alpha a_{6}^{2}+2\beta a_{6}+\gamma}{a_{1}}e_{1}.\] Now we consider following subcases: * Let \(\alpha\neq 0\), then choosing \(a_{1}=\frac{1}{\alpha}\), \(a_{6}=-\frac{\beta}{\alpha}\), we get that \(\mu(e_{3},e_{3})=e_{1}\), \(\mu(e_{3},e_{5})=0\), \(\mu(e_{5},e_{5})=(\alpha\gamma-\beta^{2})e_{1}\) and obtain the algebra \(\mathcal{L}_{1}^{\alpha}\). * Let \(\alpha=0\), then we consider following subcases: * If \(\beta\neq 0\), then choosing \(a_{6}=-\frac{\gamma}{2\beta}\), we get that \(\mu(e_{3},e_{3})=0\), \(\mu(e_{3},e_{5})=\beta e_{1}\), \(\mu(e_{5},e_{5})=0\) and obtain the algebra \(\mathcal{L}_{2}^{\alpha\neq 0}\). * If \(\beta=0\), then \(\gamma\neq 0\) and choosing \(a_{1}=\gamma\), we get that \(\mu(e_{3},e_{3})=0\), \(\mu(e_{3},e_{5})=0\), \(\mu(e_{5},e_{5})=e_{1}\) and obtain the algebra \(\mathcal{L}_{3}\). 
**Case 2.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.2}:\ \ [e_{2},e_{5}]=e_{1},\ \ [e_{3},e_{5}]=e_{3},\ \ [e_{4},e_{5}]=e_{3}+e_{4}.\] Since \(Z(\mathcal{S}_{5.2})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.2}\times\mathcal{S}_{5.2}\to Z(\mathcal{S}_{5.2})\) satisfying the equation (1.1) is \[\omega(e_{2},e_{2})=\alpha e_{1},\quad\omega(e_{2},e_{5})=\beta e_{1},\quad \omega(e_{5},e_{5})=\gamma e_{1},\quad(\alpha,\beta,\gamma)\neq(0,0,0).\] Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{lll}\mathcal{L}_{\omega}&:&e_{2}\cdot e_{5}=(\beta+1)e_{1},&e _{5}\cdot e_{2}=(\beta-1)e_{1},&e_{3}\cdot e_{5}=e_{3},&e_{5}\cdot e_{3}=-e_{3 },\\ &&e_{4}\cdot e_{5}=e_{3}+e_{4},&e_{5}\cdot e_{4}=-e_{3}-e_{4},&e_{2}\cdot e_{2 }=\alpha e_{1},&e_{5}\cdot e_{5}=\gamma e_{1}.\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{5.2}\) is \[T=\left(\begin{array}{cccc}a_{1}&a_{2}&0&0&a_{3}\\ 0&a_{1}&0&0&a_{4}\\ 0&0&a_{5}&a_{6}&a_{7}\\ 0&0&0&a_{5}&a_{8}\\ 0&0&0&0&1\end{array}\right),\] we have the restriction \[\mu(e_{2},e_{2})=\alpha a_{1}e_{1},\quad\mu(e_{2},e_{5})=(\alpha a_{4}+\beta)e _{1},\quad\mu(e_{5},e_{5})=\frac{\alpha a_{4}^{2}+2\beta a_{4}+\gamma}{a_{1}} e_{1}.\] Now we consider following cases: * Let \(\alpha\neq 0\), then choosing \(a_{1}=\frac{1}{\alpha}\), \(a_{4}=-\frac{\beta}{\alpha}\), we get that \(\mu(e_{2},e_{2})=e_{1}\), \(\mu(e_{2},e_{5})=0\), \(\mu(e_{5},e_{5})=(\alpha\gamma-\beta^{2})e_{1}\) and obtain the algebra \(\mathcal{L}_{4}^{\alpha}\). * Let \(\alpha=0\). * If \(\beta\neq 0\), then choosing \(a_{4}=-\frac{\gamma}{2\beta}\), we get that \(\mu(e_{2},e_{2})=0\), \(\mu(e_{2},e_{5})=\beta e_{1}\), \(\mu(e_{5},e_{5})=0\) and obtain the algebra \(\mathcal{L}_{5}^{\alpha\neq 0}\). * If \(\beta=0\), then \(\gamma\neq 0\) and choosing \(a_{1}=\gamma\), we get that \(\mu(e_{2},e_{2})=0\), \(\mu(e_{2},e_{5})=0\), \(\mu(e_{5},e_{5})=e_{1}\). Thus we have the algebra \(\mathcal{L}_{6}\). 
**Case 3.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.3}:\ \ [e_{2},e_{5}]=e_{1},\ \ \ [e_{3},e_{5}]=e_{3},\ \ [e_{4},e_{5}]= \lambda e_{4}.\] Then \(Z(\mathcal{S}_{5.3})=\{e_{1}\}\) and symmetric bilinear form \(\omega:\mathcal{S}_{5.3}\times\mathcal{S}_{5.3}\to Z(\mathcal{S}_{5.3})\) satisfying the equation (1.1) is \[\omega(e_{2},e_{2})=\alpha e_{1},\ \ \ \omega(e_{2},e_{5})=\beta e_{1},\ \ \ \omega(e_{5},e_{5})=\gamma e_{1},\ \ \ (\alpha,\beta,\gamma)\neq(0,0,0).\] Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{lll}\mathcal{L}_{\omega}&:&e_{2}\cdot e_{5}=(\beta+1)e_{1},&e _{5}\cdot e_{2}=(\beta-1)e_{1},&e_{3}\cdot e_{5}=e_{3},&e_{5}\cdot e_{3}=-e_{3 },\\ &&e_{4}\cdot e_{5}=\lambda e_{4},&e_{5}\cdot e_{4}=-\lambda e_{4},&e_{2}\cdot e _{2}=\alpha e_{1},&e_{5}\cdot e_{5}=\gamma e_{1}\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{5.3}\) is \[T=\left(\begin{array}{cccc}a_{1}&a_{2}&0&0&a_{3}\\ 0&a_{1}&0&0&a_{4}\\ 0&0&a_{5}&0&a_{6}\\ 0&0&0&a_{5}&a_{7}\\ 0&0&0&0&1\end{array}\right),\ \mbox{for}\ \lambda\neq 0,\ \ \ T=\left(\begin{array}{cccc}a_{1}&a_{2}&0&0&a_{3}\\ 0&a_{1}&0&0&a_{4}\\ 0&0&a_{5}&b_{1}&a_{6}\\ 0&0&b_{2}&a_{5}&a_{7}\\ 0&0&0&0&1\end{array}\right)\ \mbox{for}\ \lambda=0,\] we have the restriction \[\mu(e_{2},e_{2})=\alpha a_{1}e_{1},\ \ \ \mu(e_{2},e_{5})=(\alpha a_{4}+\beta)e_{1},\ \ \ \mu(e_{5},e_{5})=\frac{\alpha a_{4}^{2}+2\beta a_{4}+\gamma}{a_{1}}e_{1}.\] Now we consider following cases: * Let \(\alpha\neq 0\), then choosing \(a_{1}=\frac{1}{\alpha}\), \(a_{4}=-\frac{\beta}{\alpha}\), we get that \(\mu(e_{2},e_{2})=e_{1}\), \(\mu(e_{2},e_{5})=0\), \(\mu(e_{5},e_{5})=(\alpha\gamma-\beta^{2})e_{1}\) and obtain the algebra \(\mathcal{L}_{7}^{\alpha,\beta}\). * Let \(\alpha=0\). * If \(\beta\neq 0\), then choosing \(a_{4}=-\frac{\gamma}{2\beta}\), we get that \(\mu(e_{2},e_{2})=0\), \(\mu(e_{2},e_{5})=\beta e_{1}\), \(\mu(e_{5},e_{5})=0\) and obtain the algebra \(\mathcal{L}_{8}^{\alpha,\beta\neq 0}\). * If \(\beta=0\), then \(\gamma\neq 0\) and choosing \(a_{1}=\gamma\), we get that \(\mu(e_{2},e_{2})=0\), \(\mu(e_{2},e_{5})=0\), \(\mu(e_{5},e_{5})=e_{1}\) and obtain the algebra \(\mathcal{L}_{9}^{\alpha}\). 
**Case 4.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.14}:\ \ [e_{3},e_{2}]=e_{1},\ \ \ [e_{3},e_{5}]=e_{2},\ \ [e_{4},e_{5}]=e_{4}.\] Then \(Z(\mathcal{S}_{5.14})=\{e_{1}\}\) and \[\omega(e_{3},e_{3})=\alpha e_{1},\quad\omega(e_{3},e_{5})=\beta e_{1},\quad \omega(e_{5},e_{5})=\gamma e_{1},\quad(\alpha,\beta,\gamma)\neq(0,0,0).\] Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{lll}\mathcal{L}_{\omega}&:&e_{3}\cdot e_{2}=e_{1},&e_{3}\cdot e _{5}=\beta e_{1}+e_{2},&e_{4}\cdot e_{5}=e_{4},&e_{3}\cdot e_{3}=\alpha e_{1}, \\ &&e_{2}\cdot e_{3}=-e_{1},&e_{5}\cdot e_{3}=\beta e_{1}-e_{2},&e_{5}\cdot e_{4}= -e_{4},&e_{5}\cdot e_{5}=\gamma e_{1},\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{5.14}\) is \[T=\left(\begin{array}{cccc}a_{1}^{2}&a_{1}a_{5}&a_{2}&0&a_{3}\\ 0&a_{1}&a_{4}&0&a_{5}\\ 0&0&a_{1}&0&0\\ 0&0&0&a_{6}&a_{7}\\ 0&0&0&0&1\end{array}\right),\] we have the restriction \[\mu(e_{3},e_{3})=\alpha e_{1},\quad\mu(e_{3},e_{5})=\frac{\beta}{a_{1}}e_{1}, \quad\mu(e_{5},e_{5})=\frac{\gamma}{a_{1}^{2}}e_{1}.\] Now we consider following cases: * If \(\beta=0,\gamma=0\), then we have the algebra \(\mathcal{L}_{10}^{\alpha}\). * If \(\beta=0,\gamma\neq 0\), then have the algebra \(\mathcal{L}_{11}^{\alpha}\). * If \(\beta\neq 0\), then choosing \(a_{1}=\beta\), we obtain the algebra \(\mathcal{L}_{12}^{\alpha,\beta}\). **Case 5.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.15}:\ \ [e_{3},e_{2}]=e_{1},\ \ [e_{2},e_{5}]=e_{2},\ \ [e_{3},e_{5}]=-e_{3},\ \ [e_{4},e_{5}]=e_{1}.\] Then \(Z(\mathcal{S}_{5.15})=\{e_{1}\}\) and \[\omega(e_{4},e_{4})=\alpha e_{1},\quad\omega(e_{4},e_{5})=\beta e_{1},\quad \omega(e_{5},e_{5})=\gamma e_{1},\] where \((\alpha,\beta,\gamma)\neq(0,0,0)\). Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{lll}\mathcal{L}_{\omega}&:&e_{3}\cdot e_{2}=e_{1},&e_{2}\cdot e _{5}=e_{2},&e_{3}\cdot e_{5}=-e_{3},&e_{4}\cdot e_{5}=(\beta+1)e_{1},&e_{4} \cdot e_{4}=\alpha e_{1},\\ &&e_{2}\cdot e_{3}=-e_{1},&e_{5}\cdot e_{2}=-e_{2},&e_{5}\cdot e_{3}=e_{3},&e_ {5}\cdot e_{4}=(\beta-1)e_{1},&e_{5}\cdot e_{5}=\gamma e_{1},\end{array}\] By Proposition 1.4, we have that, two symmetric Leibniz algebras \(\mathcal{L}_{\omega}\) and \(\mathcal{L}_{\mu}\) of this class are isomorphic if and only if there exists an automorphism \(T\) of the Lie algebra \(\mathcal{S}_{5.15}\), such that \(\mu(u,v)=T^{-1}\omega(Tu,Tv)\). Since the matrix form of the group of automorphisms of the algebra is \[T=\left(\begin{array}{cccc}a_{3}a_{5}&-a_{3}a_{6}&-a_{4}a_{5}&a_{1}&a_{2}\\ 0&a_{3}&0&0&a_{4}\\ 0&0&a_{5}&0&a_{6}\\ 0&0&0&a_{3}a_{5}&a_{7}\\ 0&0&0&0&1\end{array}\right),\] we have the restriction \[\mu(e_{4},e_{4})=a_{3}a_{5}ae_{1},\quad\mu(e_{4},e_{5})=(a_{7}\alpha+\beta)e_{1 },\quad\mu(e_{5},e_{5})=\frac{a_{7}^{2}\alpha+2a_{7}\beta+\gamma}{a_{3}a_{5}}e _{1}.\] Now we consider following cases: * If \(\alpha\neq 0\), then choosing \(a_{3}=\frac{1}{a_{3}}\), \(a_{7}=-\frac{\beta}{\alpha}\), we have the algebra \(\mathcal{L}_{13}^{\alpha}\). * If \(\alpha=0\) and \(\beta\neq 0\), then choosing \(a_{7}=-\frac{\gamma}{2\beta}\), we obtain the algebra \(\mathcal{L}_{14}^{\alpha\neq 0}\). * If \(\alpha=0\) and \(\beta=0\), then \(\gamma\neq 0\) and choosing \(a_{7}=\gamma\), we obtain the algebra \(\mathcal{L}_{15}\). 
**Case 6.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.17}:\ \ [e_{3},e_{2}]=e_{1},\ \ [e_{2},e_{5}]=e_{2},\ \ [e_{3},e_{5}]=-e_{3},\ \ [e_{4},e_{5}]= \lambda e_{4}.\] Since \(Z(\mathcal{S}_{5.17})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.17}\times\mathcal{S}_{5.17}\to Z(\mathcal{S}_{5.17})\) satisfying the equation (1.1) is \[\omega(e_{5},e_{5})=\alpha e_{1},\] where \(\alpha\neq 0\). Hence, in this case we obtain the algebra \(\mathcal{L}_{16}^{\alpha\neq 0}\). **Case 7.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.18}:\ \ [e_{3},e_{2}]=e_{1},\ \ [e_{2},e_{5}]=-e_{2},\ \ [e_{3},e_{5}]=e_{3}+e_{4},\ \ [e_{4},e_{5}]=e_{4}.\] Since \(Z(\mathcal{S}_{5.18})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.18}\times\mathcal{S}_{5.18}\to Z(\mathcal{S}_{5.18})\) satisfying the equation (1.1) is \[\omega(e_{5},e_{5})=\alpha e_{1}\] where \(\alpha\neq 0\). In this case we obtain the algebra \(\mathcal{L}_{17}^{\alpha}\). **Case 8.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.20}:\ \ [e_{3},e_{2}]=e_{1},\ \ [e_{1},e_{5}]=e_{1},\ \ [e_{2},e_{5}]=e_{2},\ \ [e_{3},e_{5}]=e_{4}.\] Since \(Z(\mathcal{S}_{5.20})=\{e_{4}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.20}\times\mathcal{S}_{5.20}\to Z(\mathcal{S}_{5.20})\) satisfying the equation (1.1) is \[\omega(e_{3},e_{3})=\alpha e_{4},\quad\omega(e_{3},e_{5})=\beta e_{4},\quad \omega(e_{5},e_{5})=\gamma e_{4},\] where \((\alpha,\beta,\gamma)\neq(0,0,0)\). Thus, we have the following class of symmetric Leibniz algebra \[\begin{array}{cccccccc}\mathcal{L}_{\omega}&:&e_{3}\cdot e_{2}=e_{1},&e_{1} \cdot e_{5}=e_{1},&e_{2}\cdot e_{5}=e_{2},&e_{3}\cdot e_{5}=(\beta+1)e_{4},&e_{3 }\cdot e_{3}=\alpha e_{4},\\ &e_{2}\cdot e_{3}=-e_{1},&e_{5}\cdot e_{1}=-e_{1},&e_{5}\cdot e_{2}=-e_{2},&e_{ 5}\cdot e_{3}=(\beta-1)e_{4},&e_{5}\cdot e_{5}=\gamma e_{4},\end{array}\] Since matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{5.20}\) is \[T=\left(\begin{array}{cccc}a_{3}a_{5}&a_{1}&-a_{4}a_{5}&0&a_{2}\\ 0&a_{3}&0&0&a_{4}\\ 0&0&a_{5}&0&0\\ 0&0&a_{6}&a_{7}&a_{8}\\ 0&0&0&0&1\end{array}\right),\] we have the restriction \[\mu(e_{3},e_{3})=\frac{\alpha a_{5}^{2}}{a_{7}}e_{4},\quad\mu(e_{3},e_{5})= \frac{\beta a_{5}}{a_{7}}e_{4},\quad\mu(e_{5},e_{5})=\frac{\gamma}{a_{7}}e_{4}.\] Now we consider following cases: * Let \(\alpha=\beta=0\) and \(\gamma\neq 0\), then choosing \(a_{7}=\gamma\), we have the algebra \(\mathcal{L}_{18}\). * Let \(\alpha=0\), \(\beta\neq 0\) and \(\gamma=0\), then choosing \(a_{5}=\frac{\alpha}{\beta}\), we obtain the algebra \(\mathcal{L}_{19}\). * Let \(\alpha=0\), \(\beta\neq 0\) and \(\gamma\neq 0\), then choosing \(a_{5}=\frac{\gamma}{\beta}\), \(a_{7}=\gamma\), we obtain the algebra \(\mathcal{L}_{20}\). * Let \(\alpha\neq 0\), \(\beta=0\) and \(\gamma=0\), then choosing \(a_{7}=\alpha a_{5}^{2}\), we obtain the algebra \(\mathcal{L}_{21}\). * Let \(\alpha\neq 0\), \(\beta=0\) and \(\gamma\neq 0\), then choosing \(a_{7}=\gamma\), \(a_{5}=\sqrt{\frac{\gamma}{\alpha}}\), we obtain the algebra \(\mathcal{L}_{22}\). 
* Let \(\alpha\neq 0\), \(\beta\neq 0\), then choosing \(a_{5}=\frac{\beta}{\alpha}\), \(a_{7}=\frac{\beta^{2}}{\alpha}\), we obtain the algebra \(\mathcal{L}_{23}^{\alpha}\). **Case 9.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.33}:\ \ [e_{4},e_{2}]=e_{1},\ \ [e_{4},e_{3}]=e_{2},\ \ [e_{2},e_{5}]=-e_{2},\ \ [e_{3},e_{5}]=-2e_{3},\ \ [e_{4},e_{5}]=e_{4}.\] Since \(Z(\mathcal{S}_{5.33})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.33}\times\mathcal{S}_{5.33}\to Z(\mathcal{S}_{5.33})\) satisfying the equation (1.1) is \[\omega(e_{5},e_{5})=\alpha e_{1},\] where \(\alpha\neq 0\). Hence, in this case we obtain the algebra \(\mathcal{L}_{24}\). **Case 10.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{5.39}:\ \ [e_{2},e_{4}]=e_{2},\ \ [e_{3},e_{5}]=e_{3},\ \ [e_{4},e_{5}]=e_{1}.\] Since \(Z(\mathcal{S}_{5.39})=\{e_{1}\}\), then by straightforward computations, we get that the corresponding symmetric bilinear form \(\omega:\mathcal{S}_{5.39}\times\mathcal{S}_{5.39}\to Z(\mathcal{S}_{5.39})\) satisfying the equation (1.1) is \[\omega(e_{4},e_{4})=\alpha e_{1},\quad\omega(e_{4},e_{5})=\beta e_{1},\quad \omega(e_{5},e_{5})=\gamma e_{1},\] where \((\alpha,\beta,\gamma)\neq(0,0,0)\). The group of automorphisms of the algebra \(\mathcal{S}_{5.39}\) is \[T=\left(\begin{array}{ccccc}1&0&0&a_{1}&a_{2}\\ 0&a_{3}&0&a_{4}&a_{5}\\ 0&0&a_{6}&a_{7}&a_{8}\\ 0&0&0&1&0\\ 0&0&0&0&1\end{array}\right)\quad\text{or}\quad T=\left(\begin{array}{ccccc}- 1&0&0&a_{1}&a_{2}\\ 0&0&a_{3}&a_{4}&a_{5}\\ 0&a_{6}&0&a_{7}&a_{8}\\ 0&0&0&0&1\\ 0&0&0&1&0\end{array}\right),\] and we have the three-parametric family \(\mathcal{L}_{25}^{\alpha,\beta,\gamma}\), where \(\mathcal{L}_{25}^{\alpha,\beta,\gamma}\simeq\mathcal{L}_{25}^{-\gamma,-\beta,- \alpha}\). Classification of five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is \(\mathcal{S}_{4}\oplus\mathbb{C}\) In this subsection we give the classification of five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is a direct sum of four-dimensional non-split algebra and one-dimensional abelian ideal, i.e. \(\mathcal{S}_{4}\oplus\mathbb{C}\). 
For this purpose, we give the list of complex 4-dimensional non-split solvable Lie algebras [27]: \[\begin{array}{llllll}\mathcal{S}_{4.1}&:&[e_{2},e_{4}]=e_{1},&[e_{3},e_{4} ]=e_{3}\\ \mathcal{S}_{4.2}&:&[e_{1},e_{4}]=e_{1},&[e_{2},e_{4}]=e_{1}+e_{2},&[e_{3},e_ {4}]=e_{2}+e_{3}\\ \mathcal{S}_{4.3}&:&[e_{1},e_{4}]=e_{1},&[e_{2},e_{4}]=ae_{2},&[e_{3},e_{4}]= be_{3},&0<|b|\leq|a|\leq 1\\ \mathcal{S}_{4.4}&:&[e_{1},e_{4}]=e_{1},&[e_{2},e_{4}]=e_{1}+e_{2},&[e_{3},e_ {4}]=ae_{3},&a\neq 0\\ \mathcal{S}_{4.6}&:&[e_{2},e_{3}]=e_{1},&[e_{2},e_{4}]=e_{2},&[e_{3},e_{4}]=- e_{3}\\ \mathcal{S}_{4.8}&:&[e_{2},e_{3}]=e_{1},&[e_{1},e_{4}]=(1+a)e_{1},&[e_{2},e_{4}]= e_{2},&[e_{3},e_{4}]=ae_{3},&0<|a|\leq 1\\ \mathcal{S}_{4.10}&:&[e_{2},e_{3}]=e_{1},&[e_{1},e_{4}]=2e_{1},&[e_{2},e_{4}]= e_{2},&[e_{3},e_{4}]=e_{2}+e_{3}\\ \mathcal{S}_{4.11}&:&[e_{2},e_{3}]=e_{1},&[e_{1},e_{4}]=e_{1},&[e_{2},e_{4}]= e_{2}\\ \mathcal{S}_{4.12}&:&[e_{1},e_{3}]=e_{1},&[e_{2},e_{4}]=e_{2}.&\end{array}\] **Theorem 2.2**.: _Let \(L\) be a complex five-dimensional solvable symmetric Leibniz algebra, whose underline Lie algebra is \(\mathcal{S}_{4}\oplus\mathbb{C}\), then it is isomorphic to one of the following pairwise non-isomorphic algebras_ \[\begin{array}{llllll}\mathcal{L}_{26}^{\alpha,\beta}&:&e_{3}\cdot e_{4}=e_{ 3},&e_{2}\cdot e_{4}=(\alpha+1)e_{1},&e_{2}\cdot e_{2}=e_{5},\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=(\alpha-1)e_{1},&e_{4}\cdot e_{4}= \beta e_{1}+e_{5}\\ \hline\mathcal{L}_{27}^{\alpha}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=( \alpha+1)e_{1},&e_{2}\cdot e_{2}=e_{5},\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=(\alpha-1)e_{1},&e_{4}\cdot e_{4}= e_{1}\\ \hline\mathcal{L}_{28}^{\alpha}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=( \alpha+1)e_{1},&e_{2}\cdot e_{2}=e_{5},\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=(\alpha-1)e_{1}\\ \hline\mathcal{L}_{29}^{\alpha}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=e_{1} +e_{5},&e_{2}\cdot e_{2}=e_{1},\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=-e_{1}+e_{5},&e_{4}\cdot e_{4}= \alpha e_{1}\\ \hline\mathcal{L}_{30}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=e_{1}+e_{5},& e_{4}\cdot e_{4}=e_{1},\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=-e_{1}+e_{5}\\ \hline\mathcal{L}_{31}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=e_{1}+e_{5}, &e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=-e_{1}+e_{5}\\ &&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{2}=-e_{1}+e_{5}\\ \hline\mathcal{L}_{32}&:&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{4}=e_{1},&e_{2} \cdot e_{2}=e_{1},\end{array}\] \begin{tabular}{l l l l l} & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{4}=e_{5}\) \\ \hline \(\mathcal{L}_{33}^{a}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=(\alpha+1)e_{1}\), & \(e_{4}\cdot e_{4}=e_{5}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=(\alpha-1)e_{1}\) & \\ \hline \(\mathcal{L}_{34}^{a}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{2}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{4}=\alpha e_{1}\) \\ \hline \(\mathcal{L}_{35}^{a}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=(\alpha+1)e_{1}\), & \(e_{5}\cdot e_{5}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=(\alpha-1)e_{1}\) & \\ \hline \(\mathcal{L}_{36}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{5}\cdot e_{5}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{4}=e_{1}\) \\ \hline 
\(\mathcal{L}_{37}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{5}\cdot e_{2}=e_{1}\) \\ \hline \(\mathcal{L}_{38}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{5}=e_{1}\), & \(e_{4}\cdot e_{4}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{5}\cdot e_{2}=e_{1}\) \\ \hline \(\mathcal{L}_{39}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{4}\cdot e_{5}=e_{1}\), & \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{5}\cdot e_{4}=e_{1}\) & \\ \hline \(\mathcal{L}_{40}\) & : & \(e_{3}\cdot e_{4}=e_{3}\), & \(e_{2}\cdot e_{4}=e_{1}\), & \(e_{4}\cdot e_{5}=e_{1}\), & \(e_{2}\cdot e_{2}=e_{1}\), \\ & & \(e_{4}\cdot e_{3}=-e_{3}\), & \(e_{4}\cdot e_{2}=-e_{1}\), & \(e_{5}\cdot e_{4}=e_{1}\) & \\ \hline \(\mathcal{L}_{41}\) & : & \(e_{1}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{1}+e_{2}\), & \(e_{3}\cdot e_{4}=e_{2}+e_{3}\), & \\ & & \(e_{4}\cdot e_{1}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{1}-e_{2}\), & \(e_{4}\cdot e_{3}=-e_{2}-e_{3}\), & \(e_{4}\cdot e_{4}=e_{5}\) \\ \hline \(\mathcal{L}_{42}^{a,b}\) & : & \(e_{1}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{4}=ae_{2}\), & \(e_{3}\cdot e_{4}=be_{3}\), & \\ & & \(e_{4}\cdot e_{1}=-e_{1}\), & \(e_{4}\cdot e_{2}=-ae_{2}\), & \(e_{4}\cdot e_{3}=-be_{3}\), & \(e_{4}\cdot e_{4}=e_{5}\) \\ \hline \(\mathcal{L}_{43}^{a\neq 0}\) & : & \(e_{1}\cdot e_{4}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{1}+e_{2}\), & \(e_{3}\cdot e_{4}=\alpha e_{3}\), & \\ & & \(e_{4}\cdot e_{1}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{1}-e_{2}\), & \(e_{4}\cdot e_{3}=-ae_{3}\), & \(e_{4}\cdot e_{4}=e_{5}\) \\ \hline \(\mathcal{L}_{44}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{4}=-e_{3}\), & \(e_{4}\cdot e_{4}=e_{5}\), \\ & & \(e_{3}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{4}\cdot e_{3}=e_{3}\) & \\ \hline \(\mathcal{L}_{45}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{4}=-e_{3}\), & \\ & & \(e_{3}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{4}\cdot e_{3}=e_{3}\), & \(e_{5}\cdot e_{5}=e_{1}\) \\ \hline \(\mathcal{L}_{46}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{4}=-e_{3}\), & \(e_{4}\cdot e_{4}=e_{1}\), \\ & & \(e_{3}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{4}\cdot e_{3}=e_{3}\), & \(e_{5}\cdot e_{5}=e_{1}\) \\ \hline \(\mathcal{L}_{47}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{4}=-e_{3}\), & \(e_{4}\cdot e_{5}=e_{1}\), \\ & & \(e_{3}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{4}\cdot e_{3}=e_{3}\), & \(e_{5}\cdot e_{4}=e_{1}\) \\ \hline \(\mathcal{L}_{48}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{1}\cdot e_{4}=(a+1)e_{1}\), & \(e_{2}\cdot e_{4}=e_{2}\), & \(e_{3}\cdot e_{4}=ae_{3}\), \\ & & \(e_{3}\cdot e_{2}=-e_{1}\), & \(e_{4}\cdot e_{1}=-(a+1)e_{1}\), & \(e_{4}\cdot e_{2}=-e_{2}\), & \(e_{4}\cdot e_{3}=-ae_{3}\), \\ & & \(e_{4}\cdot e_{4}=e_{5}\) & & & & \\ \hline \(\mathcal{L}_{49}\) & : & \(e_{2}\cdot e_{3}=e_{1}\), & \(e_{1}\cdot e_{4}=2e_{1}\), \[\begin{array}{llllll}\mathcal{L}_{54}&:&e_{3}\cdot e_{2}=e_{1},&e_{1}\cdot e_{4}=e _{1},&e_{2}\cdot e_{4}=e_{2},&e_{3}\cdot e_{3}=e_{5},\\ &&e_{2}\cdot e_{3}=-e_{1},&e_{4}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{4 }\cdot e_{4}=e_{5}\\ 
\hline\mathcal{L}_{55}^{a}&:&e_{3}\cdot e_{2}=e_{1},&e_{1}\cdot e_{4}=e_{1},&e_ {2}\cdot e_{4}=e_{2},&e_{3}\cdot e_{4}=e_{5},\\ &&e_{2}\cdot e_{3}=-e_{1},&e_{4}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_ {4}\cdot e_{3}=e_{5},\\ &&e_{3}\cdot e_{3}=e_{5},&e_{4}\cdot e_{4}=\alpha e_{5}&\\ \hline\mathcal{L}_{56}&:&e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{4}=e_{2},&e_{ 2}\cdot e_{4}=e_{2},&e_{3}\cdot e_{3}=e_{5}&\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{3}\cdot e_{3}=e_{5}&\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{4}\cdot e_{3}=e_{5}&\\ \hline\mathcal{L}_{57}^{a}&:&e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{4}=e_{2},&e_ {3}\cdot e_{3}=\alpha e_{5},&e_{3}\cdot e_{4}=e_{5},\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{4}\cdot e_{3}=e_{5}&\\ \hline\mathcal{L}_{58}^{a,\beta}&:&e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{4}=e_ {2},&e_{3}\cdot e_{3}=\alpha e_{5},&e_{3}\cdot e_{4}=\beta e_{5},\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{4}\cdot e_{3}=\beta e_{5 },&e_{4}\cdot e_{4}=e_{5}.\end{array}\] Proof.: **Case 1.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.1}\oplus\mathbb{C}:\ \ [e_{2},e_{4}]=e_{1},\ \ [e_{3},e_{4}]=e_{3}.\] Then, we have \(Z(\mathcal{S}_{4.1}\oplus\mathbb{C})=\{e_{1},e_{5}\}\) and by straightforward computations we get that the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{2},e_{2})=\alpha_{1}e_{1}+\beta_{1}e_{5},\quad\omega(e_{2},e_{4})= \alpha_{2}e_{1}+\beta_{2}e_{5},\quad\omega(e_{4},e_{4})=\alpha_{3}e_{1}+\beta_ {3}e_{5},\] where \((\beta_{1},\beta_{2},\beta_{3})\neq(0,0,0)\) or \[\omega(e_{2},e_{2})=\alpha_{1}e_{1},\quad\omega(e_{2},e_{4})=\alpha_{2}e_{1}, \quad\omega(e_{4},e_{4})=\alpha_{3}e_{1},\] \[\omega(e_{2},e_{5})=\alpha_{4}e_{1},\quad\omega(e_{4},e_{5})=\alpha_{5}e_{1}, \quad\omega(e_{5},e_{5})=\alpha_{6}e_{1},\] where \((\alpha_{4},\alpha_{5},\alpha_{6})\neq(0,0,0)\). Thus, we consider two subcases. 
In the first subcase we have the class of symmetric Leibniz algebras \[\begin{array}{lllll}\mathcal{L}_{\omega}&:&e_{2}\cdot e_{4}=(\alpha_{2}+1)e_{1}+\beta_{2}e_{5},&e_{3}\cdot e_{4}=e_{3},&e_{2}\cdot e_{2}=\alpha_{1}e_{1}+\beta_{1}e_{5},\\ &&e_{4}\cdot e_{2}=(\alpha_{2}-1)e_{1}+\beta_{2}e_{5},&e_{4}\cdot e_{3}=-e_{3},&e_{4}\cdot e_{4}=\alpha_{3}e_{1}+\beta_{3}e_{5}.\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{4.1}\oplus\mathbb{C}\) is \[T=\left(\begin{array}{ccccc}a_{1}&a_{2}&0&a_{3}&a_{4}\\ 0&a_{1}&0&a_{5}&0\\ 0&0&a_{6}&a_{7}&0\\ 0&0&0&1&0\\ 0&a_{8}&0&a_{9}&a_{10}\end{array}\right),\] for the first subcase we have the restriction \[\begin{array}{lll}\mu(e_{2},e_{2})&=&\frac{a_{1}(a_{10}\alpha_{1}-a_{4}\beta_{1})}{a_{10}}e_{1}+\frac{a_{1}^{2}\beta_{1}}{a_{10}}e_{5},\\ \mu(e_{2},e_{4})&=&\frac{a_{5}(a_{10}\alpha_{1}-a_{4}\beta_{1})+a_{10}\alpha_{2}-a_{4}\beta_{2}}{a_{10}}e_{1}+\frac{a_{1}(a_{5}\beta_{1}+\beta_{2})}{a_{10}}e_{5},\\ \mu(e_{4},e_{4})&=&\frac{a_{5}^{2}(a_{10}\alpha_{1}-a_{4}\beta_{1})+2a_{5}(a_{10}\alpha_{2}-a_{4}\beta_{2})+a_{10}\alpha_{3}-a_{4}\beta_{3}}{a_{1}a_{10}}e_{1}+\frac{a_{5}^{2}\beta_{1}+2a_{5}\beta_{2}+\beta_{3}}{a_{10}}e_{5}.\end{array}\] Now we consider the following cases: * Let \(\beta_{1}\neq 0\), then choosing \(a_{4}=\frac{a_{10}\alpha_{1}}{\beta_{1}}\), \(a_{5}=-\frac{\beta_{2}}{\beta_{1}}\), \(a_{10}=a_{1}^{2}\beta_{1}\), we get that \[\mu(e_{2},e_{2})=e_{5},\quad\mu(e_{2},e_{4})=\delta_{1}e_{1},\quad\mu(e_{4},e_{4})=\frac{\delta_{2}}{a_{1}}e_{1}+\frac{\delta_{3}}{a_{1}^{2}}e_{5},\] where \(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\) are new parameters which depend on \(\alpha_{1},\alpha_{2},\alpha_{3},\beta_{1},\beta_{2},\beta_{3}\). * If \(\delta_{3}\neq 0\), then choosing \(a_{1}=\sqrt{\delta_{3}}\), we have the algebra \(\mathcal{L}_{26}^{\alpha,\beta}\). * If \(\delta_{3}=0\), \(\delta_{2}\neq 0\), then choosing \(a_{1}=\delta_{2}\), we have the algebra \(\mathcal{L}_{27}^{\alpha}\). * If \(\delta_{2}=0\), \(\delta_{3}=0\), then we have the algebra \(\mathcal{L}_{28}^{\alpha}\). * Let \(\beta_{1}=0\), \(\beta_{2}\neq 0\), then choosing \(a_{4}=-\frac{a_{5}a_{10}\alpha_{1}+a_{10}\alpha_{2}}{\beta_{2}}\), \(a_{5}=-\frac{\beta_{3}}{2\beta_{2}}\), \(a_{10}=a_{1}\beta_{2}\), we get that \[\mu(e_{2},e_{2})=a_{1}\alpha_{1}e_{1},\quad\mu(e_{2},e_{4})=e_{5},\quad\mu(e_{4},e_{4})=\frac{\delta}{a_{1}}e_{1}.\] * If \(\alpha_{1}\neq 0\), then choosing \(a_{1}=\frac{1}{\alpha_{1}}\), we have the algebra \(\mathcal{L}_{29}^{\alpha}\). * If \(\alpha_{1}=0\), \(\delta\neq 0\), then choosing \(a_{1}=\delta\), we have the algebra \(\mathcal{L}_{30}\). * If \(\alpha_{1}=0\), \(\delta=0\), then we obtain the algebra \(\mathcal{L}_{31}\). * Let \(\beta_{1}=0\), \(\beta_{2}=0\), then \(\beta_{3}\neq 0\) and choosing \(a_{4}=\frac{a_{5}^{2}a_{10}\alpha_{1}+2a_{5}a_{10}\alpha_{2}}{\beta_{3}}\), \(a_{10}=\beta_{3}\), we have \[\mu(e_{2},e_{2})=a_{1}\alpha_{1}e_{1},\quad\mu(e_{2},e_{4})=(a_{5}\alpha_{1}+\alpha_{2})e_{1},\quad\mu(e_{4},e_{4})=e_{5}.\] * If \(\alpha_{1}\neq 0\), then choosing \(a_{1}=\frac{1}{\alpha_{1}}\), we have the algebra \(\mathcal{L}_{32}\). * If \(\alpha_{1}=0\), then we obtain the algebra \(\mathcal{L}_{33}\). Now we consider the second subcase, i.e. 
the symmetric bilinear form is \[\omega(e_{2},e_{2})=\alpha_{1}e_{1},\quad\omega(e_{2},e_{4})=\alpha_{2}e_{1},\quad\omega(e_{4},e_{4})=\alpha_{3}e_{1},\] \[\omega(e_{2},e_{5})=\alpha_{4}e_{1},\quad\omega(e_{4},e_{5})=\alpha_{5}e_{1},\quad\omega(e_{5},e_{5})=\alpha_{6}e_{1},\] where \((\alpha_{4},\alpha_{5},\alpha_{6})\neq(0,0,0)\). Then we have the following restriction \[\begin{array}{lll}\mu(e_{2},e_{2})&=&\frac{a_{1}^{2}\alpha_{1}+2a_{1}a_{8}\alpha_{4}+a_{8}^{2}\alpha_{6}}{a_{1}}e_{1},\\ \mu(e_{2},e_{4})&=&\frac{a_{1}a_{5}\alpha_{1}+a_{1}\alpha_{2}+(a_{1}a_{9}+a_{5}a_{8})\alpha_{4}+a_{8}\alpha_{5}+a_{8}a_{9}\alpha_{6}}{a_{1}}e_{1},\\ \mu(e_{2},e_{5})&=&\frac{(a_{1}\alpha_{4}+a_{8}\alpha_{6})a_{10}}{a_{1}}e_{1},\\ \mu(e_{4},e_{4})&=&\frac{a_{5}^{2}\alpha_{1}+2a_{5}\alpha_{2}+\alpha_{3}+2a_{5}a_{9}\alpha_{4}+2a_{9}\alpha_{5}+a_{9}^{2}\alpha_{6}}{a_{1}}e_{1},\\ \mu(e_{4},e_{5})&=&\frac{(a_{5}\alpha_{4}+\alpha_{5}+a_{9}\alpha_{6})a_{10}}{a_{1}}e_{1},\\ \mu(e_{5},e_{5})&=&\frac{a_{10}^{2}\alpha_{6}}{a_{1}}e_{1}.\end{array}\] * Let \(\alpha_{6}\neq 0\), then choosing \(a_{10}=\sqrt{\frac{a_{1}}{\alpha_{6}}}\), \(a_{8}=-\frac{a_{1}\alpha_{4}}{\alpha_{6}}\) and \(a_{9}=-\frac{a_{5}\alpha_{4}+\alpha_{5}}{\alpha_{6}}\), we get that \[\mu(e_{2},e_{2})=a_{1}\delta_{1}e_{1},\quad\mu(e_{2},e_{4})=(a_{5}\delta_{1}+\delta_{2})e_{1},\quad\mu(e_{4},e_{4})=\frac{a_{5}^{2}\delta_{1}+2a_{5}\delta_{2}+\delta_{3}}{a_{1}}e_{1},\] \[\mu(e_{2},e_{5})=0,\quad\mu(e_{4},e_{5})=0,\quad\mu(e_{5},e_{5})=e_{1}.\] * If \(\delta_{1}\neq 0\), then choosing \(a_{1}=\frac{1}{\delta_{1}}\), \(a_{5}=-\frac{\delta_{2}}{\delta_{1}}\), we obtain the algebra \(\mathcal{L}_{34}^{a}\). * If \(\delta_{1}=0\), \(\delta_{2}\neq 0\), then choosing \(a_{5}=-\frac{\delta_{3}}{2\delta_{2}}\), we have the algebra \(\mathcal{L}_{35}^{a}\). * If \(\delta_{1}=0\), \(\delta_{2}=0\), then in case of \(\delta_{3}=0\), we have the algebra \(\mathcal{L}_{35}^{a=0}\) and in case of \(\delta_{3}\neq 0\), choosing \(a_{1}=\delta_{3}\), we have the algebra \(\mathcal{L}_{36}\). * Let \(\alpha_{6}=0\), \(\alpha_{4}\neq 0\), then choosing \(a_{10}=\frac{1}{\alpha_{4}}\), \(a_{5}=-\frac{\alpha_{5}}{\alpha_{4}}\), \(a_{8}=-\frac{a_{1}\alpha_{1}}{2\alpha_{4}}\), \(a_{9}=\frac{\alpha_{1}\alpha_{5}-\alpha_{2}\alpha_{4}}{\alpha_{4}^{2}}\), we get \[\begin{array}{lll}\mu(e_{2},e_{2})=0,&\mu(e_{2},e_{4})=0,&\mu(e_{4},e_{4})=\frac{\delta}{a_{1}}e_{1},\\ \mu(e_{2},e_{5})=e_{1},&\mu(e_{4},e_{5})=0,&\mu(e_{5},e_{5})=0.\end{array}\] Hence, in this case we obtain the algebras \(\mathcal{L}_{37}\) and \(\mathcal{L}_{38}\) depending on whether \(\delta=0\) or not. * Let \(\alpha_{6}=0\), \(\alpha_{4}=0\), then \(\alpha_{5}\neq 0\) and choosing \(a_{8}=-\frac{a_{1}a_{5}\alpha_{1}+a_{1}\alpha_{2}}{\alpha_{5}}\), \(a_{9}=-\frac{a_{5}^{2}\alpha_{1}+2a_{5}\alpha_{2}+\alpha_{3}}{2\alpha_{5}}\), \(a_{10}=\frac{a_{1}}{\alpha_{5}}\), we get that \[\begin{array}{lll}\mu(e_{2},e_{2})=a_{1}\alpha_{1}e_{1},&\mu(e_{2},e_{4})=0,&\mu(e_{4},e_{4})=0,\\ \mu(e_{2},e_{5})=0,&\mu(e_{4},e_{5})=e_{1},&\mu(e_{5},e_{5})=0.\end{array}\] Hence, in this case we obtain the algebras \(\mathcal{L}_{39}\) and \(\mathcal{L}_{40}\) depending on whether \(\alpha_{1}=0\) or not. 
**Case 2.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.2}\oplus\mathbb{C}:\ \ [e_{1},e_{4}]=e_{1},\ \ [e_{2},e_{4}]=e_{1}+e_{2},\ \ [e_{3},e_{4}]=e_{2}+e_{3}.\] Since \(Z(\mathcal{S}_{4.2}\oplus\mathbb{C})=\{e_{5}\}\), then doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{4},e_{4})=\alpha e_{5}.\] Hence, we obtain the algebra \(\mathcal{L}_{41}\). **Case 3.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.3}\oplus\mathbb{C}:\ \ [e_{1},e_{4}]=e_{1},\ \ [e_{2},e_{4}]=ae_{2},\ \ [e_{3},e_{4}]=be_{3}.\] Since \(Z(\mathcal{S}_{4.3}\oplus\mathbb{C})=\{e_{5}\}\), then doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{4},e_{4})=\alpha e_{5}.\] Thus, in this case we obtain the algebra \({\cal L}^{a}_{43}.\) **Case 5.** Let \(L\) be a complex five-dimensional solvable symmetric Leibniz algebra, whose underline Lie algebra is \[{\cal S}_{4.6}\oplus{\mathbb{C}}:\ \ [e_{2},e_{3}]=e_{1},\ \ \ [e_{2},e_{4}]=e_{2},\ \ \ [e_{3},e_{4}]=-e_{3}.\] Then we have \(Z({\cal S}_{4.6}\oplus{\mathbb{C}})=\{e_{1},e_{5}\}\) and doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) has the form \[\omega(e_{4},e_{4})=\alpha e_{1}+\beta e_{5},\ \ \ \beta\neq 0\] or \[\omega(e_{4},e_{4})=\alpha_{1}e_{1},\ \ \ \omega(e_{4},e_{5})=\alpha_{2}e_{1},\ \ \ \omega(e_{5},e_{5})=\alpha_{3}e_{1},\ \ \ (\alpha_{2},\alpha_{3})\neq(0,0).\] Since the matrix form of the group of automorphisms of the algebra \({\cal S}_{4.6}\oplus{\mathbb{C}}\) is \[T=\left(\begin{array}{cccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\ 0&a_{6}&0&a_{7}&0\\ 0&0&a_{8}&a_{9}&0\\ 0&0&0&1&0\\ 0&0&0&a_{10}&a_{11}\end{array}\right)\] in the first case we have the restriction \[\mu(e_{4},e_{4})=\frac{a_{11}\alpha-a_{5}\beta}{a_{1}a_{11}}e_{1}+\frac{\beta }{a_{11}}e_{5}.\] Then choosing \(a_{11}=\beta,\)\(a_{5}=\frac{a_{11}\alpha}{\beta},\) we get that \(\mu(e_{4},e_{4})=e_{5}\) and obtain the algebra \({\cal L}_{44}.\) In the second case we have the restriction \[\mu(e_{4},e_{4})=\frac{\alpha_{1}+2a_{10}\alpha_{2}+a_{10}^{2}\alpha_{3}}{a_{ 1}}e_{1},\ \ \ \mu(e_{4},e_{5})=\frac{a_{11}(\alpha_{2}+a_{10}\alpha_{3})}{a_{1}}e_{1},\ \ \ \mu(e_{5},e_{5})=\frac{a_{11}^{2}\alpha_{3}}{a_{1}}e_{1}.\] * Let \(\alpha_{3}\neq 0,\) then choosing \(a_{11}=\sqrt{\frac{a_{1}}{a_{3}}},\)\(a_{10}=-\frac{a_{2}}{\alpha_{3}},\) we get that \(\mu(e_{4},e_{4})=\frac{\alpha}{a_{1}}e_{1},\)\(\mu(e_{4},e_{5})=0\) and \(\mu(e_{5},e_{5})=e_{1}.\) In this case we have the algebras \({\cal L}_{45}\) and \({\cal L}_{46}\) depending on whether \(\alpha=0\) or not. 
* Let \(\alpha_{3}=0,\) then \(\alpha_{2}\neq 0,\) and \(a_{10}=-\frac{a_{1}}{2\alpha_{2}},\) we get that \(\mu(e_{4},e_{4})=0\) and obtain the the algebra \({\cal L}_{47}.\) **Case 6.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[{\cal S}_{4.8}\oplus{\mathbb{C}}:\ \ [e_{2},e_{3}]=e_{1},\ \ \ [e_{1},e_{4}]=(a+1)e_{1},\ \ \ [e_{2},e_{4}]=e_{2},\ \ \ [e_{3},e_{4}]=ae_{3}.\] Since \(Z({\cal S}_{4.8}\oplus{\mathbb{C}})=\{e_{5}\},\) then doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{4},e_{4})=\alpha e_{5}.\] Thus, in this case we obtain the algebra \({\cal L}^{a}_{48}.\) **Case 7.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.10}\oplus\mathbb{C}:\ \ [e_{2},e_{3}]=e_{1},\ \ \ [e_{1},e_{4}]=2e_{1},\ \ \ [e_{2},e_{4}]=e_{2},\ \ [e_{3},e_{4}]=e_{2}+e_{3},\] Since \(Z(\mathcal{S}_{4.10}\oplus\mathbb{C})=\{e_{5}\}\), then doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{4},e_{4})=\alpha e_{5}.\] Thus, in this case we obtain the algebra \(\mathcal{L}_{49}\). **Case 8.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.11}\oplus\mathbb{C}:\ \ [e_{2},e_{3}]=e_{1},\ \ \ [e_{1},e_{4}]=e_{1},\ \ [e_{2},e_{4}]=e_{2}.\] Since \(Z(\mathcal{S}_{4.11}\oplus\mathbb{C})=\{e_{5}\}\), then doing a straightforward computations, we get the corresponding symmetric bilinear form satisfying the equation (1.1) is \[\omega(e_{3},e_{3})=\alpha e_{5},\ \ \ \omega(e_{3},e_{4})=\beta e_{5},\ \ \ \omega(e_{4},e_{4})=\gamma e_{5},\] where \((\alpha,\beta,\gamma)\neq(0,0,0)\). Thus, we have the following class of symmetric Leibniz algebras \[\begin{array}{cccc}\mathcal{L}_{\omega}&:&e_{3}\cdot e_{2}=e_{1},&e_{1} \cdot e_{4}=e_{1},&e_{2}\cdot e_{4}=e_{2},&e_{3}\cdot e_{3}=\alpha e_{5},&e_{3 }\cdot e_{4}=\beta e_{5},\\ &&e_{2}\cdot e_{3}=-e_{1},&e_{4}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_ {4}\cdot e_{3}=\beta e_{5},&e_{4}\cdot e_{4}=\gamma e_{5}.\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{4.11}\oplus\mathbb{C}\) is \[T=\left(\begin{array}{cccc}a_{3}a_{5}&a_{1}&-a_{4}a_{5}&a_{2}&0\\ 0&a_{3}&0&a_{4}&0\\ 0&0&a_{5}&0&0\\ 0&0&0&1&0\\ 0&0&a_{6}&a_{7}&a_{8}\end{array}\right),\] we have the restriction \[\mu(e_{3},e_{3})=\frac{\alpha a_{5}^{2}}{a_{8}}e_{5},\ \ \ \mu(e_{3},e_{4})=\frac{\beta a_{5}}{a_{8}}e_{5},\ \ \ \mu(e_{4},e_{4})=\frac{\gamma}{a_{8}}e_{5}.\] Now we consider following cases: * Let \(\alpha=\beta=0\) and \(\gamma\neq 0\), then choosing \(a_{8}=\gamma\), we have the algebra \(\mathcal{L}_{50}\). * Let \(\alpha=0\), \(\beta\neq 0\) and \(\gamma=0\), then choosing \(a_{8}=a_{5}\beta\), we obtain the algebra \(\mathcal{L}_{51}\). * Let \(\alpha=0\), \(\beta\neq 0\) and \(\gamma\neq 0\), then choosing \(a_{5}=\frac{\gamma}{\beta}\), \(a_{8}=\gamma\), we obtain the algebra \(\mathcal{L}_{52}\). * Let \(\alpha\neq 0\), \(\beta=0\) and \(\gamma=0\), then choosing \(a_{8}=\alpha a_{5}^{2}\), we obtain the algebra \(\mathcal{L}_{53}\). * Let \(\alpha\neq 0\), \(\beta=0\) and \(\gamma\neq 0\), then choosing \(a_{8}=\gamma\), \(a_{5}=\sqrt{\frac{\gamma}{\alpha}}\), we obtain the algebra \(\mathcal{L}_{54}\). 
* Let \(\alpha\neq 0\), \(\beta\neq 0\), then choosing \(a_{5}=\frac{\beta}{\alpha}\), \(a_{8}=\frac{\beta^{2}}{\alpha}\), we obtain the algebra \(\mathcal{L}_{55}^{\alpha}\). **Case 9.** Let \(L\) be a five-dimensional complex solvable symmetric Leibniz algebra, whose underline Lie algebra is \[\mathcal{S}_{4.12}\oplus\mathbb{C}:\ \ [e_{1},e_{3}]=e_{1},\ \ \ [e_{2},e_{4}]=e_{2}.\] Then \(Z(\mathcal{S}_{4.12}\oplus\mathbb{C})=\{e_{5}\}\) and \[\omega(e_{3},e_{3})=\alpha e_{5},\ \ \ \omega(e_{3},e_{4})=\beta e_{5},\ \ \ \omega(e_{4},e_{4})=\gamma e_{5},\] where \((\alpha,\beta,\gamma)\neq(0,0,0)\). Thus, we have the following class of symmetric Leibniz algebra \[\begin{array}{llll}\mathcal{L}_{\omega}&:&e_{1}\cdot e_{3}=e_{1},&e_{2} \cdot e_{4}=e_{2},&e_{3}\cdot e_{3}=\alpha e_{5},&e_{3}\cdot e_{4}=\beta e_{5 },\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{2}=-e_{2},&e_{4}\cdot e_{3}=\beta e_{5 },&e_{4}\cdot e_{4}=\gamma e_{5}.\end{array}\] Since the matrix form of the group of automorphisms of the algebra \(\mathcal{S}_{4.12}\oplus\mathbb{C}\) is \[T=\left(\begin{array}{cccc}a_{1}&0&a_{2}&0&0\\ 0&a_{3}&0&a_{4}&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&a_{5}&a_{6}&a_{7}\end{array}\right),\] we have the restriction \[\mu(e_{3},e_{3})=\frac{\alpha}{a_{7}}e_{5},\ \ \ \mu(e_{3},e_{4})=\frac{\beta}{a_{7}}e_{5},\ \ \ \mu(e_{4},e_{4})=\frac{\gamma}{a_{7}}e_{5}.\] From this restriction we get that the first non-vanishing parameter of \((\alpha,\beta,\gamma)\) can be scaled to \(1\) and obtain the algebras \(\mathcal{L}_{56}\), \(\mathcal{L}_{57}^{\alpha}\), \(\mathcal{L}_{58}^{\alpha,\beta}\). Classification of five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is \(\mathcal{S}_{3}\oplus\mathbb{C}^{2}\) and \(\mathcal{S}_{2}\oplus\mathbb{C}^{3}\) It should be noted that when the dimension of the center of the underline Lie algebra is bigger than two, then to get a non-isomorphic algebras from the given family using the Proposition 1.4 is more difficult and the calculation increases significantly. Therefore, in this subsection we also use the standard basis changing method and obtain complete classification of five-dimensional solvable symmetric Leibniz algebras, whose underline Lie algebra is \(\mathcal{S}_{3}\oplus\mathbb{C}^{2}\) and \(\mathcal{S}_{2}\oplus\mathbb{C}^{3}\). First, we consider the case when, underline Lie algebra is a direct sum of three-dimensional non-split algebra and two-dimensional abelian ideal, i.e. \(\mathcal{S}_{3}\oplus\mathbb{C}^{2}\). **Theorem 2.3**.: _Let \(L\) be a complex five-dimensional solvable symmetric Leibniz algebra, whose underline Lie algebra is \(\mathcal{S}_{3}\oplus\mathbb{C}^{2}\), then it is isomorphic to one of the following pairwise non-isomorphic algebras_ \[\begin{array}{llll}\mathcal{L}_{59}&:&e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{ 3}=e_{1}+e_{2},&e_{3}\cdot e_{3}=e_{4},\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{3}\cdot e_{2}=-e_{1}-e_{2},&e_{3}\cdot e_{5}=e_{ 5},&e_{5}\cdot e_{3}=e_{5}\\ \hline\mathcal{L}_{60}&:&e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{3}=e_{1}+e_{2},&e _{3}\cdot e_{4}=e_{5},\\ &&e_{3}\cdot e_{1}=-e_{1},&e_{3}\cdot e_{2}=-e_{1}-e_{2},&e_{4}\cdot e_{3}=e_{ 5}\end{array}\] Proof.: Since any three-dimensional solvable Lie algebra is isomorphic to one of the algebras \[\mathcal{S}_{3.1}: [e_{1},e_{3}]=e_{1}, [e_{2},e_{3}]=e_{1}+e_{2},\] \[\mathcal{S}_{3.2}: [e_{1},e_{3}]=e_{1}, [e_{2},e_{3}]=ae_{2}, a\neq 0,\] we consider following cases. 
Let \(L\) be a complex five-dimensional solvable symmetric Leibniz algebra whose underlying Lie algebra is \(\mathcal{L}=\mathcal{S}_{3.1}\oplus\mathbb{C}^{2}.\) Since the center of the Lie algebra \(\mathcal{L}\) is \(\{e_{4},e_{5}\},\) we consider the symmetric bilinear form \(\omega:\mathcal{L}\times\mathcal{L}\rightarrow\{e_{4},e_{5}\}.\) Put \[\omega(e_{i},e_{j})=\alpha_{i,j}e_{4}+\beta_{i,j}e_{5}.\] From the condition \(\omega([e_{i},e_{j}],e_{k})=0,\) we obtain that \(\omega(e_{1},e_{i})=\omega(e_{2},e_{i})=0\) for \(1\leq i\leq 5.\) Thus, we have that the symmetric Leibniz algebra corresponding to the Lie algebra \(\mathcal{S}_{3.1}\oplus\mathbb{C}^{2}\) has the multiplication \[\left\{\begin{array}{ll}e_{1}\cdot e_{3}=e_{1},&e_{2}\cdot e_{3}=e_{1}+e_{2},\\ e_{3}\cdot e_{1}=-e_{1},&e_{3}\cdot e_{2}=-e_{1}-e_{2},\\ e_{i}\cdot e_{j}=\omega(e_{i},e_{j})=\alpha_{i,j}e_{4}+\beta_{i,j}e_{5},&3\leq i\leq j\leq 5.\end{array}\right. \tag{5}\] From this multiplication we have that the linear space \(\mathcal{V}=\{e_{4},e_{5}\}\) is a two-dimensional ideal of the symmetric Leibniz algebra. Since any two-dimensional symmetric Leibniz algebra is abelian or isomorphic to the algebra \(\lambda_{2}\) with multiplication \(x\cdot x=y,\) we can consider the following subcases. * Let \(\mathcal{V}\) be an abelian algebra, then we have \[\omega(e_{4},e_{4})=\omega(e_{4},e_{5})=\omega(e_{5},e_{5})=0.\] Moreover, from the condition \(\omega(\omega(e_{i},e_{j}),e_{k})=0,\) we have that \[\left\{\begin{array}{ll}\alpha_{3,3}\alpha_{3,4}+\beta_{3,3}\alpha_{3,5}=0,&\alpha_{3,3}\beta_{3,4}+\beta_{3,3}\beta_{3,5}=0,\\ \alpha_{3,4}^{2}+\beta_{3,4}\alpha_{3,5}=0,&\alpha_{3,4}\beta_{3,4}+\beta_{3,4}\beta_{3,5}=0,\\ \alpha_{3,4}\alpha_{3,5}+\beta_{3,5}\alpha_{3,5}=0,&\alpha_{3,5}\beta_{3,4}+\beta_{3,5}^{2}=0.\end{array}\right. \tag{6}\] * Let \((\alpha_{3,3},\beta_{3,3})\neq(0,0),\) then making the change \(e_{4}^{\prime}=\alpha_{3,3}e_{4}+\beta_{3,3}e_{5}\) in (5), we can suppose \(\alpha_{3,3}=1,\) \(\beta_{3,3}=0.\) Then from the equalities (6), we have that \(\alpha_{3,4}=\beta_{3,4}=\beta_{3,5}=0.\) If \(\alpha_{3,5}=0,\) then we have the split algebra. Thus, \(\alpha_{3,5}\neq 0,\) and choosing \(e_{5}^{\prime}=\frac{1}{\alpha_{3,5}}e_{5},\) we obtain the algebra \(\mathcal{L}_{59}.\) Let \((\alpha_{3,3},\beta_{3,3})=(0,0).\) If \(\beta_{3,4}=\alpha_{3,5}=0,\) then from (6) it follows that \(\alpha_{3,4}=\beta_{3,5}=0\) and we obtain the split algebra. Thus, we may assume \((\beta_{3,4},\alpha_{3,5})\neq(0,0)\) and, taking into account the symmetry between \(e_{4}\) and \(e_{5},\) we can suppose \(\beta_{3,4}\neq 0.\) Then making the change \(e_{5}^{\prime}=\alpha_{3,4}e_{4}+\beta_{3,4}e_{5},\) we can assume \(\alpha_{3,4}=0,\) \(\beta_{3,4}=1\) and from (6), we have \(\alpha_{3,5}=\beta_{3,5}=0.\) Hence, we obtain the algebra \(\mathcal{L}_{60}.\) * Let \(\mathcal{V}\) be isomorphic to the algebra \(\lambda_{2},\) then we can suppose \[\omega(e_{4},e_{4})=e_{5},\quad\omega(e_{4},e_{5})=\omega(e_{5},e_{5})=0.\] From the condition \(\omega(\omega(e_{i},e_{j}),e_{k})=0,\) we have that \(\alpha_{3,3}=\alpha_{3,4}=\alpha_{3,5}=\beta_{3,5}=0.\) Moreover, making the change \(e_{3}^{\prime}=e_{3}-\beta_{3,4}e_{4},\) we can suppose \(\beta_{3,4}=0\) and in the case \(\beta_{3,3}=0,\) we have the split algebra. 
Thus, \(\beta_{3,3}\neq 0\) and making change \[e_{1}^{\prime}=\beta_{3,3}e_{1},\;e_{2}^{\prime}=\beta_{3,3}e_{2},\;e_{3}^{ \prime}=e_{3},\;e_{4}^{\prime}=\sqrt{\beta_{3,3}}e_{4},\;e_{5}^{\prime}=\beta_ {3,3}e_{5},\] we obtain the algebra \(\mathcal{L}_{61}.\) Similarly, in case when underline Lie algebra is \(\mathcal{S}_{3,2}\oplus\mathbb{C}^{2},\) we obtain the algebras \(\mathcal{L}_{62}^{a},\)\(\mathcal{L}_{63}^{a}\) and \(\mathcal{L}_{64}^{a}.\) Finally, we consider five-dimensional complex solvable symmetric Leibniz algebras, whose underline Lie algebra is \(\mathcal{S}_{2.1}\oplus\mathbb{C}^{3},\) where \(\mathcal{S}_{2.1}\) is a two-dimensional solvable Lie algebra with multiplication \([e_{1},e_{2}]=e_{1}.\) **Theorem 2.4**.: _Let \(L\) be a complex five-dimensional solvable symmetric Leibniz algebra, whose underline Lie algebra is \(\mathcal{S}_{2.1}\oplus\mathbb{C}^{2},\) then it is isomorphic to one of the following pairwise non-isomorphic algebras_ \[\begin{array}{cccccccc}\hline\mathcal{L}_{65}&:&e_{1}\cdot e_{2}=e_{1},&e_{ 3}\cdot e_{2}=e_{5},&e_{2}\cdot e_{1}=-e_{1},&e_{2}\cdot e_{3}=e_{5},&e_{3} \cdot e_{3}=e_{4}\\ \hline\mathcal{L}_{66}&:&e_{1}\cdot e_{2}=e_{1},&e_{3}\cdot e_{2}=e_{5},&e_{2} \cdot e_{2}=e_{4}&\\ &&e_{2}\cdot e_{1}=-e_{1},&e_{2}\cdot e_{3}=e_{5},&e_{3}\cdot e_{3}=e_{4}&\\ \hline\mathcal{L}_{67}&:&e_{1}\cdot e_{2}=e_{1},&e_{2}\cdot e_{5}=e_{4},&e_{3} \cdot e_{3}=e_{4},&e_{2}\cdot e_{1}=-e_{1},&e_{5}\cdot e_{2}=e_{4}\\ \hline\mathcal{L}_{68}&:&e_{1}\cdot e_{2}=e_{1},&e_{3}\cdot e_{4}=e_{5},&e_{2} \cdot e_{2}=e_{5}&e_{2}\cdot e_{1}=-e_{1},&e_{4}\cdot e_{3}=e_{5}.\end{array}\] Proof.: Let \(\mathcal{L}=\mathcal{S}_{2.1}\oplus\mathbb{C}^{2}\) be a Lie algebra with a basis \(\{e_{1},e_{2},e_{3},e_{4},e_{5}\},\) such that \([e_{1},e_{2}]=e_{1}.\) Since the center of the Lie algebra is \(\{e_{3},e_{4},e_{5}\},\) we consider symmetric bilinear form \(\omega:\mathcal{L}\times\mathcal{L}\rightarrow\{e_{3},e_{4},e_{5}\}.\) Put \[\omega(e_{i},e_{j})=\alpha_{i,j}e_{3}+\beta_{i,j}e_{4}+\gamma_{i,j}e_{5}.\] From the condition \(\omega([e_{i},e_{j}],e_{k})=0,\) we obtain that \(\omega(e_{1},e_{i})=0\) for \(1\leq i\leq 5.\) Thus, we have that the symmetric Leibniz algebra has a multiplication \[\left\{\begin{array}{l}e_{1}\cdot e_{2}=e_{1},\quad e_{2}\cdot e_{1}=-e_{1}, \\ e_{i}\cdot e_{j}=\omega(e_{i},e_{j})=\alpha_{i,j}e_{3}+\beta_{i,j}e_{4}+\gamma_ {i,j}e_{5},\quad 2\leq i\leq j\leq 5.\end{array}\right.\] From this multiplication we have that \(\mathcal{V}=\{e_{3},e_{4},e_{5}\}\) is a three-dimensional commutative ideal of symmetric Leibniz algebra. It is known that three-dimensional commutative symmetric Leibniz algebra is either abelian or isomorphic to \(\lambda_{2}\) or \(\mathcal{N}_{1}.\) Thus, we can consider following cases. 
* Let \(\mathcal{V}\) is an abelian algebra, then we have \[\omega(e_{3},e_{3})=\omega(e_{3},e_{4})=\omega(e_{3},e_{5})=\omega(e_{4},e_{4})= \omega(e_{4},e_{5})=\omega(e_{5},e_{5})=0.\] Now from the condition \(\omega(\omega(e_{i},e_{j}),e_{k})=0\), we have that \[\left\{\begin{array}{ll}\alpha_{2,2}\alpha_{2,3}+\beta_{2,2}\alpha_{2,4}+ \gamma_{2,2}\alpha_{2,5}=0,&\alpha_{2,2}\beta_{2,3}+\beta_{2,2}\beta_{2,4}+ \gamma_{2,2}\beta_{2,5}=0,\\ \alpha_{2,2}\gamma_{2,3}+\beta_{2,2}\gamma_{2,4}+\gamma_{2,2}\gamma_{2,5}=0,& \alpha_{2,3}^{2}+\beta_{2,3}\alpha_{2,4}+\gamma_{2,3}\alpha_{2,5}=0,\\ \alpha_{2,3}\beta_{2,3}+\beta_{2,3}\beta_{2,4}+\gamma_{2,3}\beta_{2,5}=0,& \alpha_{2,3}\gamma_{2,3}+\beta_{2,3}\gamma_{2,4}+\gamma_{2,3}\gamma_{2,5}=0,\\ \alpha_{2,4}\alpha_{2,3}+\beta_{2,4}\alpha_{2,4}+\gamma_{2,4}\alpha_{2,5}=0,& \alpha_{2,4}\beta_{2,3}+\beta_{2,4}^{2}+\gamma_{2,4}\beta_{2,5}=0,\\ \alpha_{2,4}\gamma_{2,3}+\beta_{2,4}\gamma_{2,4}+\gamma_{2,4}\gamma_{2,5}=0,& \alpha_{2,5}\alpha_{2,3}+\beta_{2,5}\alpha_{2,4}+\gamma_{2,5}\alpha_{2,5}=0,\\ \alpha_{2,5}\beta_{2,3}+\beta_{2,5}\beta_{2,4}+\gamma_{2,5}\beta_{2,5}=0,& \alpha_{2,5}\gamma_{2,3}+\beta_{2,5}\gamma_{2,4}+\gamma_{2,5}^{2}=0.\end{array}\right.\] (7) * Let \((\alpha_{2,2},\beta_{2,2},\gamma_{2,2})\neq(0,0,0)\), then making the change \(e_{3}^{\prime}=\alpha_{2,2}e_{3}+\beta_{2,2}e_{4}+\gamma_{2,2}e_{5}\), we can suppose \(\alpha_{2,2}=1\), \(\beta_{2,2}=0\), \(\gamma_{2,2}=0\) and from (7) we have that \[\left\{\begin{array}{ll}\alpha_{2,3}=0,&\beta_{2,3}=0,&\gamma_{2,3}=0,\\ \beta_{2,4}\alpha_{2,4}+\gamma_{2,4}\alpha_{2,5}=0,&\beta_{2,4}^{2}+\gamma_{2,4}\beta_{2,5}=0,&\beta_{2,4}\gamma_{2,4}+\gamma_{2,4}\gamma_{2,5}=0,\\ \beta_{2,5}\alpha_{2,4}+\gamma_{2,5}\alpha_{2,5}=0,&\beta_{2,5}\beta_{2,4}+ \gamma_{2,5}\beta_{2,5}=0,&\beta_{2,5}\gamma_{2,4}+\gamma_{2,5}^{2}=0.\end{array}\right.\] Moreover, taking the suitable basis of the vector space \(\mathrm{span}\{e_{4},e_{5}\}\), we can suppose \(\beta_{2,5}=0\), which implies \(\beta_{2,4}=\gamma_{2,5}=0\) and \(\alpha_{2,5}\gamma_{2,4}=0\). In case of \(\alpha_{2,5}=0\), we obtain the split algebra and in case of \(\alpha_{2,5}\neq 0\), which implies \(\gamma_{2,4}=0\), making the change \(e_{4}^{\prime}=e_{4}-\frac{\alpha_{2,4}}{\alpha_{2,5}}e_{5}\), we again obtain the split algebra. * Let \((\alpha_{2,2},\beta_{2,2},\gamma_{2,2})=(0,0,0)\), then we have that \[\left\{\begin{array}{ll}\alpha_{2,3}^{2}+\beta_{2,3}\alpha_{2,4}+\gamma_{2,3 }\alpha_{2,5}=0,&\alpha_{2,3}\beta_{2,3}+\beta_{2,3}\beta_{2,4}+\gamma_{2,3} \beta_{2,5}=0,\\ \alpha_{2,3}\gamma_{2,3}+\beta_{2,3}\gamma_{2,4}+\gamma_{2,3}\gamma_{2,5}=0,& \alpha_{2,4}\alpha_{2,3}+\beta_{2,4}\alpha_{2,4}+\gamma_{2,4}\alpha_{2,5}=0,\\ \alpha_{2,4}\beta_{2,3}+\beta_{2,4}^{2}+\gamma_{2,4}\beta_{2,5}=0,&\alpha_{2,4 }\gamma_{2,3}+\beta_{2,4}\gamma_{2,4}+\gamma_{2,4}\gamma_{2,5}=0,\\ \alpha_{2,5}\alpha_{2,3}+\beta_{2,5}\alpha_{2,4}+\gamma_{2,5}\alpha_{2,5}=0,& \alpha_{2,5}\beta_{2,3}+\beta_{2,5}\beta_{2,4}+\gamma_{2,5}\beta_{2,5}=0,\\ \alpha_{2,5}\gamma_{2,3}+\beta_{2,5}\gamma_{2,4}+\gamma_{2,5}^{2}=0.\end{array}\right.\] (8) Here we can consider the operator of \(ad_{e_{2}}\) as a linear map of the vector space \(\mathrm{span}\{e_{3},e_{4},e_{5}\}\). Choosing the suitable basis of the vector space \(\mathrm{span}\{e_{3},e_{4},e_{5}\}\), we can lead to Jordan form of the matrix of the operator \(ad_{e_{2}}\). It means that we can always assume \(\alpha_{2,4}=\alpha_{2,5}=\beta_{2,5}=\gamma_{2,3}=0\). 
Then from the equality (8), we obtain \(\alpha_{2,3}=\beta_{2,4}=\gamma_{2,5}=0\) and \(\beta_{2,3}\gamma_{2,4}=0\). So in this case also we have the split algebra. * Let the three-dimensional algebra \(\mathcal{V}\) be isomorphic to \(\lambda_{2}\), then we have \[\omega(e_{3},e_{3})=e_{4},\quad\omega(e_{3},e_{4})=\omega(e_{3},e_{5})=\omega(e_{4},e_{4})=\omega(e_{4},e_{5})=\omega(e_{5},e_{5})=0.\] Then from the condition \(\omega(\omega(e_{i},e_{j}),e_{k})=0\), we have that \[\omega(e_{2},e_{2})=\beta_{2,2}e_{4}+\gamma_{2,2}e_{5},\quad\omega(e_{2},e_{3})=\beta_{2,3}e_{4}+\gamma_{2,3}e_{5},\quad\omega(e_{2},e_{5})=\beta_{2,5}e_{4},\] with the restriction \(\gamma_{2,2}\beta_{2,5}=0\), \(\gamma_{2,3}\beta_{2,5}=0\). Let \(\beta_{2,5}=0\), then we have the multiplication \[\left\{\begin{array}{lll}e_{1}\cdot e_{2}=e_{1},&e_{3}\cdot e_{2}=\beta_{2,3}e_{4}+\gamma_{2,3}e_{5},&e_{2}\cdot e_{2}=\beta_{2,2}e_{4}+\gamma_{2,2}e_{5},\\ e_{2}\cdot e_{1}=-e_{1},&e_{2}\cdot e_{3}=\beta_{2,3}e_{4}+\gamma_{2,3}e_{5},&e_{3}\cdot e_{3}=e_{4}.\end{array}\right.\] If \(\gamma_{2,3}\neq 0\), \(4\beta_{2,2}\gamma_{2,3}^{2}-4\beta_{2,3}\gamma_{2,2}\gamma_{2,3}+\gamma_{2,2}^{2}\neq 0\), then making the change \[e_{1}^{\prime}=e_{1},\quad e_{2}^{\prime}=e_{2}+Ae_{3},\quad e_{3}^{\prime}=Be_{3},\quad e_{4}^{\prime}=B^{2}e_{4},\quad e_{5}^{\prime}=B(A+\beta_{2,3})e_{4}+B\gamma_{2,3}e_{5},\] where \(A=-\frac{\gamma_{2,2}}{2\gamma_{2,3}}\), \(B=\frac{4\beta_{2,2}\gamma_{2,2}^{2}-4\beta_{2,3}\gamma_{2,2}\gamma_{2,3}+\gamma_{2,2}^{2}}{2\gamma_{2,3}}\), we obtain the algebra \(\mathcal{L}_{65}\). If \(\gamma_{2,3}\neq 0\), \(4\beta_{2,2}\gamma_{2,3}^{2}-4\beta_{2,3}\gamma_{2,2}\gamma_{2,3}+\gamma_{2,2}^{2}=0\), then making the change \[e_{1}^{\prime}=e_{1},\quad e_{2}^{\prime}=e_{2}+Ae_{3},\quad e_{3}^{\prime}=e_{3},\quad e_{4}^{\prime}=e_{4},\quad e_{5}^{\prime}=(A+\beta_{2,3})e_{4}+\gamma_{2,3}e_{5},\] where \(A=-\frac{\gamma_{2,2}}{2\gamma_{2,3}}\), we have the algebra \(\mathcal{L}_{66}\). If \(\gamma_{2,3}=0\), then \(\gamma_{2,2}\neq 0\), and after the change \[e_{1}^{\prime}=e_{1},\quad e_{2}^{\prime}=e_{2}+Ae_{3},\quad e_{3}^{\prime}=e_{3},\quad e_{4}^{\prime}=e_{4},\quad e_{5}^{\prime}=(\beta_{2,2}+2A\beta_{2,3}+A^{2})e_{4}+\gamma_{2,2}e_{5},\] where \(A=-\beta_{2,3}\), we find out that this algebra is split. * Let \(\beta_{2,5}\neq 0\), then \(\gamma_{2,2}=\gamma_{2,3}=0\) and making the change \[e_{1}^{\prime}=e_{1},\quad e_{2}^{\prime}=e_{2}-\frac{\beta_{2,2}}{2\beta_{2,5}}e_{5},\quad e_{3}^{\prime}=e_{3}-\frac{\beta_{2,3}}{\beta_{2,5}}e_{5},\quad e_{4}^{\prime}=e_{4},\quad e_{5}^{\prime}=\frac{1}{\beta_{2,5}}e_{5},\] we have the algebra \(\mathcal{L}_{67}\). * Let the three-dimensional algebra \(\mathcal{V}\) be isomorphic to \(\mathcal{N}_{1}\), then we have \[\omega(e_{3},e_{4})=e_{5},\quad\omega(e_{3},e_{3})=\omega(e_{3},e_{5})=\omega(e_{4},e_{4})=\omega(e_{4},e_{5})=\omega(e_{5},e_{5})=0.\] Now from the condition \(\omega(\omega(e_{i},e_{j}),e_{k})=0\), we have that \[\omega(e_{2},e_{2})=\gamma_{2,2}e_{5},\quad\omega(e_{2},e_{3})=\gamma_{2,3}e_{5},\quad\omega(e_{2},e_{4})=\gamma_{2,4}e_{5}.\] Making the basis change \[e_{1}^{\prime}=e_{1},\quad e_{2}^{\prime}=e_{2}-\gamma_{2,4}e_{3}-\gamma_{2,3}e_{4},\quad e_{3}^{\prime}=e_{3},\quad e_{4}^{\prime}=e_{4},\quad e_{5}^{\prime}=e_{5},\] we may suppose \(\gamma_{2,3}=\gamma_{2,4}=0\). Then \(\gamma_{2,2}\neq 0\) and we obtain the algebra \(\mathcal{L}_{68}\). 
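As an illustrative aside, and not part of the original argument, multiplication tables such as those obtained above can be cross-checked computationally. The following minimal sketch verifies \(\mathcal{L}_{68}\), assuming the standard left Leibniz identity \(x\cdot(y\cdot z)=(x\cdot y)\cdot z+y\cdot(x\cdot z)\) and right Leibniz identity \((x\cdot y)\cdot z=(x\cdot z)\cdot y+x\cdot(y\cdot z)\):

```python
# Sanity check (illustrative only): verify that L_68, with e1*e2 = e1, e2*e1 = -e1,
# e2*e2 = e5, e3*e4 = e5, e4*e3 = e5, satisfies both Leibniz identities.
import numpy as np

DIM = 5
c = np.zeros((DIM, DIM, DIM))   # c[i, j, k]: coefficient of e_{k+1} in e_{i+1} * e_{j+1}
c[0, 1, 0] = 1.0     # e1 * e2 = e1
c[1, 0, 0] = -1.0    # e2 * e1 = -e1
c[1, 1, 4] = 1.0     # e2 * e2 = e5
c[2, 3, 4] = 1.0     # e3 * e4 = e5
c[3, 2, 4] = 1.0     # e4 * e3 = e5

def mul(x, y):
    """Product of coordinate vectors x and y with respect to the basis e1,...,e5."""
    return np.einsum('i,j,ijk->k', x, y, c)

basis = np.eye(DIM)
ok = True
for x in basis:
    for y in basis:
        for z in basis:
            left = np.allclose(mul(x, mul(y, z)), mul(mul(x, y), z) + mul(y, mul(x, z)))
            right = np.allclose(mul(mul(x, y), z), mul(mul(x, z), y) + mul(x, mul(y, z)))
            ok = ok and left and right

print(ok)  # expected output: True
```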
## Funding This work is supported by grant "Automorphisms of operator algebras, classifications of infinite-dimensional non associative algebras and superalgebras", No. FZ-202009269, Ministry of Higher Education, Science and Innovations of the Republic of Uzbekistan, 2021-2025. ## Data availability No data was used for the research described in the article.
2306.15176
Evaluation and Optimization of Rendering Techniques for Autonomous Driving Simulation
In order to meet the demand for higher scene rendering quality from some autonomous driving teams (such as those focused on CV), we have decided to use an offline simulation industrial rendering framework instead of real-time rendering in our autonomous driving simulator. Our plan is to generate lower-quality scenes using a game engine, extract them, and then use an IQA algorithm to validate the improvement in scene quality achieved through offline rendering. The improved scenes will then be used for training.
Chengyi Wang, Chunji Xu, Peilun Wu
2023-06-27T03:23:07Z
http://arxiv.org/abs/2306.15176v1
# Evaluation and Optimization of Rendering Techniques for Autonomous Driving Simulation ###### Abstract In order to meet the demand for higher scene rendering quality from some autonomous driving teams (such as those focused on CV), we have decided to use an offline simulation industrial rendering framework instead of real-time rendering in our autonomous driving simulator. Our plan is to generate lower-quality scenes using a game engine, extract them, and then use an IQA algorithm to validate the improvement in scene quality achieved through offline rendering. The improved scenes will then be used for training. ## I Introduction Computer graphics rendering technology is essential for autonomous driving simulation. Specifically, when engineers attempt to run a simulation, they must first create a 3D model that specifies how geometries should be placed in the virtual world [1, 2]. The image rendering technology used by modern autonomous driving simulators is relatively mature at the algorithmic level. However, due to the limited computation time, the results of real-time rendering are often unsatisfactory. In order to improve the quality of the rendered models used in our autonomous driving laboratory, it is necessary and feasible to use offline rendering instead of real-time rendering to break through this time limit. We expect offline rendering to perform better than real-time rendering. However, most current rendering tools are real-time, so to use offline rendering we need to carry out the rendering ourselves, and afterwards we also need to evaluate the rendering results to see whether offline rendering is really better than real-time rendering. We will first render a scene in real time and then render a similar scene offline, and we will evaluate the rendering results of the two scenes in two ways. The first is to use object detection algorithms to detect rendered targets and see whether offline rendering improves the detection results. The other is to use IQA algorithms to evaluate the rendered images and see whether the quality of offline-rendered images is higher than that of real-time-rendered images. The results come in two parts: under detection algorithms, offline rendering is slightly better than real-time rendering, while under image quality assessment the advantage is much greater. ## II Background Research ### _Building Real-time Simulation Environment_ Real-time refers to the ability of a system to process and respond to data in a timely manner, typically within a few milliseconds or less. In the context of a real-time simulator for autonomous driving, this means that the simulator is able to simulate the driving environment and the behavior of the autonomous vehicle in real time, responding to inputs and changes in the environment as quickly as possible [3]. To highlight the advantage of our offline rendering method in producing high-quality scenes, we will first build a real-time autonomous driving simulation environment, extract appropriate scenes from it, render them offline, and then compare the quality of the renders. Engineering a simulation environment requires the same features and toolsets used in creating other types of rich interactive content: lighting and physics, particle and weather systems, animation, machine learning [4, 5]. A number of real-time simulators for autonomous driving have been created, including CARLA, AirSim, Udacity Simulator, Gazebo, LGSVL Simulator, CarSim and so on. 
Among them, CARLA and AirSim are built on the well-known game engine Unreal Engine. ### _Precomputation-Based Rendering Methods_ In 2001, Basri and Jacobs showed that reflection from a curved surface, lit by an environment map, could be seen as a spherical convolution of the incident illumination and the reflective properties or BRDF of the surface [6, 7]. Sloan et al. introduced the term precomputed radiance transfer, which led to greatly increased interest in precomputation-based relighting methods. They use full environment maps for lighting instead of discrete light sources. The first innovation was to build on the spherical harmonic methods, but precompute the shadowing and interreflection components, encapsulated in a transfer function on the object surface [8, 7]. ### _IQA for Rendering_ A method known as CVRKD-IQA, proposed by Yin et al. [9], is especially suitable for our underground parking lot rendering research. Knowledge distillation is used for feature extraction from pictures and for training the NAR-student agent. Verified on various data sets and against other algorithms, the results of this IQA algorithm have sufficient credibility. Bosse et al. [10] implemented an FR-IQA and NR-IQA method based on deep neural networks, named WaDIQaM. The pre-study and training of different feature fusion strategies make the DNN system effective in both FR and NR IQA scenarios. Hossein and Peyman [11] introduced a neural image assessment algorithm trained on both aesthetic and pixel-level quality datasets. This CNN-based NR-IQA method is called NIMA. NIMA is a pre-trained IQA algorithm and is popular for effectively predicting the distribution of quality ratings, rather than just the mean scores. ### _Evaluate image quality_ After using the previously mentioned methods for rendering, we will use other methods and metrics to verify the image quality. #### PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error) Peak Signal-to-Noise Ratio (PSNR) is a commonly used indicator for evaluating image quality. It is determined by calculating the ratio between the maximum possible pixel value in the image and the mean square error caused by noise in the image. Usually, the higher the PSNR, the better the image quality. However, it should be noted that PSNR does not always accurately reflect the human eye's perception of image quality, as the human eye has different sensitivities to different types and intensities of distortion [12]. Therefore, when evaluating image quality, other factors need to be considered, such as the structural similarity index (SSIM) and subjective visual quality assessment [13]. The mean square error measures the average squared difference between corresponding pixels of two images. It is calculated by summing the squared differences over all pixels and dividing by the number of pixels, with smaller values indicating that the evaluated image is closer to the reference [13]. The MSE formula is as follows: \[MSE=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}|R(i,j)-F(i,j)|^{2} \tag{1}\] where \(R\) is the reference image, \(F\) is the image to be evaluated, and \(M\times N\) is the image size. The PSNR formula is as follows: \[PSNR=20\log_{10}MAX_{p}-10\log_{10}MSE \tag{2}\] where \(MAX_{p}\) is the maximum possible pixel value in an image (usually 255). 
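For concreteness, the two quantities above can be computed directly with NumPy; the following is a minimal sketch (not part of the original pipeline), where the image file names are placeholders and both images are assumed to be 8-bit and of identical size:

```python
# Minimal PSNR/MSE sketch following Eqs. (1)-(2); file names are hypothetical placeholders.
import numpy as np
from PIL import Image

def mse(ref: np.ndarray, img: np.ndarray) -> float:
    """Mean square error between reference image R and evaluated image F, Eq. (1)."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, img: np.ndarray, max_p: float = 255.0) -> float:
    """PSNR = 20*log10(MAX_p) - 10*log10(MSE), Eq. (2); infinite for identical images."""
    err = mse(ref, img)
    return float("inf") if err == 0 else 20.0 * np.log10(max_p) - 10.0 * np.log10(err)

if __name__ == "__main__":
    # One real-time render and the offline re-render of the same viewpoint (placeholder paths).
    realtime = np.asarray(Image.open("realtime_render.png").convert("L"))
    offline = np.asarray(Image.open("offline_render.png").convert("L"))
    print(f"MSE:  {mse(realtime, offline):.3f}")
    print(f"PSNR: {psnr(realtime, offline):.3f} dB")
```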
This algorithm has the advantage of convenient computation, but its evaluation results can differ significantly from the intuitive perception of the human eye, as the human visual system is more sensitive to brightness information than to chromaticity information. #### SSIM (Structural Similarity Index) SSIM is an abbreviation for Structural Similarity Index, which is an objective evaluation indicator used to measure image quality. It compares the structural similarity and the differences in brightness, contrast, structure, and other aspects between two images to evaluate their similarity. The higher the SSIM index, the more similar the two images are. The SSIM index is widely used in the field of image processing, for tasks such as image compression, denoising, and enhancement [14]. The SSIM formula is as follows: \[SSIM(x,y)=l(x,y)c(x,y)s(x,y) \tag{3}\] where \(x\) and \(y\) represent two images, while \(l(x,y)\), \(c(x,y)\), and \(s(x,y)\) represent brightness similarity, contrast similarity, and structural similarity, respectively. Specifically, their calculation formulas are as follows: \[l(x,y)=\frac{2\mu_{x}\mu_{y}+C_{1}}{\mu_{x}^{2}+\mu_{y}^{2}+C_{1}} \tag{4}\] \[c(x,y)=\frac{2\sigma_{x}\sigma_{y}+C_{2}}{\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}} \tag{5}\] \[s(x,y)=\frac{\sigma_{xy}+C_{3}}{\sigma_{x}\sigma_{y}+C_{3}} \tag{6}\] where \(\mu_{x}\) and \(\mu_{y}\) are the mean values of images \(x\) and \(y\), \(\sigma_{x}\) and \(\sigma_{y}\) are their standard deviations, and \(\sigma_{xy}\) is their covariance. \(C_{1}\), \(C_{2}\), and \(C_{3}\) are constants used to avoid instability when a denominator is 0 or close to 0. Although the results of the SSIM algorithm are more in line with the human eye's perception, it requires that the two images be of the same size and be converted to grayscale during evaluation. In addition, due to the highly nonlinear nature of the human visual system, there are still differences between the evaluation results and the actual visual perception. #### NIQE (Natural Image Quality Evaluator) The Natural Image Quality Evaluator assesses the quality of an image by extracting quality-aware features from each image block and fitting them with a multivariate Gaussian model. The distance between the multivariate Gaussian model of natural images and that of the test image is then used as the quality score of the test image. Several methods within this framework mainly focus on natural scene statistics (NSS) feature extraction. In NIQE, Mittal et al. used the fitting parameters of mean-subtracted contrast-normalized (MSCN) coefficients as quality-aware features. These features are also fundamental to subsequent work, as they are very effective [15]. The NIQE formula is as follows: \[Q(T)=\frac{1}{N}\sum\left(\frac{||x-\mu||^{2}}{\sigma^{2}}\right) \tag{7}\] where \(N\) is the total number of blocks in the test image, and \(\mu\) and \(\sigma^{2}\) are the mean and variance of the multivariate Gaussian model. This formula calculates the average distance between the test image blocks and the multivariate Gaussian model, which is used to evaluate the quality of the test image. The NIQE algorithm can perform well in tasks such as super-resolution reconstruction where other metrics struggle, but it also requires more complex calculations and a longer computation time. 
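As an illustration, Eqs. (3)-(6) can be implemented directly in their global (single-window) form; this is a simplified sketch rather than the sliding-window SSIM used by most libraries, and the constants \(C_{1}\), \(C_{2}\), \(C_{3}\) follow the common choice \(C_{1}=(0.01L)^{2}\), \(C_{2}=(0.03L)^{2}\), \(C_{3}=C_{2}/2\), which is an assumption since the constants are not fixed above:

```python
# Simplified global-statistics SSIM following Eqs. (3)-(6); not the windowed variant.
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # assumed stabilizing constants
    c2 = (0.03 * data_range) ** 2
    c3 = c2 / 2.0
    mu_x, mu_y = x.mean(), y.mean()
    sig_x, sig_y = x.std(), y.std()
    sig_xy = ((x - mu_x) * (y - mu_y)).mean()
    lum = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)      # Eq. (4): brightness
    con = (2 * sig_x * sig_y + c2) / (sig_x ** 2 + sig_y ** 2 + c2)  # Eq. (5): contrast
    stru = (sig_xy + c3) / (sig_x * sig_y + c3)                      # Eq. (6): structure
    return float(lum * con * stru)                                   # Eq. (3)
```

For the reported experiments an off-the-shelf windowed SSIM (and NIQE) implementation can of course be used instead; the sketch only serves to make the formulas above concrete.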
### _Object Detection Algorithms_ Object detection algorithms are computer vision techniques that enable machines to identify and locate objects within an image or video [16, 17]. These algorithms use various techniques, including machine learning, deep learning, and computer vision, to detect objects within an image or video and draw bounding boxes around them [18]. There are generally two main types of object detection algorithms: two-stage detectors and one-stage detectors [19]. Two-stage detectors follow a two-step process. The first stage generates a set of potential object proposals, which are regions in the image that may contain objects. These proposals are then refined and classified in the second stage. One popular example of a two-stage detector is the Faster R-CNN (Region-based Convolutional Neural Network) algorithm, which uses a region proposal network (RPN) to generate proposals and then classifies and refines them using subsequent stages. One-stage detectors perform object detection in a single pass without the explicit proposal generation step. These algorithms directly predict the bounding boxes and class labels for objects in an image [20, 21]. Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) are examples of popular one-stage detectors. SSD divides the image into a grid and predicts multiple bounding boxes and class probabilities at each grid location, while YOLO divides the image into a grid and predicts bounding boxes and class probabilities directly. Compared to two-stage detectors, one-stage detectors(especially YOLO) are more efficient and provide better performance in detecting small objects. These advantages make them more suitable for application in the field of autonomous driving, and that is the reason why we choose YOLOv3, YOLOv8, and Detectron2 to evaluate our results. ## III Research methods ### _Build scene from ground_ For our experimental purposes, we need to use a rendering model based entirely on offline industrial renders. In order to ensure the same work as the recognition algorithm, we select components with the same functions as Carla Town to build the scene. For reasons of realism, aesthetics or consistency with the scene, some parameters can be adjusted before using these models. We combine the models to get the required scene, and then select a suitable perspective in the scene to get a photo for the algorithm to identify in both online and offline engines. ### _Rendering engine and optimize_ To optimize the rendering result, we use an offline Cycles engine to render images of possible autonomous driving scenes. Cycles is a ray tracing engine in which many post-processing functions are built in. The main feature of ray tracing engines is that the ray tracing engine determines the position of the object first, while the rasterization engines determine the position of the sampling point first. Above that, Cycles implemented global illumination calculation, which resulted in more realistic rendered images [22]. Cycles can run on CPUs, while GPU is used to accelerate the rendering process. ### _Evaluation_ We will use several methods of image quality evaluation mentioned earlier to compare the quality of images generated by offline rendering with those generated by online rendering. We will analyze in which aspects the quality of offline-rendered images is better than that of online-rendered images, and also note in which aspects the quality of images has not significantly improved. 
We hope to improve our offline rendering through this analysis process. To quantitatively and visually evaluate the effectiveness of our optimization method [23], we design the following experiment: we perform real-time rendering and offline rendering on the same set of 3D models, and then capture scene images with the camera positioned at the same location under both rendering modes. We apply the object detection algorithms to detect driving-relevant objects (such as pedestrians and vehicles) in both data sets, and finally compare the recognition performance (confidence scores) of these objects in the two sets of images. Specifically, we set two locations for the main camera to provide a close-up perspective and a long-distance perspective of the objects. The confidence scores on the objects using YOLOv8n and Detectron2 are shown in Table I and Table II, respectively. While performing better under the object detector does not necessarily guarantee the effectiveness of our optimization method at all times, it remains an important metric indicating that our optimization results exhibit superior computer vision capabilities for autonomous driving compared to the original data. Fig. 1: Images of similar scenes in both online (above) and offline (below) engines. \begin{table} \begin{tabular}{c|c|c} \hline \hline YOLO & Close-up perspective & Long-distance perspective \\ \hline Real-time rendering & Person: 71\% & Person: \(<\) 25\% \\ & Car: 82\% & Car: \(<\) 25\% \\ \hline Offline rendering & Person: 88.5\% & Person: 42\% \\ & Car: 72\% & Car: 34\% \\ \hline \hline \end{tabular} \end{table} TABLE I: YOLO results. ## IV Results ### _IQA algorithm_ The calculation results of the peak signal-to-noise ratio are shown in Table III. Comparing real-time and offline rendering, the PSNR value of the near group is 14.015 and that of the far group is 15.425, which indicates substantial pixel-level differences between the images produced by the two rendering methods. For structural similarity, the SSIM value of the near group is 0.5126 and that of the far group is 0.5375. This result indicates that the images of the two rendering methods can be considered similar in terms of structural similarity, including brightness, contrast, and structure. The calculation results of the Natural Image Quality Evaluator are shown in Table V. For the near group, the NIQE value of the real-time rendering is 38.463, while that of the offline rendering is 25.129; for the far group, the NIQE value of the real-time rendering is 40.344, while that of the offline rendering is 30.467. Since lower NIQE values indicate better perceptual quality, this result shows that, under the NIQE evaluation, the quality of the offline-rendered images is higher than that of the real-time-rendered images. ## V Conclusion Our research system has been successfully constructed, and the proposed methods should be able to demonstrate the benefit of offline rendering for optimizing the training of autonomous driving models. However, we still lack a more rigorous theoretical basis, more general experiments, and a definitive conclusion; the next task of our research group is therefore to expand further in these three directions. Judging from the current results, we cannot yet guarantee that our expectations are correct, but even if we ultimately falsify them, the outcome can still guide improvements in the training of autonomous driving models.
In the future, we will extend this work with AI technologies such as knowledge graphs [24, 25] and optimization algorithms [5, 26].
2304.04602
Learning a Universal Human Prior for Dexterous Manipulation from Human Preference
Generating human-like behavior on robots is a great challenge especially in dexterous manipulation tasks with robotic hands. Scripting policies from scratch is intractable due to the high-dimensional control space, and training policies with reinforcement learning (RL) and manual reward engineering can also be hard and lead to unnatural motions. Leveraging the recent progress on RL from Human Feedback, we propose a framework that learns a universal human prior using direct human preference feedback over videos, for efficiently tuning the RL policies on 20 dual-hand robot manipulation tasks in simulation, without a single human demonstration. A task-agnostic reward model is trained through iteratively generating diverse policies and collecting human preferences over the trajectories; it is then applied for regularizing the behavior of policies in the fine-tuning stage. Our method empirically demonstrates more human-like behaviors on robot hands in diverse tasks, including even unseen tasks, indicating its generalization capability.
Zihan Ding, Yuanpei Chen, Allen Z. Ren, Shixiang Shane Gu, Qianxu Wang, Hao Dong, Chi Jin
2023-04-10T14:17:33Z
http://arxiv.org/abs/2304.04602v2
# Learning a Universal Human Prior for Dexterous Manipulation ###### Abstract Generating human-like behavior on robots is a great challenge especially in dexterous manipulation tasks with robotic hands. Even in simulation with no sample constraints, scripting controllers is intractable due to high degrees of freedom, and manual reward engineering can also be hard and lead to non-realistic motions. Leveraging the recent progress on Reinforcement Learning from Human Feedback (RLHF), we propose a framework to learn a universal human prior using direct human preference feedback over videos, for efficiently tuning the RL policy on 20 dual-hand robot manipulation tasks in simulation, without a single human demonstration. One task-agnostic reward model is trained through iteratively generating diverse policies and collecting human preferences over the trajectories; it is then applied for regularizing the behavior of policies in the fine-tuning stage. Our method empirically demonstrates more human-like behaviors on robot hands in diverse tasks, including even unseen tasks, indicating its generalization capability. Machine Learning, Reinforcement Learning, Human Preference ## 1 Introduction Dexterous manipulation with multi-finger hands has been gaining popularity in the research community as it enables performing tasks that require dexterity, such as rotating objects in-hand or opening a water bottle cap (Akkaya et al., 2019; Chen et al., 2022b), which are impossible or very difficult for traditional parallel-jaw grippers (Guo et al., 2017). Model-based motion planning methods are challenging to apply to multi-finger hands due to the high dimensionality of the action space (e.g., the hand we use in this work has 30 degrees of freedom) and the exponential growth of possible contact modes. Consequently, researchers have been resorting to model-free methods, including deep reinforcement learning (RL) with carefully designed reward functions and curricula, for training dexterous manipulation policies (Rajeswaran et al., 2017; Chen et al., 2022a). However, these RL-trained policies tend to generate unnatural and jarring motions. Due to the large action space, the training agent can easily find feasible hand and finger trajectories that satisfy the task completion requirement but do not align with humans' behavioral norms. For example, the robot fingers may twist around each other after throwing an object, or grasp an object in an unnatural way. If we were to deploy these policies in real life, humans might feel uncomfortable and unsafe next to the robots. Humans would also be less likely to trust them and might question the robots' capabilities in solving the tasks. Thus, it is important to train multi-finger hand policies to exhibit human-like behavior when performing different tasks. How could we help the robot escape the Uncanny Valley (Mori et al., 2012)? Figure 1: The proposed method involves an iterative policy fine-tuning procedure with three steps: Step 1 is to generate diverse policies across 20 dexterous hand manipulation tasks. Step 2 is to let human labelers provide their preferences over trajectories collected from the generated policies. Step 3 is to train the task-agnostic reward model for human-like behavior using the labeled samples. The policies are fine-tuned in Step 1 of the next iteration with the reward model.
We are inspired by the recent progress in RL with Human Feedback (RLHF), where a reward model (RM) is learned to encode human preferences over data like text generated by large language models (LLMs) (Ouyang et al., 2022). The model is then used as the reward function for RL to fine-tune the original policy. This process helps align the policy with human intent. In this work, we apply the similar idea to regularize the behaviors of policies for dexterous manipulation tasks. With an iterative process of trajectory generation with existing policies, human labeling preferences over robot videos, learning the reward model, and fine-tuning the policies, we gradually improve the human likeness of policies and also the performance across tasks. Compared to using explicit human demonstrations (e.g., teleoperation), which require dedicated equipment (e.g., gloves or other hand tracking devices) and extensive human labor, our approach alleviates the burdens and improves the scalability of encoding human priors in dexterous manipulation training. Overall, we summarize our contributions as follows: * We propose an iterative pipeline that utilizes human feedback for training diverse multi-finger hand policies and generating human-like behavior in dexterous manipulation tasks. * We build a publicly accessible platform1 that collects human preferences over robot behavior in videos with an intuitive user interface. Footnote 1: [https://sites.google.com/view/openbidexhand](https://sites.google.com/view/openbidexhand) * Without collecting any human demonstrations, we train a single, task-agnostic reward model for the Shadow Hand robot across 20 dexterous tasks in simulated environment, which regularizes the robot to generate more human-like behaviors during iterative fine-tuning of the policies. The policies fine-tuned with RM are shown effective for capturing the desired human-like behaviors by a \(22.3\%\) margin of preference probability over original RL policies after fine-tuning for four iterations. ## 2 Related Work ### Reinforcement Learning from Human Feedback RLHF (Akrour et al., 2011, 2012; Griffith et al., 2013; Christiano et al., 2017; Jaques et al., 2019) has been investigated for at least a decade. It is a sub-category of a broader concept called human-in-the-loop learning process (Wu et al., 2022b; Hejna and Sadigh, 2022). Human feedback data can be essential for some tasks where reward engineering is hard or expensive for RL. Research work has been conducted on leveraging human annotated data or demonstrations for robotic control (Finn et al., 2016; Cabi et al., 2019; Biyik et al., 2022), solving games (Ibarz et al., 2018; Vinyals et al., 2019), and tuning LLMs (Madaan et al., 2022; Ouyang et al., 2022). However, in practice, _human annotation_ or _demonstration_ can be expensive to acquire. _Human preference_(Akrour et al., 2011, 2012; Sadigh et al., 2017; Christiano et al., 2017), in contrast, is easier to collect as feedback and commonly used in the fields like natural language processing (Ziegler et al., 2019; Ouyang et al., 2022) and robotics (Ibarz et al., 2021). Human feedback is used for fine-tuning LLMs recently (Ziegler et al., 2019; Jaques et al., 2019; Stiennon et al., 2020). By leveraging both human demonstrations and preferences, InstructGPT (Ouyang et al., 2022) is shown to significantly outperform previous GPT-3 baseline in terms of human preferences from annotators. It shows the potential of using RLHF as a scalable approach of using human feedback for tuning large models. 
For robotics, human feedback is an important source of information to facilitate the robot learning process (Ibarz et al., 2021). Preference-based learning (Sadigh et al., 2017) has been used to provide the reward function for RL agents, with the benefits of better scalability compared to demonstrations. Abramson et al. (2022) uses the RLHF framework to instruct the learning agents for manipulating objects in a 3D simulated world, and shows improved task success rates over the behavior cloning baseline. The human feedback data is collected to guide the task completion. Few-shot preference learning (Hejna and Sadigh, 2022) is also investigated with multi-task learning for quick adaptation to new tasks. Moreover, people have explored the combination of demonstrations and preferences as guiding signals for robots (Biyik et al., 2022). As shown in Table 1, our proposed RLHF approach distinguishes with other previous work on robotic control with human feedback for the ease of data collection and less engineering effort. With the RLHF approach, the data collection and feedback time are significantly reduced without expensive human demonstrations. Preference over videos requires small amount of efforts from humans thus can produce a large amount of labeled data. ### Natural Human Behavior in Robotic Manipulation There is a branch of work leveraging human demonstrations and imitation learning (Rajeswaran et al., 2017; Jiang et al., 2019; Ren et al., 2021; Alakuijala et al., 2021; Arunachalam et al., 2022; Du et al., 2022; Lopez et al., 2023) for robotics. One of the most well-known work is the Demo Augmented Policy Gradient (DAPG) (Rajeswaran et al., 2017), which regulates the reinforcement learning process with an additional demonstration likelihood loss. Dasari et al. (2022) uses human demonstrations to generate pre-grasp poses for grasping objects. The demonstrations can be collected either with human wearing specific equipment on hands (Christen et al., 2019) or from videos with human hands only (Sivakumar et al., 2022; Mandikal & Grauman, 2022). Directly imitating behaviors from human is also a feasible approach for generating natural behaviors in robot manipulation. However, these works require either human demonstrations in the real world, or online video data with hand pose tracking and re-targeting, which are either expensive or requiring intricate calibration. In Jiang et al. (2019), the state-wise joint limits and the energy function are explicitly learned with parameterized models, which makes the human-like locomotion with the musculoskeletal model a constrained optimization problem. The grasping operation is one key step for dexterous manipulation with hands. For human-like grasping (Zhu et al., 2021; Ye et al., 2022; Wang et al., 2022; Mandikal & Grauman, 2022; Sievers et al., 2022; Du et al., 2022; Qi et al., 2023), previous works have used carefully engineered loss function (Zhu et al., 2021; Sievers et al., 2022; Qi et al., 2023), or optimization under reachability and collision constraints (Wu et al., 2022), or leveraging heavy human demonstrations (Ye et al., 2022; Wang et al., 2022; Du et al., 2022) with DexYCB dataset (Chao et al., 2021), DexGraspNet (Wang et al., 2022) or DEXVIP (Mandikal & Grauman, 2022). 
Our work distinguishes with the above work in several aspects: (1) no demonstration data is used in our method, but only human preferences over videos are collected; (2) our work is not only useful for grasping, but also for a broader category of dexterous manipulation tasks including turning water bottle caps and opening doors; (3) our work focuses on improving the human-likeness of the behaviors instead of just the task completion rate. Although human data is already widely adopted in tuning the performance of the learning agents in previous work, we highlight the rationale of our design choices given key properties of the dexterous robot manipulation problem: (1). Using human preferences: instead of using demonstrations, which can be expensive and requiring human expertise, the human preferences over video are much cheaper to get. We therefore identify human preferences as a more scalable approach than the human demonstrations. The scalability can be critical since eventually the general policy models for robot is expected to be large, and the data size required for training the model should be comparably large (Kaplan et al., 2020). (2). Using a single task-agnostic reward model: human-like behavior regularization should be task-agnostic, as an approximation of a combination of principles involving energy minimization, lowest frictions, and avoiding violating the joint limits, which can be hard to specify in the task rewards. This decides that the reward model should be trained with trajectories across diverse tasks. ## 3 Methodology For the robotic hands to demonstrate human-like behaviors in dexterous manipulation tasks, our goal is to have a universal and easy-to-use module for tuning the performance of policies trained with RL. As shown in Fig. 2, our proposed framework can be split in the following steps: (1). As detailed in Sec. 3.2, a diverse set of policies are generated with a RL algorithm (introduced in Sec. 3.1). (2). For each policy, multiple trajectories can be collected with the policy model deployed in simulation environment, and human preference data over those trajectories are collected in a pairwise manner. Details about trajectories and human preference data collection are introduced in Sec. 3.3 and experimental Sec. 4.2. (3). With the preference data, a parameterized reward model (RM) is trained to represent the preference relationship carried by the human feedback, as described in Sec. 3.4. (4). Sec. 3.5 introduces how to use the trained RM to fine-tune the polices for generating human-like behaviors on robots. ### Preliminary We use Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm in our experiments. The parameterized policy \(\pi_{\theta}\) is optimized with the loss: \(\mathcal{J}(\pi_{\theta};r)=-\mathbb{E}_{s\sim\rho_{\pi},a\sim\pi}[\min(R_{ \theta}A(s,a),\text{clip}(R_{\theta},1-\epsilon,1+\epsilon)A(s,a))]\), where ratio \(R_{\theta}=\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\text{old}}}(a|s)}\) and \(\pi_{\theta_{\text{old}}}\) is the previous model for generating training samples, and the advantage function \(A^{\pi_{\theta}}(s,a)\) is estimated with \(\sum_{t}\gamma^{t}r_{t}(s,a)-V^{\pi_{\theta}}(s)\), with state-value function \(V^{\pi_{\theta}}(s)\), \(r\) is the reward function and \(\rho_{\pi}\) is the state-visituation distribution by policy \(\pi\). 
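For concreteness, the clipped surrogate objective above can be written in a few lines; the following PyTorch fragment is an illustrative sketch (not the authors' implementation) that computes the PPO policy loss for a batch of log-probabilities and advantage estimates.

```python
import torch

def ppo_policy_loss(log_prob_new: torch.Tensor,
                    log_prob_old: torch.Tensor,
                    advantages: torch.Tensor,
                    clip_eps: float = 0.2) -> torch.Tensor:
    """Negative clipped surrogate objective, minimized by gradient descent."""
    ratio = torch.exp(log_prob_new - log_prob_old)                  # R_theta = pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))
```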
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Data Collection Time & Feedback Time & Data Amount & Expert & Engineering Difficulty & Re-targeting \\ \hline RLHF (Ours) & Medium & \(10\sim 20s\) & Large & No & Low & No \\ \hline Demo (Christen et al., 2019) & Large & Minutes & Small & Yes & High & Yes \\ \hline Video (Sivakumar et al., 2022; Mandikal & \(\sim 0\) & \(10\sim 20s\) & Very Large & No & High & Yes \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different approaches for using human feedback on robotic control. ### Diverse Policy Generation To allow collecting human preferences over trajectories, we first need to generate diverse task policies for humans to choose from. We apply the PPO algorithm with additional diversity loss on constraining the action log-probabilities. Without the diversity loss, policies trained even with different random seeds are likely to collapse to very few modes or a single mode in terms of behavior. The diversity loss for updating current policy \(\pi_{\theta}\), given an existing policy set \(\mathcal{S}\), is: \[l_{d}(\mathcal{S})=\frac{1}{|\mathcal{S}|}\sum_{i=1}^{|\mathcal{S}|}\mathbb{E} _{a\sim\pi_{\theta}(\cdot|s)}[\frac{-1}{1+\log\pi_{i}(a|s)}]. \tag{1}\] The pseudo-code for generating a diverse policy set \(\mathcal{S}\) is as Alg. 1. For generating diverse polices before the first iteration of fine-tuning, the reward function in objective \(\mathcal{J}(\pi;r)\) is just the task reward: \(r=r_{\text{Task}}\). The effects of diverse policy generation are evaluated in Sec. 4.2. ### Human Preference Collection In Table 1, we compare the RLHF approach with other commonly seen approaches for leveraging human feedback in robot manipulation tasks. These approaches include learning from human demonstrations in real world (Christen et al., 2019) and imitating behaviors from human with online videos (Sivakumar et al., 2022). ``` Initialize policy set: \(\mathcal{S}=\emptyset\) for policy training iterations \(i=1,\dots,N\)do Initialize policy \(\pi_{i}\), value \(V_{i}\) in PPO; while policy \(\pi_{i}\) not converge do Run policy \(\pi_{i}\) to collect samples \(\{(s,a,r,g,s^{\prime},\text{done})\}\) Update policy \(\pi_{i}\) with loss: \(-\mathcal{J}(\pi_{i})+\xi l_{d}\) (as Eq. (1)) Update value \(V_{i}\) as standard PPO endwhile Update policy set: \(\mathcal{S}=\mathcal{S}\bigcup\{\pi_{i}\}\) endfor ``` **Algorithm 1** Diverse Policy Generation with PPO The desiderata of an approach for leveraging human feedback involves time efficiency for getting both data and human feedback, the human resources and special requirement (_e.g._, whether an expert is required), the engineering difficulty of implement the methods. Due to embodiment mismatch of human bodies and the robots, a re-targeting procedure is usually required for imitating the behaviors from human directly, either via live human demonstrations or videos from a human. This can not only increase the engineering difficulty for using human feedback data, but also induce errors and uncertainties in the pipeline. Our proposed framework with human feedback over trajectory videos has the benefits of requiring a medium amount of time for video data collection and a relatively small amount for human Figure 2: An overview of the proposed framework includes five steps in iteration: 1. Diverse policy training; 2. Trajectories collection from diverse policy set; 3. Human preference over collected trajectory pairs; 4. 
Reward model training with the labeled sample from human preference; 5. Policy fine-tuning for re-generating diverse polices. feedback collection. Since all trajectories are collected in simulation with policies trained using RL algorithms, the amount of available data is large. Moreover, any normal person can serve as a labeler after very brief instruction and familiarization of the user interface, while collecting human demonstrations usually requires a human expert wearing certain equipment to collect the data (Christen et al., 2019). ### Reward Model (RM) The training of the reward model \(r_{\text{HF}}(\cdot;\phi)\) is inspired from Abramson et al. (2022), but here we directly compare two trajectories generated from different policies, and train the RM with the loss: \[l_{\text{RM}}=\mathbb{E}_{\tau_{1},\tau_{2}\sim\mathcal{D}}\big{[}-\log\sigma \big{(}r_{\text{HF}}(\tau_{1};\phi)-r_{\text{HF}}(\tau_{2};\phi)\big{)}\big{]}, \tau_{1}\succ\tau_{2},\] where \(r_{\text{HF}}(\tau_{i};\phi)\) is the reward model, \(\tau_{i}=[s_{1},a_{1},\dots,s_{M},a_{M}],i\in\{1,2\}\) is the stacked state-action pairs of length \(M\), '\(\succ\)' is the preference relationship. The training dataset \(\mathcal{D}\) is collected with the human preferences over trajectories, which are generated using the current set of diverse policies. Each trajectory is transformed into length \(M\) samples with a sliding window. ### Fine-tuning Task Policy with RM The objective for fine-tuning the task policy with learned reward model \(r_{\text{HF}}\) is \(\mathcal{J}(\pi_{\theta};\tilde{r})\) with: \[\tilde{r}=r_{\text{Task}}(s_{t})+\alpha\cdot c\cdot r_{\text{HF}}(\tau_{t}), \tag{2}\] where \(c=|\overline{r_{\text{Task}}}|\) is a scaling term tracking the magnitude of averaging task reward over time. \(\tau_{t}\) is the stacked state-action pairs at time-step \(t\). The score from the reward model \(r_{\text{HF}}\) serves as an additional regularization term for the task reward, with a proper scaling. In our experiments, this objective is shown to be effective for tuning the policy behaviors to follow human preference. ## 4 Experiments ### Task and Environment Settings Bi-DexHands Environments.Bi-DexHands (Chen et al., 2022) is a collection of bimanual dexterous manipulation tasks and reinforcement learning algorithms, aiming at achieving human-level sophistication of hand dexterity and bimanual coordination. 20 tasks are used in our experiments. Most tasks involve two Shadow hands and different manipulated objects. Each hand has 24 degrees of freedom (DoF), which leads to high-dimensional observation and action spaces. Here we briefly introduce the observation space, action space, and reward design in Bi-DexHands. More details can be found in Appendix A. Action.Each Shadow Hand has five fingers with 24 minimum drive units, including four underdriven fingertip units (Finger Distal: FF1, MF1, RF1 and LF1). There are 20 proactive driven units, so the action space for each Shadow Hand is of 20 dimensions, following the original environment settings (Chen et al., 2022). The dual Shadow Hands have 40 dimensions of action space, \(\mathcal{A}_{\text{left hand}}=\mathcal{A}_{\text{right hand}}=20\). Additionally, in some tasks (_e.g_., Switch) the base of each Shadow Hand is movable. This leads to another 6 DoF for the translation and rotation of each hand base in the world frame. Details of the action spaces for each task are provided in Appendix A.1. 
Observation.The observation space consists of three components: \((\mathcal{O}_{\text{left hand}},\mathcal{O}_{\text{right hand}},\mathcal{O}_{ \text{task}})\), representing the state information for the left and right Shadow Hand, and the task-relevant information. The dimensions of the observation space for the tasks range from 414 to 446. More information about the observation spaces for each task is detailed in Appendix A.2. Reward.The reward function applied in experiments follow the original Bi-DexHands environment (Chen et al., 2022). Different tasks have different specific rewards but follow the same design principles. The reward is a dense function of (1) hand positions to grasping points \(d_{left},d_{right}\), (2) the object translation and rotation errors from the target \(d_{target}\), and (3) penalties on actions \(f(\mathbf{a})\) for smoothing trajectories: \[r=c_{0}+c_{1}d_{left}+c_{2}d_{right}+c_{3}d_{target}+c_{4}f(\mathbf{a}), \tag{3}\] where \(c_{0},c_{1},c_{2},c_{3},c_{4}\) are adjustable constants for each task. More details can be found in Appendix A.3. ### Experimental Details Policy Generation and Data Collection.For each iteration, we train 10 policies for each task with different random seeds, with the diverse policy generation loss and the learned RM from last iteration (except for the first iteration). For the first iteration, each policy is trained for 20000 episodes to achieve the task completion, and in subsequent iterations, the policies are initialized with checkpoints from the previous iteration and fine-tuned for 5000 episodes. To ensure the policies can reasonably complete the tasks and thus then used used for trajectory collection, in the first iteration, we visualize the task performance for polices at all checkpoints, spread by a 1000-episodes interval, and choose only those checkpoints with successful task completion. In the trajectory collection phase, we collect 5 trajectories with each policy checkpoint in simulation. About 12300 trajectories2 across 20 tasks are collected for the first iteration. After reviewing the videos, we decide to discard three tasks (SwingCup, Kettle and DoorCloseOutward) due to the difficulty in task completion. For rest of the iterations, about 4100 trajectories3 are generated in each iteration. The hyperparameters for policy generation with PPO algorithm are shown in Appendix C. Footnote 3: Fewer checkpoints are saved compared to the first iteration. The trajectories for trained policies on three tasks (HandOver, Pen, CatchOver2Underarm) are visualized in Fig. 3. For each trained policy, 10 trajectories are collected by depolving the policy in the corresponding environment. The t-SNE (Van der Maaten and Hinton, 2008) plots show the clustering results in embedded 2D space for observed states in collected trajectories with different policies. The observed states here only involve the joint positions of the two hands, since other observations like velocities or forces may have different magnitudes of values and unfairly affect the results. From the results we can see the different policies well separated in the state space. This benefits the downstream procedure for human preference data collection. In Appendix A.4, we compare the proposed approach with policy entropy method and without any bonus for diversity to justify our design choice. Human Feedback Collection.We recruit five human labelers providing preferences with the feedback collection interface we build. 
A total of 1000 feedback over trajectory pairs are collected for each iteration, which is then converted into \(1\times 10^{5}\sim 2\times 10^{5}\) labeled samples4. Each feedback takes about 10-20 seconds and the data collection can be finished in several hours. Each preference is the choice over 'Left', 'Right' and 'Not Sure' based on the given two side-by-side trajectory videos for the same task. The 'Left' indicates that the trajectory on the left shows a more human-like behavior than the right and vice versa. The preference data is processed to be stacked state-action pairs with a sliding window on each trajectory. The window size is chosen to be 8 in our experiments, which corresponds to about 0.33 seconds in videos. The processed data is used for training the RM with loss as Eq. (2). Fig. 4 visualizes a side-by-side comparison of frames in five tasks5, showing the differences of unnatural and human-like behavior. Footnote 4: This number varies due to different trajectory lengths. Feedback in the format of Preferences over Trajectories.Previous works have used human preference over video clips (Christiano et al., 2017; Abramson et al., 2022) or the whole trajectories (Akrour et al., 2011, 2012). Compared with whole trajectories, video clips are shorter and therefore more time efficient for label collection. However, Figure 4: Comparison of unnatural (left) behaviors with original RL polices and human-like (right) behaviors after tuning with RM for four iterations on five tasks. Red circles mark out the unnatural poses of the hand, including twisted fingers, over-stretched hand poses and weird correlated positions of joints. Figure 3: Visualization of t-SNE plots for policies trained with the proposed diverse policy generation method on three tasks. Each color corresponds to the one policy. the previous work with video clips (Christiano et al., 2017; Abramson et al., 2022) focus on the improvement on the task completion for Atari games, MuJoCo or Playhouse environments, which are all long-horizon tasks. Although solving tasks in the Bi-DexHands environment require complex dexterous manipulation, the intentions of the robots are usually straightforward and the tasks have relatively short horizons, ranging from 20 to 600 timesteps (details in Appendix A.5). Clipping the trajectories in this environment can increase difficulties for the labelers to provide useful-preferences. Fig. 5 visualizes the trajectories for for tasks: Pen, HandOver, PushBlock and DoorCloseInward. Models and RM Training.The RM in our experiments is parameterized by a fully-connected neural networks with 512-512-512-128-32 hidden units and Tanh activation function for both hidden layers and the output layer. The input shape of RM is \((d_{\mathcal{O}}+d_{\mathcal{A}})\times M\), where \(d_{\mathcal{O}}=24\) is the dimension of joint positions of the full robot hand6 and \(d_{\mathcal{A}}=20\) is the dimension of the proactively driven joint actions7 and \(M=8\) is the number of stacked frames. The optimization process uses Adam optimizer (Kingma & Ba, 2014) for minibatch stochastic gradient descent of 50000 epochs, with batch size 4096, learning rate \(1\times 10^{-3}\) and the multiplicative learning-rate scheduler StepLR in PyTorch with step size as 1000 and gamma as 0.5. For each iteration, the newly collected data is appended to the previous data as a whole for training a new RM initialized with the checkpoint of the previous iteration. 
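A minimal sketch of the reward model and its training loss, following the architecture and hyperparameters described above (the exact layer composition and the handling of the output activation are assumptions based on this description, not the authors' code). The last function also illustrates the shaped reward of Eq. (2) used during fine-tuning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_OBS, D_ACT, M = 24, 20, 8                      # joint positions, hand actions, stacked frames

class RewardModel(nn.Module):
    """Task-agnostic RM scoring a window of M stacked (joint-position, action) frames."""
    def __init__(self):
        super().__init__()
        sizes = [(D_OBS + D_ACT) * M, 512, 512, 512, 128, 32, 1]
        layers = []
        for d_in, d_out in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(d_in, d_out), nn.Tanh()]   # Tanh on hidden and output layers
        self.net = nn.Sequential(*layers)

    def forward(self, tau: torch.Tensor) -> torch.Tensor:   # tau: (batch, (D_OBS + D_ACT) * M)
        return self.net(tau).squeeze(-1)

def preference_loss(rm: RewardModel, tau_pref: torch.Tensor, tau_rej: torch.Tensor) -> torch.Tensor:
    """l_RM = -log sigmoid(r(tau_pref) - r(tau_rej)), where tau_pref is the preferred window."""
    return -F.logsigmoid(rm(tau_pref) - rm(tau_rej)).mean()

def shaped_reward(r_task: float, r_hf: float, c: float, alpha: float = 0.2) -> float:
    """Eq. (2): task reward regularized by the RM score, scaled by |mean task reward| c."""
    return r_task + alpha * c * r_hf
```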
Footnote 6: Only \(\mathcal{J}_{p}\) in \(\mathcal{O}_{\text{left hand}}\) and \(\mathcal{O}_{\text{right hand}}\), see Appendix A.2. Footnote 7: Only \(\mathcal{A}_{\text{left hand}}\) and \(\mathcal{A}_{\text{right hand}}\), see Appendix A.1. We want to emphasize our choices of the input for the RM as a universal module for regularizing the behavior of Shadow Hand. As opposed to use the entire observation and action for the dual-hand tasks (detailed in Appendix A, which can be of hundreds of dimensions, we choose only the critical joint states and hand-only actions as the inputs to the RM. This choice of design benefits from several perspectives: (1) reduction of input dimensions increases the training and inference efficiency, without the need of using a much larger neural network; (2) this also helps to reduce the required number of samples for training the RM; (3) since the observations and actions only involve the joint states on Shadow Hands, it is _task-agnostic_. We assume that the human-like behavior is mostly affected by the relative motions of fingers on hand instead of the overall movement of the hand basis for these dexterous manipulation tasks. ### Reward Model Evaluation The reward model needs to (1) be consistent with labelers preference and (2) provide information gain. (1) can be evaluated on whether the RM matches with the labelers' preference, and this relationship can be task specific. Some tasks like SwingCup may not show the consistence, and therefore being discarded in later training and feedback collection. (1) can be measured by evaluating the RM score on those policies, and comparing it with the human preferences. Assuming (1) holds, sastifying (2) means RM needs to have a clear preference over the existing policies for each task. After collecting and summarizing the human feedback on sampled trajectories, we introduce the human preference score to quantify the preference over policies. **Human Preference Score** is the metric showing the prefer Figure 5: Visualization of hand and object trajectories in four tasks. From left to right it shows the procedure of the task completion. Figure 6: Comparison of the accumulative rewards over trajectories with the RM (top) and the human preference scores (bottom). ence of humans over the policy sets. \[c_{\text{HF}}=\frac{1}{M}\sum_{k=1}^{M}\big{[}\mathbb{1}(\tau_{i,k}\succ\tau_{j,k} )-\mathbb{1}(\tau_{j,k}\succ\tau_{i,k})\big{]}, \tag{4}\] \[\tau_{i,k}\in\mathcal{T}_{i},\tau_{j,k}\in\mathcal{T}_{j},\] where \(\mathbb{1}(\tau_{i,k}\succ\tau_{j,k})\) indicating the labeler's preference of trajectories \(\tau_{i,k}\) over \(\tau_{j,k}\) and vice versa. \(\mathcal{T}_{i}\) is the set of trajectories collected with the policy set indexed by \(i\). \(\tau_{i,k}\) and \(\tau_{j,k}\) are randomly selected from the corresponding sets. \(M\) is the set of paired samples for labelers to provide preference. This score rules out the samples labeled as 'Not Sure' in the data. Consistency between RM and Human Preference.Before fine-tuning the task policies, we evaluate the RM from human preferences, by letting the labelers to provide the human preference scores as comparison with the average rewards from the RM over the comparing trajectories. The results are displayed in Fig. 6 for two tasks. The ten sets of polices with random seeds are evaluated in this experiment, 25 trajectories are collected for each policy set. The top two figures show the evaluated scores with the RM, averaged over the entire trajectories. 
The two figures at the bottom show the human preference score \(c_{\text{HF}}\) by directly collecting human preferences over randomly paired trajectories for different policy sets. The results in Fig. 6 show the consistency of RM scores and the human preference scores. For each task, the human preference scores are evaluated on a randomly sampled batch of trajectories for collecting human preferences, and the RM score is evaluated on the whole dataset collected with the policy sets. As a result, there might be slight differences between the human preference score and the RM scores for some model indices. In general, the trained RM represents the preference bias from human feedback. More evaluation results of the trained RMs in each iterations are shown in Appendix. B. The breakdown results for each labelers are shown in Appendix B.1. ### Human-like Robot Polices with RLHF Preference Results.After evaluating the learned RM, we apply the task-agnostic RM as an additional term in the task reward to fine-tune the task-specific policies, as introduced in Sec. 3.5. The RM training and policy fine-tuning process are iterated for four times in our experiments. For the first and final iterations (the fourth), Table 2 shows the results of preference evaluation for polices with RM fine-tuning and without it. The numbers in the table indicate the percentage of evaluation trials for each case. For example, in the row 'First Iteration', \(25.1\%\) of evaluation trials show a preference of 'Policy+RM' over 'Original Policy', and \(22.2\%\) the opposite. With \(52.7\%\) probability, the human labelers are not sure on the preference. 'First Iteration' indicates the comparison after one iteration of RM training. 'Final Iteration' indicates the comparison after multiple iterations of RM training and policy tuning. As shown in the table, the effects of RM fine-tuning can be insignificant with only one iteration. However, the preference over polices fine-tuned with the RMs increases from \(25.1\%\) to \(35.7\%\) with more iterations of RM training and policy fine-tuning. In Appendix. D, the breakdown results for the preference over each task are shown. The high probability of uncertainty ('Not Sure') is within our expectation, since the comparison of human-like behaviors can be very subtle in some cases and we ask the labelers to only label a clear preference when they are certain to see a significant difference between the compared trajectories. In the results, the policies are evaluated with human labelers using the same interface as the one for providing preference feedback. The same people (five labelers) for providing feedback are providing evaluations. 500 trials of evaluation are provided for each comparison. The time for providing evaluation is the same as for providing feedback, as about 10-20 seconds per evaluation. \begin{table} \begin{tabular}{c c c c} \hline \hline & Policy+RM & Original Policy & Not Sure \\ \hline First Iteration (Seen Tasks) & \(25.1\%\) & \(22.2\%\) & \(52.7\%\) \\ \hline Final Iteration (Seen Tasks) & \(35.7\%\) & \(13.4\%\) & \(50.9\%\) \\ \hline Final Iteration (Unseen Tasks) & \(24.9\%\) & \(13.0\%\) & \(62.1\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Preference results. 
\begin{table} \begin{tabular}{c|c c} \hline \hline Task & Policy+RM & Original Policy \\ \hline ShadowHand & \(0.40\pm 0.20\) & \(0.98\pm 0.94\) \\ \hline Switch & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) \\ \hline CatchOver2Underarm & \(0.62\pm 0.25\) & \(0.74\pm 0.24\) \\ \hline CatchAbreast & \(0.60\pm 0.23\) & \(0.58\pm 0.24\) \\ \hline HandOver & \(0.79\pm 0.31\) & \(0.83\pm 0.24\) \\ \hline BlockStack & \(0.39\pm 0.24\) & \(0.38\pm 0.29\) \\ \hline CatchUnderarm & \(0.56\pm 0.28\) & \(0.68\pm 0.23\) \\ \hline BottleCap & \(0.58\pm 0.39\) & \(0.63\pm 0.28\) \\ \hline LifHomeParam & \(0.00\pm 0.00\) & \(0.10\pm 0.14\) \\ \hline TwoCatOnUnderarm & \(0.05\pm 0.05\) & \(0.04\pm 0.07\) \\ \hline DoorOpenInward & \(0.03\pm 0.04\) & \(0.21\pm 0.26\) \\ \hline DoorOpenOutward & \(0.85\pm 0.34\) & \(0.85\pm 0.32\) \\ \hline DoorCloseInward & \(0.5\pm 0.46\) & \(0.54\pm 0.47\) \\ \hline PushBlock & \(0.46\pm 0.37\) & \(0.51\pm 0.40\) \\ \hline Scissors & \(0.85\pm 0.34\) & \(0.9\pm 0.27\) \\ \hline Pen & \(0.78\pm 0.29\) & \(0.66\pm 0.43\) \\ \hline GraspAndPlace & \(0.61\pm 0.36\) & \(0.82\pm 0.23\) \\ \hline Total & \(0.47\) & \(0.55\) \\ \hline \hline \end{tabular} \end{table} Table 3: Success rates on training tasks. Success Rates.Table 3 summarizes the success rates for all 17 tasks (seen) finally adopted for training RM and tuning polices. Each task is evaluated with policies trained with 10 random seeds, and the results in the table show the means and standard deviations of the final success rates over the 10 runs. It shows that success rates are negatively affected by the RM fine-tuning process by \(8\%\). This is within our expectation since the RM serves as an additional regularization term for the original task reward. There is the trade-off between the task completion and the human-like behaviors in our settings. The learning curves for all tasks are shown in Appendix A.6. ### Generalization to Unseen Tasks With the trained RM in the last iteration, we further test its generalization capability in the four unseen tasks as shown in Fig. 7, CatchUnderarmPen, CatchAbreastPen, TwoCatchAbreast, GraspAndPlaceEgg, where the manipulated objects or the movement objectives are changed. For each task, original policies are trained under 10 random seeds for 50000 episodes. For fine-tuning, the RM is used to fine-tune ten polices for 20000 episodes in each task. The preference results are shown in Table 2 (with breakdown results in Appendix D Table 12). The preference scores of the RM on the unseen tasks are lower than on training tasks, but RM still outperforms original policies by \(11.9\%\), which shows the generalized improvement of using RM for fine-tuning polices on unseen tasks. The success rates for unseen tasks are shown in Table 4. We notice that the success rates even increase in two of the unseen tasks (CatchAbreastPen and CatchUnderarmPen) with RM fine-tuning. ### Lessons for Reward Model During the training of RM and policy fine-tuning, we find that some detailed choices can significantly affect the final performances. Therefore, we emphasize the subtleties in using RM for tuning behaviors of robot policies. This section also serves as ablation study for our design choices. **Objectives.** In Sec. 3.5, we introduce the objective used for tuning the robot task-specific policies with the RM, and we compare it with an alternative objective (second) for RLHF commonly used in other works, as detailed in Appendix. E.1. In Fig. 
18 we compare the two objectives for three different tasks: HandOver, CatchAbreast and TwoCatchUnderarm. The adaptive scaling coefficient is \(c\) in Eq. (2), which is also the smoothed value of the task reward. For the second objective, although there is no such coefficient in the equation, we also track this value as an indicator of the task completion during training. We take \(\alpha=0.2\) and \(\beta=20.0\) for this experiment8. The results show that although the second objective optimizes the policy with higher human feedback rewards, it severely hurts the task completion performance in the process of policy tuning, since the task rewards are very small and not increasing during the training. Under the first objective, the policies manage to improve the reward for both the task completion and human preference. Footnote 8: We also test with larger \(\beta\) values and the results are similar. **Scaling matters**. Following the objective in Eq. (2) for policy tuning, we further notice that the scaling coefficients \(\alpha\) and \(c\) can affect the policy performance significantly. Without the proper scaling of the human preference reward, the performance of task completion can also be bad. In Appendix E.2 Fig. 19, we compare the task success rates and episodic rewards for this design choice on three tasks: HandOver, BlockStack, BottleCap. Specifically, whether using the adaptive coefficient \(c\) (\(c=1\) if not adaptive) and the value of \(\alpha\) are the hyperparameters in this experiment. Although the effects are not as significant as the change of optimization objective in above paragraph, it also shows that using the adaptive coefficient with a smaller scaling factor \(\alpha\) usually leads to higher task success rates as well as relatively high human feedback rewards, which indicates the better policy performance in terms of both task completion and human-like behaviors. \begin{table} \begin{tabular}{c|c c} \hline \hline Task & Policy+RM & Original Policy \\ \hline CatchAbreastPen & \(0.52\pm 0.37\) & \(0.51\pm 0.35\) \\ \hline TwoCatchAbreast & \(0.00\pm 0.00\) & \(0.00\pm 0.00\) \\ \hline CatchUnderarmPen & \(0.70\pm 0.36\) & \(0.67\pm 0.37\) \\ \hline GraspAndPlaceEgg & \(0.65\pm 0.35\) & \(0.74\pm 0.29\) \\ \hline Total & \(0.47\) & \(0.48\) \\ \hline \hline \end{tabular} \end{table} Table 4: Success rates on unseen tasks. Figure 7: Visualization of four unseen tasks. **The RM may not serve to guide the task completion.** For some hard tasks like Kettle and DoorCloseOutward, the PPO algorithm with certain entropy bonus for boosting exploration is not sufficient to acquire the optimal policy in terms of task completion. In this case, RM with human-like behavior as the criterion is unlikely to help with the task completion even after many iterations of policy fine-tuning. **The RM can be hard to achieve human-like behavior in special cases.** For some tasks like SwingCup, it could be hard for PPO policies to explore human-like behaviors even with the proper specification of initial pre-grasp poses and sophisticated reward engineering. In this case, the diverse polices to collecete human feedback do not even contain any human-like behavior. It is can be very hard to use the RM approach to get the human-like behavior after many iterations of fine-tuning. ## 5 Limitations and Conclusions In this work we propose using human preference feedback to learn a universal human prior for multi-finger in-hand manipulation over diverse dexterous tasks. 
With an iterative process of policy learning, feedback collection, and human reward model learning, the proposed method based on RLHF can significantly improve the human likeness of the hand trajectories without hurting too much of the task performance. There are also limitations and potential extensions for current method and experiments. The subtleties of in-hand manipulation can affect the human likeness thus make it hard for providing human preference over videos. The amount of human preference data is limited, and more data is expected to further improve the learned reward model for calibrating human-like behaviors. Although human preferences are cheaper than human demonstrations, we believe there are approaches to further alleviate the required human efforts by changing the fashion of feedback or improving the data efficiency by inspecting the preference data and leveraging it with prioritized sampling mechanism. These are promising directions to explore in the future work.
2305.13057
Causality-Aided Trade-off Analysis for Machine Learning Fairness
There has been an increasing interest in enhancing the fairness of machine learning (ML). Despite the growing number of fairness-improving methods, we lack a systematic understanding of the trade-offs among factors considered in the ML pipeline when fairness-improving methods are applied. This understanding is essential for developers to make informed decisions regarding the provision of fair ML services. Nonetheless, it is extremely difficult to analyze the trade-offs when there are multiple fairness parameters and other crucial metrics involved, coupled, and even in conflict with one another. This paper uses causality analysis as a principled method for analyzing trade-offs between fairness parameters and other crucial metrics in ML pipelines. To practically and effectively conduct causality analysis, we propose a set of domain-specific optimizations to facilitate accurate causal discovery and a unified, novel interface for trade-off analysis based on well-established causal inference methods. We conduct a comprehensive empirical study using three real-world datasets on a collection of widely-used fairness-improving techniques. Our study obtains actionable suggestions for users and developers of fair ML. We further demonstrate the versatile usage of our approach in selecting the optimal fairness-improving method, paving the way for more ethical and socially responsible AI technologies.
Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li
2023-05-22T14:14:43Z
http://arxiv.org/abs/2305.13057v3
# Causality-Aided Trade-off Analysis for Machine Learning Fairness ###### Abstract There has been an increasing interest in enhancing the fairness of machine learning (ML). Despite the growing number of fairness-improving methods, we lack a systematic understanding of the trade-offs among factors considered in the ML pipeline when fairness-improving methods are applied. This understanding is essential for developers to make informed decisions regarding the provision of fair ML services. Nonetheless, it is extremely difficult to analyze the trade-offs when there are multiple fairness parameters and other crucial metrics involved, coupled, and even in conflict with one another. This paper uses causality analysis as a principled method for analyzing trade-offs between fairness parameters and other crucial metrics in ML pipelines. To practically and effectively conduct causality analysis, we propose a set of domain-specific optimizations to facilitate accurate causal discovery and a unified, novel interface for trade-off analysis based on well-established causal inference methods. We conduct a comprehensive empirical study using three real-world datasets on a collection of widely-used fairness-improving techniques. Our study obtains actionable suggestions for users and developers of fair ML. We further demonstrate the versatile usage of our approach in selecting the optimal fairness-improving method, paving the way for more ethical and socially responsible AI technologies. ## I Introduction Machine learning (ML) techniques are now essential for everyday applications in safety-critical domains like credit risk evaluation [1] and criminal justice [2]. However, ML models have exhibited inherent biases [3, 4], leading to real-world consequences such as discriminatory outcomes between privileged and underprivileged groups [5, 6, 7]. To address this, various _fairness-improving methods_ have been proposed and studied by the software engineering (SE) community, including mitigating unfairness through data processing [8, 4, 9], model modification [10, 11], or prediction alteration [3]. Despite the significant progress made, an important question arises: _what are the trade-offs made by these fairness-improving methods in the ML pipeline?_ It is a widely held belief that there exists a trade-off between fairness and the functional quality properties of the ML pipeline, such as the ML performance and the ML model robustness. In general, empirical studies from the SE community and theoretical analyses from the ML community have demonstrated that optimizing for performance may come at the cost of fairness, and vice versa [12, 13, 14, 9, 8]. Furthermore, many metrics concentrating on fairness, such as group fairness and individual fairness, are inherently incompatible [13]. These trade-offs render additional complexity to the process of improving fairness in ML systems and are not well understood. As a result, the lack of transparent and manageable trade-off analyses makes it challenging for developers to make informed decisions within the ML pipeline. To understand trade-offs, it is essential to comprehend the interactions among fairness-improving methods as well as different metrics. More importantly, it is crucial to "disentangle" the true cause-effect relationships from the observed correlations. For example, a fairness-improving method may simultaneously affect the model's fairness on both the training set and the test set. 
However, it is unclear how would training fairness affect test fairness due to the confounding influence introduced by the fairness-improving method. To hurdle this obstacle, we advocate for the use of causality analysis [15] -- a principled approach to learning the causal relations between random variables -- to better understand trade-offs between fairness parameters and other crucial metrics in the ML pipeline when fairness-improving methods are enforced. Despite the promising potential of causality analysis, several challenges must be addressed to fully harness its power. A typical causality analysis procedure can be broadly divided into two phases: 1 causal graph learning, and 2 causal inference. In the first phase (1), a causal graph is learned from data, whose nodes essentially represent random variables (which are fairness parameters and other metrics in the ML pipeline) and the edges encode the causal relationships among nodes. The second phase (2) involves applying an inference algorithm to the learned causal graph to quantitatively estimate the causal effect from one node to another. However, both phases present considerable challenges in our research context. First, when treating fairness parameters and other metrics in the ML pipeline as variables, the process of causal graph learning becomes a causal discovery problem involving mixed data types, such as continuous and discrete variables. Furthermore, the complex nature of ML pipelines, which involve numerous metrics and methods, makes learning an accurate causal graph challenging due to the well-known "curse of dimensionality" problem. Existing causal discovery algorithms have difficulty managing cases with this level of complexity. Second, even if a causal graph can be successfully learned, leveraging the graph to understand trade-offs between fairness-improving techniques and metrics remains under-explored. Specifically, it is unclear how to recast trade-off analyses into a series of standard causal inference queries such that well-established causal inference algorithms can be applied smoothly. To address these challenges, we propose a novel causal analysis framework for understanding trade-offs. In the causal graph learning phase (1), we involves fairness-improving methods as additional interventional variables to guide the learning process, and introduce a novel mechanism to convert discrete variables into continuous one without losing information. In the causal inference phase (2), we systematically formulate trade-offs in typical ML pipelines using causality analysis. Our formulation provides a novel and unified interface over various fairness-improving methods and critical metrics in the ML pipeline, covering both the training and testing phases. To gauge the effectiveness of our approach, we conduct extensive experiments using three real-world datasets: Adult [16], COMPAS [17], and German [18] with 12 widely-used fairness-improving methods used (see Table II). This empirical study enables a comprehensive analysis of the trade-offs and leads to a number of intriguing findings. First, the selection of fairness metrics can significantly affect the pattern of observed trade-offs, highlighting the need for a systematic and automated approach to deciding optimal metrics for a particular scenario. (b) Second, certain metrics, such as Average Odds Difference (AOD) and Theil Index (TI), play a central role in trade-offs (including fairness vs. performance and fairness vs. robustness). 
These metrics act as the cause for trade-offs more frequently than other metrics. Third, the trade-off between fairness and robustness, though not extensively explored in the SE community, is inevitable. This observation highlights the importance of taking both robustness and fairness into account when calibrating the ML pipeline. Furthermore, we demonstrate a versatile application of our framework in the selection of optimal fairness-improving methods. Empirical results in Sec. VII indicate that our approach outperforms state-of-the-art methods. To conclude, this paper makes the following contributions: * To our knowledge, this is _the first work_ to introduce causality analysis as a principled approach to analyzing trade-offs between fairness and other critical metrics in ML pipelines. * We propose a novel causality analysis framework to practically and effectively concretize causality analysis in the context of ML pipelines when fairness-improving methods are enforced. In particular, we deliver a set of domain-specific optimizations to enable more accurate causal discovery and design a unified interface for trade-off analysis on the basis of standard causal inference techniques. * We conduct an extensive empirical study on representative fairness-improving methods and real-world datasets. We obtain actionable suggestions for users and developers in the ML pipeline when fairness-improving methods are enforced. **Open Source.** The source code and data are available at [19]. ## II Preliminary ### _Fairness-Improving Methods_ A significant amount of research has been devoted to the investigation of fairness in ML [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. In general, most fairness-improving methods can be classified according to the ML stages when they are involved and the fairness objectives they aim to achieve [12, 24]. **Methods by Different Stages.** Fairness-improving methods can be categorized based on the stage of the machine learning pipeline they are involved in. These stages can be classified as pre-processing, in-processing, or post-processing methods. Pre-processing methods manipulate the training data, while in-processing methods modify the model during training. Post-processing methods, on the other hand, adjust the predictions made by the trained model. Representative methods for each type of fairness-improving method have been selected and analyzed in our study (see Table II for more information). **Methods by Different Objectives.** Fairness-improving methods can also be categorized based on the fairness objectives they aim to achieve. Generally, there are two primary types of fairness objectives: individual fairness and group fairness. Individual fairness refers to treating all individuals equally. Zhang et al. [24] introduced the Causal Discrimination Score (CDS) to quantify the individual fairness of a model. On the other hand, group fairness focuses on the fairness of a model's predictions for different groups defined by one or more sensitive attributes. A model is considered fair if it treats all subpopulations equally. Common group fairness metrics include Disparate Impact (DI) [23] and Statistical Parity Difference (SPD) [25], as presented in Table III. **Motivation.** The different types of fairness-improving methods, along with their varied objectives, can introduce complexity and challenges in achieving a balanced solution. 
For instance, two well-known fairness objectives, group fairness and individual fairness, are often mutually incompatible, leading to an inevitable trade-off between them [13, 14]. A similar phenomenon is also observed in other objectives such as robustness versus accuracy [26]. _These complex relationships behind potentially conflicting objectives render achieving a good trade-off among multiple metrics (fairness, accuracy, robustness) challenging._ Overall, this research advocates the usage of causality analysis, a well-established and systematic approach, for understanding the complex relationships between variables presented in this fairness-improving context over ML pipeline. We envision that, by employing causality analysis, we can make the process of improving fairness notably more transparent and manageable. We present the preliminary knowledge of causality analysis in the following subsection. ### _Causality Analysis_ Causality analysis effectively provides a systematic and comprehensive understanding of the complex causal relationships between variables. This desirable feature is attained through two fundamental steps: causal discovery and causal inference. According to Judea Pearl [15], causation (or causal relation) refers to the relationship between two variables, wherein changes in one variable cause changes in the other. This concept differs from correlation, which merely indicates the statistical dependence between two variables. Considering a simple example, \(X\gets Z\to Y\), where \(X\) and \(Y\) are correlated, but \(X\) does not cause \(Y\) (and vice versa). Here, \(Z\) is a _confounder_ because it simultaneously causes \(X\) and \(Y\). Indeed, the correlation between \(X\) and \(Y\) is induced by \(Z\). This example highlights why correlation alone cannot imply causation. To address more complex cases, we introduce the definition of _causal graph_ as follows: **Definition 1** (Causal Graph).: _A causal graph (a.k.a., Bayesian network) is a directed acyclic graph (DAG) consisting of nodes \(V\) and edges \(E\), i.e., \(G=(V,E)\), where each node \(X\) (\(X\in V\)) represents a random variable and each edge \(X\to Y\) (\(X,Y\in V\)) represents a directed causal relation from \(X\) to \(Y\). The nodes in the graph can be categorized into two groups: endogenous nodes, which are determined by the values of other nodes in the graph, and exogenous nodes, which are determined by external factors._ Causal graphs facilitate reasoning both qualitative and quantitative causal relations between variables. In practice, however, the causal graph is commonly _unknown_. For another, although causal relationships can be inferred if interventions are properly applied, the majority of real-world variables cannot be simply intervened [27]. This obstacle necessitates causal discovery, which aims to reconstruct the causal graph from observational data. **Causal Discovery.** Causal discovery is the process of inferring a directed acyclic graph (DAG), where each node represents a variable and each edge represents a causal relation. Holistically, mainstream causal discovery methods can be categorized into three groups: constraint-based, score-based, and model-based methods [27]. Constraint-based methods use the _conditional independence test_ to determine the edges and properties of special relationships (such as _confounder_) to infer the direction of causal relations [28, 29, 30]. 
Score-based methods formulate causal discovery as a search problem and evaluate the quality of the causal graph using a score function [31, 32, 33]. For model-based methods, asymmetry is exploited to identify causal relations [34, 35, 36]. **Causal Inference.** Causal inference quantitatively estimates the causal effect of one variable on another variable based on a causal graph. Although the causal graph is generally interpretable, estimating the causal effect can be challenging. Suppose, for instance, there are three variables \(X\), \(Y\), and \(Z\), where \(Z\to X\to Y\) and meanwhile \(Z\to Y\). Here, it is challenging to accurately determine the true causal effect of \(X\) on \(Y\) while minimizing the impact of \(Z\). In causality analysis, Average Treatment Effect (ATE) is commonly utilized to address this issue [15]. Below is the definition of ATE: **Definition 2** (ATE).: _In causal graph \(G\), ATE of \(X\) on \(Y\) can be computed as:_ \[\text{ATE}=\mathbb{E}[Y\mid\text{do}(X=\mathbf{x}_{1})]-\mathbb{E}[Y\mid\text{do} (X=\mathbf{x}_{2})] \tag{1}\] _where the \(\text{do}(\cdot)\) operator denotes a counterfactual query, which represents a hypothetical intervention over the value of a variable \(X\) (i.e., \(X\) is set to a constant value \(\mathbf{x}\), which may not be observed in the data). \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are two arbitrary values of \(X\) that are determined by the user._ Since ATE employs counterfactual queries, it cannot be explicitly estimated from observational data as it is a _causal estimand_. The process of making an incomputable causal estimand computable is known as _causal inference_[37]. This paper uses a popular causal estimation method, double machine learning (DML) [38]. Continuing with the preceding example, where \(Z\to X\to Y\) and meanwhile \(Z\to Y\). \(X\) denotes the treatment variable, \(Y\) denotes the outcome variable, and \(Z\) denotes the confounder.1 DML uses two arbitrary machine learning models to estimate \(X\) and \(Y\) from \(Z\), respectively. The effect of the confounder \(Z\) can then be "removed" by estimating the difference between the predicted and observed values of \(X\) and \(Y\) (i.e., the residuals). DML makes no assumptions about the form of the confounder \(Z\)'s effect [39], making it applicable to a wide range of causal relations. Footnote 1: For simplicity, we assume that they all represent a single variable, although, in more general scenarios, they may also be regarded as a set of variables. ## III Study Pipeline Fig. 1 depicts our study workflow, which consists of two main phases: (1) graph model construction and (2) trade-off analysis. In the first phase, we collect a substantial quantity of data, which includes un-interventional observed metrics such as test accuracies and SPD scores, and interventional metrics like the fairness-improving methods' parameters. Certain variables are referred to as "un-interventional" because their values cannot be directly changed by the user (e.g., the model test accuracy). We then use a causal discovery algorithm to learn a causal graph, where nodes represent variables (metrics and parameters)2, and directed edges represent causal relations between them. Footnote 2: In our context, exogenous nodes only represent the user-determined parameters of fairness-improving methods or model training, and endogenous nodes represent the observed metrics. 
Hence, with a slight abuse of notation, exogenous nodes are also referred to as interventional nodes, and endogenous nodes are referred to as observational (un-interventional) nodes in this paper. In the second phase, we propose counterfactual queries according to the identified trade-off between two un-interventional nodes on the causal graph; these nodes typically include the metrics of model accuracy, fairness, and robustness. Our aim is to explain the underlying cause for the trade-off among these important metrics, thus providing insights into the influence of the ML fairness-improving methods over other important factors in the ML pipeline.

### _Graph Model Construction_

This study seeks to reveal the intricate relationships among diverse kinds of metrics. However, the sheer number of metrics and the complexity of the relationships between them pose considerable obstacles to learning a sufficiently precise causal graph. Below, we illustrate how we address this challenge from the perspectives of data collection and graph learning.

**Collecting Training Data.** Due to the limited number of variables (e.g., only 14 nodes are involved in Baluta et al.'s work [40]), prior works tend to focus on un-interventional variables. In contrast, because of the large number of variables involved in this problem, the practice of [40] fails to guarantee that the entire value range of each variable is exhaustively covered. Therefore, we introduce fairness-improving methods as interventional nodes in the causal graph. For parameter-tunable methods, we convert and normalize their parameters into a ratio, which ranges from 0 to 1. For non-parameter-tunable methods, we use probabilistic sampling, also converting them into ratios as the following equation shows.

\[D^{\prime}=T_{fair}(\alpha D)\cup(D-\alpha D) \tag{2}\]

where \(D\) is the original dataset, \(T_{fair}\) is the fairness-improving method, \(\alpha\) denotes the ratio, \(\alpha D\) denotes the selected \(\alpha\) fraction of the original dataset, \((D-\alpha D)\) denotes the remaining part, and \(D^{\prime}\) is the resulting dataset obtained by applying \(T_{fair}\) to the \(\alpha\) fraction (a short code sketch of this sampling procedure is given at the end of this subsection). Accordingly, all fairness-improving methods can be represented as a ratio and can be treated as nodes in the causal graph. By intervening on these nodes, we can collect sufficient, high-quality training data for causal discovery, with all possible value ranges of the un-interventional variables covered.

**Learning Causal Graph.** We employ DiBS [33], a state-of-the-art score-based method, to learn the causal graph. DiBS performs causal discovery with variational inference and is more efficient than other methods that rely on Markov chain Monte Carlo (MCMC) sampling. As will be shown in Sec. V, DiBS is able to accurately learn causal graphs that align well with expert knowledge.
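To make the interventional data collection in Eq. (2) concrete, the following is a minimal sketch of applying a non-parameter-tunable fairness-improving method to an \(\alpha\)-fraction of the training data; the callable name `t_fair` is a hypothetical stand-in for any such method, and the sketch is illustrative rather than the exact implementation used in this paper.

```python
# A minimal sketch of the interventional data collection in Eq. (2): the
# fairness-improving method `t_fair` (hypothetical name) is applied to a
# randomly sampled alpha-fraction of the data, and the rest is kept as-is.
import numpy as np

def apply_with_ratio(data, t_fair, alpha, seed=0):
    """data: NumPy array of training rows; t_fair: callable implementing the
    fairness-improving method; alpha in [0, 1] is the intervention ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    k = int(alpha * len(data))
    treated = t_fair(data[idx[:k]])      # T_fair(alpha * D)
    untouched = data[idx[k:]]            # D - alpha * D
    return np.concatenate([treated, untouched], axis=0)   # D'
```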
Alg. 1 below summarizes how, given the learned causal graph, we identify the causes of a trade-off between two metrics \(X\) and \(Y\) when a fairness-improving method \(T\) is applied.

```
Algorithm 1: Identifying the causes of a trade-off
Input:  Causal graph G, fairness-improving method T, two metrics X and Y
Output: Trade-off causes list C
1   C <- {}
2   ATE^T_X <- E[X | do(T=1)] - E[X | do(T=0)]
3   ATE^T_Y <- E[Y | do(T=1)] - E[Y | do(T=0)]
4   x_{T=0} <- E[X | T=0]
5   y_{T=0} <- E[Y | T=0]
6   if sign(x_{T=0}, ATE^T_X) = sign(y_{T=0}, ATE^T_Y) then
7       return ⊥    // Terminate algorithm in the absence of trade-offs.
8   if X causes Y on G then
9       x_{T=1} <- E[X | T=1]
10      ATE^X_Y <- E[Y | do(X=x_{T=1})] - E[Y | do(X=x_{T=0})]
11      if sign(x_{T=0}, ATE^T_X) != sign(y_{T=0}, ATE^X_Y) then
12          C <- C ∪ {X}
13  else if Y causes X on G then
14      y_{T=1} <- E[Y | T=1]
15      ATE^Y_X <- E[X | do(Y=y_{T=1})] - E[X | do(Y=y_{T=0})]
16      if sign(y_{T=0}, ATE^T_Y) != sign(x_{T=0}, ATE^Y_X) then
17          C <- C ∪ {Y}
18  A <- CommonAncestors(X, Y, G)
19  foreach A_i in A do
20      a_{T=0} <- E[A_i | T=0];  a_{T=1} <- E[A_i | T=1]
21      ATE^{A_i}_X <- E[X | do(A_i=a_{T=1})] - E[X | do(A_i=a_{T=0})]
22      ATE^{A_i}_Y <- E[Y | do(A_i=a_{T=1})] - E[Y | do(A_i=a_{T=0})]
23      if sign(x_{T=0}, ATE^{A_i}_X) != sign(y_{T=0}, ATE^{A_i}_Y) then C <- C ∪ {A_i}
24  end foreach
25  return C
```
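The conditional expectations and do-queries in Alg. 1 are estimated from the collected data. As described in Sec. II-B, we rely on double machine learning for this step; the snippet below is only a rough, self-contained sketch of the residual-on-residual DML idea, under the simplifying assumption of a partially linear outcome model and with gradient boosting standing in for the two arbitrary nuisance learners (the actual implementation may differ).

```python
# A rough residual-on-residual sketch of double machine learning (Sec. II-B),
# assuming a partially linear model: outcome = theta * treatment + g(confounders) + noise.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

def dml_effect(treatment, outcome, confounders, n_splits=5):
    """Estimate the effect of `treatment` on `outcome` while controlling for
    `confounders`; all inputs are NumPy arrays (confounders is 2-D)."""
    # Cross-fitted nuisance predictions of treatment and outcome from the confounders.
    t_hat = cross_val_predict(GradientBoostingRegressor(), confounders, treatment, cv=n_splits)
    y_hat = cross_val_predict(GradientBoostingRegressor(), confounders, outcome, cv=n_splits)
    # Remove the confounders' influence, then regress residual on residual.
    t_res, y_res = treatment - t_hat, outcome - y_hat
    return float(np.dot(t_res, y_res) / np.dot(t_res, t_res))
```

Under this simplified model, the returned coefficient is the per-unit effect of the treatment, so the ATE between two treatment values \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) in Def. 2 is obtained by scaling it with \(\mathbf{x}_{1}-\mathbf{x}_{2}\).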
Alg. 1 outputs a list of identified causes for the trade-off between \(X\) and \(Y\) (line 25), where each cause (which can be \(X\) or \(Y\) itself) corresponds to one node on the causal graph. Following the above trade-off definition and the usage of \(T=0\), we note that \(T=1\) in Alg. 1 indicates that the fairness-improving method \(T\) is applied. When provided with two metrics \(X\) and \(Y\), Alg. 1 initially checks for the existence of a trade-off between them, based on the trade-off definition just presented (lines 2-7). If a trade-off is present, the algorithm proceeds. Subsequently, if \(X\) is a cause of \(Y\) (i.e., there exists a path from \(X\) to \(Y\) in the causal graph \(G\)), Alg. 1 computes the ATE of \(X\) on \(Y\) (lines 9-10) and verifies whether the trade-off is caused by \(X\) (lines 11-12). Likewise, if \(Y\) is a cause of \(X\), Alg. 1 repeats the process with reversed roles (lines 13-17). Finally, Alg. 1 queries the causal graph to identify the potential causes of the trade-off, which are the common ancestors of \(X\) and \(Y\) (line 18). For each potential cause (line 19), Alg. 1 calculates its ATE on both \(X\) and \(Y\) (lines 21-22). If the ATEs imply a trade-off, the potential cause is designated as a cause (line 23).

## IV Experiment Setup

Our study is implemented in Python with roughly 2.8K lines of code. All experiments are launched on one AMD Ryzen Threadripper 3970X CPU and one NVIDIA GeForce RTX 3090 GPU.

### _Datasets & Model_

**Dataset.** Our experiments are conducted on three real-world datasets: Adult [16], COMPAS [17], and German [18]. These datasets are widely used for fairness research [7, 8, 9, 24, 43]. Table I shows the information of these datasets. Each of them has two sensitive attributes, which enables analyzing the trade-offs between the fairness of multiple sensitive attributes.

**Model Training.** Following Zhang et al. [24], we use a feed-forward neural network (FFNN) with five hidden layers.
To adjust the model's learning capacity, we control its size with a variable, _model width_. The default value of this variable is \(4\), so the hidden layers of the model contain \(4\times 16=64\), \(4\times 8=32\), \(4\times 4=16\), \(4\times 2=8\), and \(4\times 1=4\) neurons, respectively. For each dataset presented in Table I, we split the data into training and test sets with a ratio of 7:3. All trained models possess comparable performance [4, 8, 9, 24]. In particular, we achieve 84.7% accuracy on the Adult Income dataset, 67.4% accuracy on the COMPAS dataset, and 72.1% accuracy on the German Credit dataset. We also clarify that this model architecture is sufficient for our experiments, as all three datasets contain a relatively small number of features (see Table I).

### _Fairness Improving Methods_

Table II presents the fairness-improving methods used in our experiments. It is notable that we have opted for significantly more pre-processing methods than the other categories. This decision is influenced by the trend in the software engineering community and the greater compatibility offered by pre-processing methods. In particular, we surveyed top-tier conferences/journals in the software engineering community and found that pre-processing methods are generally dominant in this line of research. Moreover, these methods impose no constraints on the model and have no impact on its output. Conversely, some in-processing methods are designed for specific models, such as FairNeuron [10], which only works on particular DNNs provided by the authors. This makes in-processing less compatible and impedes a fair comparison with other methods. Post-processing methods usually alter the model's output, invalidating the prediction probabilities and prohibiting robustness-related analyses such as adversarial attacks. That said, as a comprehensive study, we still select three representative methods from each of the in-processing and post-processing categories.

### _Metrics_

This paper employs a wide range of metrics to evaluate model performance, fairness (both individual and group), and robustness. We present the metrics used in our experiments to ensure clarity. For detailed definitions of each metric, interested readers may refer to the cited references. Except for the Causal Discrimination Score (CDS), all fairness metrics are computed by the widely-used AIF360 [50]. We implemented the CDS metric ourselves. For robustness, we evaluate the model from two perspectives: adversarial attacks [47, 48] and membership inference attacks [49]. For the adversarial attacks, we use two standard approaches, the Fast Gradient Sign Method (FGSM) [47] and Projected Gradient Descent (PGD) [48].
Here, the success rate is defined as the ratio of the number of adversarial examples that successfully fool the model to the total number of adversarial examples. In our experiments, the FGSM/PGD implementation provided in Torchattacks [51] is used. For the membership inference attack, we use two popular methods: the rule-based and black-box methods. The rule-based method assumes that a sample is a member if the model correctly predicts its label; otherwise, the sample is a non-member. The black-box method trains a model to predict whether a sample is a member or not. Both methods are implemented with ART [52].

\begin{table} \begin{tabular}{l|l} \hline **Category** & **Name** \\ \hline Pre-processing (6) & Reweighing [20], Disparate Impact Remover (DIR) [23], Fairway [7], FairPass [8], FairMask [4], LTDD [9] \\ \hline In-processing (3) & Adversarial Debiasing (AD) [5], Prejudice Remover (PR) [21], Exponentiated Gradient Reduction (EGR) [60] \\ \hline Post-processing (3) & Reject Option Classification (ROC) [22], Equalized Odds (EO) [44], Calibrated Equalized Odds (CEO) [3] \\ \hline \end{tabular} \end{table} TABLE II: Fairness-improving methods.

\begin{table} \begin{tabular}{l|c|c|c|c} \hline **Dataset** & **Size** & **Favorable Class** & **Sensitive Attribute** & **Privileged Group** \\ \hline Adult [16] & 48,842\(\times\)12 & income\(>\)50K & sex & sex=Male \\ & & & race & race=White \\ \hline COMPAS [17] & 7,214\(\times\)11 & no recidivism & sex & sex=Female \\ & & & race & race=Caucasian \\ \hline German [18] & 1,000\(\times\)21 & good credit & sex & sex=male \\ & & & age & age\(>\)30 \\ \hline \end{tabular} \end{table} TABLE I: Dataset information.

\begin{table} \begin{tabular}{l|l} \hline **Category** & **Name** \\ \hline Performance (2) & Accuracy (Acc) [45], F1 score (F1) [45] \\ \hline Group Fairness (3) & Disparate Impact (DI) [45], Statistical Parity Difference (SPD) [45], Average Odds Difference (AOD) [45] \\ \hline Individual Fairness (3) & Consistency (Cons) [46], Theil Index (TI) [46], Causal Discrimination Score (CDS) [22] \\ \hline Robustness (4) & FGSM's Success Rate (FGSM) [47], PGD's Success Rate (PGD) [48], Rule-Based Membership Inference's Accuracy (Rule) [49], Black-Box Membership Inference's Accuracy (Bbox) [49] \\ \hline \end{tabular} \end{table} TABLE III: Metrics information.

Although users are typically only interested in metrics measured on the test set, we also measure dataset properties (e.g., _DI_ of the dataset) and metrics on the training set (e.g., _SPD_ of the model's predictions on the training set). We regard these metrics as intermediate nodes, since, for example, pre-processing methods alter the dataset's properties, causing a change in the model's prediction on the training set, which in turn causes a change in the model's prediction on the test set.
Therefore, these mediators contribute to causal graph learning. In addition, we measure fairness metrics for two sensitive attributes, respectively, to analyze the trade-off between multiple sensitive attributes. Overall, the causal graph contains 46 nodes. ## V Pilot Study on Causal Graph Quality Before using the learned causal graph to answer RQs (Sec. VI), we study their accuracy through a pilot study. This section consists of two pilot tasks: graph comparison and accuracy verification. In the first task, we compare the six learned causal graphs (three datasets \(\times\) two sensitive attributes in each dataset) to find their similarities and differences. In the second task, we conduct a human evaluation and a quantitative analysis to verify the accuracy of the learned causal graphs. ### _Graph Comparison_ From Fig. 2(a), we can see that the overlap across all graphs is small. These six graphs reach consensus on only 14 edges, while the average number of edges per graph is 138. According to previous research [40], this enormous distinction is a common phenomenon in causal discovery, compelling us to learn six graphs instead of one in this study. We clarify that the relationships among variables are typically more complex than we believe, as there is no simple equation to characterize them (e.g., the relationship between model loss and SPD score). Therefore, the causal graph may change considerably if the dataset, model architecture, or even the sensitive attribute addressed in this study is altered. In fact, as expected, when we retain the same dataset and model architecture, we can observe that the overlap increases. In Fig. 2(b), (c), and (d), the overlaps between two graphs range from 30% to 50%. Despite the intuitive observation, we clarify that the small overlaps between the learned causal graphs, as reflected in Fig. 2, does not imply that there is no common pattern across those graphs. For instance, we can observe that the right parts of all graphs are "empty" (colored in dark purple), signifying that nodes corresponding to those right parts lack parents. This is because those nodes represent the fairness-improving methods. As interventional variables, they are exogenous nodes in the graph. Based on this kind of observation, we conclude a high-level common pattern across all graphs, as shown in Fig. 3. This pattern is consistent with our expectations, such that the pre-processing methods cause the change in the data, which subsequently affect the model performance. Overall, we view that the pilot study at this step demonstrates the high accuracy of the learned causal graphs. ### _Accuracy Verification_ As the study basis, the accuracy of the learned causal graphs is essential to the effectiveness of the entire pipeline. Because of the absence of ground truth, we propose a human evaluation to assess the quality and accuracy of learned causal graphs. In particular, we invite six experts in the field of software engineering (all of whom are Ph.D. students with extensive experience in fairness or ML) to evaluate the causal graphs. Since there are 46 nodes in each graph, indicating high complexity and a large number of edges, we presume that it is impractical to request that experts construct causal graphs from scratch. Instead, the knowledge and experience of experts are more suitable for validating the causal graphs learned by the causal discovery algorithm. 
Specifically, for each learned causal graph, we randomly sample three subgraphs (each containing 15 nodes out of 46 nodes in total) and present them to the experts for evaluation. Experts are requested to mark any edges on the subgraph they disagree with and to note any edges not present in the subgraph that should be included. Then, we gather this feedback and compute an _error rate_ (i.e., the rate of incorrectly discovered edges by DiBS) and a _negative predictive value_ (NPV; denoting the proportion of absent edges in the learned graph). The results are shown in Table IV. We can see that the error rate is less than 10% in the vast majority of instances (with only one exception that trivially exceeds 10% by 0.19%), where the lowest value is only 1.78%. The results for NPV are even more impressive. In all cases, the NPV is less than 7%, with a minimum value of 2.30%. This result indicates that the learned causal graphs are highly accurate and can serve as the foundation for further analysis.

\begin{table} \begin{tabular}{l|c||c c c c} \hline \hline & & nodes & edges & error rate & NPV \\ \hline \multirow{2}{*}{Adult} & sex & 46 & 141 & 1.78\% & 6.84\% \\ & race & 46 & 136 & 4.77\% & 4.15\% \\ \hline \multirow{2}{*}{COMPAS} & sex & 46 & 130 & 10.19\% & 2.30\% \\ & race & 46 & 133 & 7.90\% & 3.39\% \\ \hline \multirow{2}{*}{German} & sex & 46 & 141 & 6.71\% & 3.69\% \\ & age & 46 & 146 & 5.73\% & 2.58\% \\ \hline \hline \end{tabular} \end{table} TABLE IV: Human evaluation.

Fig. 2: Overlap between the learned causal graph. The brighter the color, the higher the overlap.

Fig. 3: High-level common patterns across all graphs.

In addition to the human evaluation, we also report the statistics of the learned causal graphs and their ablated versions in Table V, where "Full ver." represents that all fairness-improving methods are included, and "w/o XX" indicates that some of the fairness-improving methods are excluded (e.g., "w/o pre" means pre-processing methods are excluded). We use the normalized Bayesian Gaussian equivalent (BGe) score [53, 54], whose implementation is provided by DiBS [33], to measure the quality of the learned causal graph. As a standard metric in causal discovery, BGe scores reflect the data-fitness of the graph. The higher the BGe score, the better the learned graph. Note that BGe scores are data-specific, i.e., they cannot be compared across different scenarios, so we only compare the BGe scores on the same row in Table V. The results indicate that the application of fairness-improving techniques has a substantial impact on the quality of the learned causal graphs. Furthermore, each category of fairness-improving methods is essential to the quality of the learned causal graph, as removing any category of fairness-improving methods will result in a significant drop in the BGe scores.

## VI Evaluation

We now investigate the mechanisms underlying various trade-off phenomena related to fairness, including the trade-off between fairness and model performance (**RQ1**), the trade-off between multiple sensitive attributes (**RQ2**), and the trade-off between fairness and model robustness (**RQ3**). In each RQ, we first present the discovered trade-off (with counts of occurrences) and corresponding causes that are revealed by Alg. 1 using the learned graphs. Then, we analyze the causes and discuss the implications of the trade-off and their causes.
For the sake of space, metric names are abbreviated as follows: _prefix-metric_, where _prefix_ is either _Tr_ (measured on training data), _Te_ (measured on testing data), or \(D\) (datasets' properties), and _metric_ follows the same naming convention as in Table III. Additionally, _Width_ denotes the width of the neural network, which is the hyperparameter that controls the number of neurons in each layer. Using Tr-SPD as an illustration, it represents "SPD" (Statistical Parity Difference; see Table III) measured on the training set. Note that the term "causes" refers exclusively to the metrics mentioned in Sec. IV-C. All fairness-improving methods are excluded from "causes", as they are undoubtedly viewed as the causes of the trade-offs triggered by themselves. We clarify that the purpose of this study is to analyze the mechanisms underlying the trade-offs, so we concentrate on the intricate relationships among metrics. **Processing Time.** Building each causal graph requires approximately 30 minutes to learn from the data. As for the trade-off analyses (depicted in Alg. 1), we report that the average processing time for analyzing each trade-off is less than 1 minute, the majority of which is spent on computing the ATEs. In studying the following RQs, the processing time of computing ATEs is negligible, typically less than 20 seconds. ### _RQ1: Fairness vs. Model Performance_ As mentioned in Sec. II, there are two types of fairness: group fairness and individual fairness. This section explores the trade-offs between both types of fairness and model performance, i.e., group fairness vs. model performance and individual fairness vs. model performance. Additionally, we also investigate the trade-offs between group fairness and individual fairness, as this is a topic that has been primarily focused in the fairness literature [41, 42, 24]. Fig. 4 reports the observed trade-offs for each scenario in the form of "counts (#)" and "causes". The "counts" column shows the trigger time of trade-offs w.r.t. fairness-improving methods in Table II. For instance, the "counts" of "Acc vs. DI" is \(1\), when "sex" is the addressed sensitive attribute on the "Adult" dataset. This means that there is only one fairness-improving method that triggers the trade-off between Accuracy and DI on Adult. The "causes" column shows the causes of the trade-offs, which are revealed by Alg. 1. For each cause, we also report the "confidence" (i.e., the percentage of the trade-off caused by the metric) in four colors. Additionally, we use a diagonal line when no causes are found. Fig. 4 provides a comprehensive view of the trade-offs between fairness and model performance by listing all observed trade-offs. As the count of the majority of trade-offs is not zero, this table demonstrates that "trade-off" is a common phenomenon in fairness-improving methods. Moreover, the kinds of trade-offs vary considerably across different scenarios. For example, in the case of group fairness vs. performance, we observe that the trade-offs observed on Adult are mainly between SPD and performance metrics (the trade-off between Accuracy and SPD is observed eight times, and the trade-off between F1 and SPD is observed seven times when sex is the addressed sensitive attribute). In contrast, on COMPAS, most trade-offs are observed between AOD and performance metrics. This result suggests that the selection of fairness metrics may have a substantial impact on the number and type of trade-offs that can be observed. 
We presume that the more frequently a metric is observed as a cause, the more important the metric is, as it is more likely to reveal how well the fairness-improving method works with regard to the trade-off, i.e., whether this method achieves a win-win situation. Based on this presumption, we expect to identify the most informative and beneficial metric for the development of fair ML based on the results of our experiments. From Fig. 4, we observe that there is no unified cause for all trade-offs. Instead, the causes of trade-offs vary considerably across different scenarios. Furthermore, the distribution of causes is not uniform. In Fig. 4, the most prevalent causes of trade-offs are metrics measured on the training set (e.g., Tr-TI). This type of metric functions as a cause for trade-offs 190 times, which is far more than other types of causes. In comparison, metrics measured on the test set (e.g., \(\mathtt{Te}\mathtt{-}\mathtt{TI}\)) are only 38 times the cause of trade-offs, and for metrics of datasets' properties, the number is 29. This significant difference motivates us to investigate the distribution of causes further.

\begin{table} \begin{tabular}{l|c||c c c c c} \hline \hline & & Full ver. & w/o pre & w/o in & w/o post & w/o all \\ \hline Adult & sex & 1.00 & 0.56 & 0.64 & 0.77 & 0.00 \\ & race & 1.00 & 0.43 & 0.81 & 0.82 & 0.00 \\ \hline COMPAS & sex & 1.00 & 0.55 & 0.90 & 0.89 & 0.00 \\ & race & 1.00 & 0.48 & 0.89 & 0.88 & 0.00 \\ \hline German & sex & 1.00 & 0.65 & 0.54 & 0.76 & 0.00 \\ & age & 1.00 & 0.60 & 0.67 & 0.81 & 0.00 \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation study results.

Table VI presents the distribution of causes. In this table, we report the frequency of each fairness metric being the cause of the trade-offs listed in Fig. 4.3 The column indicates the phase in which the metric is measured, i.e., "Data" for the dataset's properties, "Train" for the training set, and "Test" for the test set. Note that AOD, TI, and CDS depend on the model's prediction, so they are N/A for the "Data" column. This table reveals that AOD is the most common cause of trade-offs among all group fairness metrics, occurring 30 times. For individual fairness, TI is the most common cause of trade-offs, observed 42 times. Also, individual fairness metrics generally cause trade-offs more frequently than group fairness metrics. Footnote 3: Here, we only report fairness metrics, because the target of this experiment is to identify the most informative and beneficial fairness metrics for the development of fair ML.

**RQ1 Findings:** In this RQ, we have the following two suggestions for users and developers of fair ML. 1. For the users, they should be aware that the results of experiments for fairness-improving methods may be biased by the selection of metrics. Moreover, the metrics that work well for discovering trade-offs in one scenario may not work well in other scenarios. To obtain a more comprehensive and faithful understanding of a fairness-improving method, we recommend that users use sufficient metrics, or our causality analysis-based method, to explore the presumably optimal choice of metrics for their scenarios. 2. For the developers, we suggest that they pay closer attention to the metrics measured on the training set, as they are the most common causes of trade-offs according to our study. Additionally, we recommend AOD among all group fairness metrics and TI among all individual fairness metrics.
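For reference, the group fairness metrics recommended above can be computed directly from model predictions. The following is a minimal NumPy sketch of SPD, DI, and AOD following the standard definitions (binary labels and predictions, the favorable outcome encoded as 1, and \(s=1\) marking the privileged group); in our experiments these metrics are computed with AIF360 [50], so this sketch is for illustration only.

```python
import numpy as np

def group_fairness_metrics(y_true, y_pred, s):
    """SPD, DI, and AOD for binary y_true/y_pred in {0, 1}; s == 1 is the privileged group."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    priv, unpriv = (s == 1), (s == 0)
    rate_p, rate_u = y_pred[priv].mean(), y_pred[unpriv].mean()
    spd = rate_u - rate_p                 # Statistical Parity Difference
    di = rate_u / rate_p                  # Disparate Impact

    def rates(group):
        tpr = y_pred[group & (y_true == 1)].mean()   # true positive rate within the group
        fpr = y_pred[group & (y_true == 0)].mean()   # false positive rate within the group
        return tpr, fpr

    tpr_p, fpr_p = rates(priv)
    tpr_u, fpr_u = rates(unpriv)
    aod = 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))  # Average Odds Difference
    return spd, di, aod
```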
### _RQ2: Multiple Sensitive Attributes_

In this RQ, we investigate the trade-offs between multiple sensitive attributes. For the sake of space, we only report the results of experiments on COMPAS, and present other results on the website [55]. To distinguish metrics measured on different sensitive attributes, we change the abbreviation of metrics to "_prefix-sensitive attribute-metric_". For example, "\(\mathsf{Tr}\)-\(\mathsf{Sex}\)-\(\mathsf{SPD}\)" represents "SPD" measured on the training set with "\(\mathsf{Sex}\)" as the sensitive attribute. In this RQ, only the trade-offs between group fairness metrics for different sensitive attributes are considered, as individual fairness metrics are not related to the sensitive attribute. Fig. 5 presents the results of experiments on COMPAS. The left part of this figure shows the results when the addressed sensitive attribute of the fairness-improving method is set to "\(\mathsf{Sex}\)", and the right part shows the results when the sensitive attribute is set to "\(\mathsf{Race}\)". Although these two parts attain consensus in some cases (e.g., they both agree that the trade-off between "\(\mathsf{Sex}\)-\(\mathsf{DI}\)" and "\(\mathsf{Race}\)-\(\mathsf{DI}\)" is rare), they have several substantial differences. For example, the left part reveals that the trade-off is rare between "\(\mathsf{Sex}\)-\(\mathsf{DI}\)" and "\(\mathsf{Race}\)-\(\mathsf{AOD}\)" (the "counts" is only one), whereas the right part shows that the trade-off is frequent between these two metrics (six counts). To explain this difference, we present the causal graphs of these two metrics in Fig. 6. This figure provides an intuitive explanation for the aforementioned distinction. Clearly, the causal graph of the scenario "COMPAS-Race" is more intricate than that of "COMPAS-Sex". With more common ancestors in the graph, it is more likely that more causes leading to trade-offs will be identified. In addition, Fig. 6 also explains the difference in the identified causes between "COMPAS-Sex" and "COMPAS-Race". In Fig. 6(a), the metric "\(\mathsf{Te}\)-\(\mathsf{Acc}\)" not only has direct causal relations with "\(\mathsf{Sex}\)-\(\mathsf{DI}\)" and "\(\mathsf{Race}\)-\(\mathsf{AOD}\)", but also mediates the effect from "\(\mathsf{Te}\)-\(\mathsf{Race}\)-\(\mathsf{SPD}\)" and "\(\mathsf{D}\)-\(\mathsf{Race}\)-\(\mathsf{DI}\)" to "\(\mathsf{Sex}\)-\(\mathsf{DI}\)" and "\(\mathsf{Race}\)-\(\mathsf{AOD}\)". Therefore, "\(\mathsf{Te}\)-\(\mathsf{Acc}\)" is the cause with full confidence here. In contrast, "\(\mathsf{Te}\)-\(\mathsf{Acc}\)" no longer has a direct causal relation with either "\(\mathsf{Sex}\)-\(\mathsf{DI}\)" or "\(\mathsf{Race}\)-\(\mathsf{AOD}\)" in Fig. 6(b). This explains why its confidence decreases drastically in the "COMPAS-Race" scenario.

**RQ2 Findings:** We observe substantial variation in the patterns of trade-offs between different sensitive attributes, even on the same dataset. Furthermore, we take a pair of causal graphs as an example to explain the difference in detail.

Fig. 5: Trade-offs between multiple sensitive attributes.

Fig. 6: A comparison between trade-offs in different scenarios.

Fig. 7: Trade-offs between fairness and model robustness.

### _RQ3: Fairness vs. Model Robustness_

As essential properties of ML models, both fairness and robustness receive considerable attention in the research community. Although some works [56, 57, 58] examine them simultaneously, no one has systematically studied the trade-offs between them. This RQ investigates the trade-offs between fairness and model robustness. Similar to RQ2, we only report the results of experiments conducted on German in Fig. 7 and present the remaining results on the website [55]. For the abbreviation of robustness metrics, we use "_prefix-robustness metric_" to represent them, where _prefix_ is either "A" (adversarial attack) or "M" (membership inference). Comparing the patterns of trade-offs in Fig. 7 with those in Fig. 4, an apparent distinction is that the hyperparameter "Width" causes many more trade-offs than it does in the same scenarios in Fig. 4 ("German-Sex" and "German-Age"). We interpret this difference as reasonable because the robustness metrics are more sensitive to the model's learning ability and the degree of overfitting. Hence, it is not unexpected to see that performance metrics, including Accuracy and F1 score, also cause more trade-offs in Fig. 7. This result is consistent with research on membership inference [40, 59, 60]. It is evident that trade-offs between fairness and robustness are inevitable, given that fairness-improving techniques typically have a significant impact on the model's performance. Moreover, the large number of observed trade-offs in Fig. 7 further suggests that the trade-offs between fairness and robustness should be taken seriously. Therefore, we suggest that future research on fairness-improving methods should consider this kind of trade-off, and faithfully disclose any potential robustness downgrades. We also clarify that the additional effort required to consider robustness will not be excessive, as Fig. 7 shows that the metrics recommended in RQ1 (AOD among group fairness metrics and TI among individual fairness metrics) are still highly effective for inspecting trade-offs between fairness and robustness.

**RQ3 Findings:** Fairness and robustness involve inevitable trade-offs. We advocate taking robustness metrics into account when designing fairness-improving methods, and we have illustrated that the extra cost is moderate.

## VII Downstream Application

As detailed in Sec. VI, our method delivers a comprehensive and in-depth understanding of the trade-offs among multiple metrics. Naturally, the identified causal graph provides valuable insights into the selection of the optimal fairness-improving method for a given scenario. As a "by-product", in this section, we present a case study that highlights the versatile application of our method in fairness-improving approach selection. The case study is conducted on all datasets mentioned in Table I, specifically examining the interplay of accuracy, SPD, and consistency as key factors to consider in selecting a suitable fairness-improving method. Specifically, we first identify the causes of the trade-offs between the selected metrics. Then, we find the optimal value of each cause using ATE, which constitutes the optimal setting for fairness-improving methods. We compare our method against Adaptive Fairness Improvement (AFI) [24], a state-of-the-art approach, which is tailored for this task and is not designed for trade-off analysis. We report the evaluation results in Table VII. In particular, we find that our method surpasses AFI in almost all scenarios. This is reasonable: AFI can only select a single fairness-improving method, while our method can effectively combine multiple fairness-improving methods using the causal graph.
For example, for the German-Age scenario, we found that the optimal combination of fairness-improving methods was to use both Disparate Impact Remover (DIR) and Prejudice Remover (PR), with their respective ratios set to 0.6 and 0.2. However, the limitation of AFI results in reduced effectiveness.

\begin{table} \begin{tabular}{c|c||c|c|c|c|c|c} \hline & & \multicolumn{2}{c|}{Adult} & \multicolumn{2}{c|}{COMPAS} & \multicolumn{2}{c}{German} \\ \cline{3-8} & & Sex & Race & Sex & Race & Sex & Age \\ \hline \multirow{3}{*}{Acc} & w/o FI & .8500 & .8489 & .6727 & .6740 & .7233 & .7197 \\ & AFI & -.0101 & -.0167 & -.0032 & -.0050 & -.0046 & -.0050 \\ & Ours & -.0138 & -.0205 & -.0098 & +.0072 & +.0100 & +.0130 \\ \hline \multirow{3}{*}{SPD} & w/o FI & .1730 & .0974 & .1575 & .1609 & .0685 & .0915 \\ & AFI & -.0608 & -.0299 & -.1161 & -.0926 & -.0228 & -.0321 \\ & Ours & -.1718 & .0936 & .1488 & -.1397 & -.0658 & -.0548 \\ \hline \multirow{3}{*}{Cons} & w/o FI & .9600 & .9593 & .9080 & .9089 & .9791 & .8016 \\ & AFI & -.0236 & -.0101 & -.0074 & -.0061 & -.0013 & -.0100 \\ & Ours & +.0179 & +.0050 & +.0081 & +.0151 & +.0215 & +.0364 \\ \hline \end{tabular} \end{table} TABLE VII: Comparison of AFI vs. our method on fairness-improving method selection. Each cell of AFI and Ours reports the difference compared to the default model (w/o FI). Green and brown indicate an improvement and a downgrade, respectively.

## VIII Related Work

**Trade-off Study in ML Fairness.** Prior works have investigated the trade-offs associated with fairness. The analyzed trade-offs include fairness vs. accuracy [41, 61, 62, 63, 64, 42], group fairness vs. individual fairness [13, 14], and fairness vs. robustness [56, 57, 58]. Typically, these studies have three primary goals: (1) establishing the theoretical existence of trade-offs, (2) designing methods to achieve optimal trade-offs, and (3) identifying the best trade-off through empirical comparisons. No one has, however, systematically analyzed the influence of fairness-improving methods over ML pipelines, as measured by the metrics in Sec. IV-C, to cast light on the causes of trade-offs.

**Causality Analysis in SE.** Recent years have witnessed a growing interest in applying causality analysis to SE. The high interpretability of causal graphs makes them appealing for a variety of SE problems, including software configuration [65], root cause analysis [66, 67, 68], and deep learning testing/repairing [24, 69]. We have compared with AFI in Sec. VII to highlight our distinct focus and superior performance.

## IX Threats to Validity

In terms of internal validity, we employ DiBS, a state-of-the-art causal discovery algorithm, to infer causal graphs. Although DiBS outperforms other algorithms [33], it may not always identify the true causal graphs. To mitigate this threat, we use human evaluation in our pilot study to validate the derived causal graphs. Regarding external validity, our study focuses on neural networks, possibly limiting generalizability to other ML models. However, given the popularity of neural networks and their strong compatibility with numerous fairness-improving methods, we argue that our results hold considerable value. To further alleviate this threat, we conduct experiments across various network architectures and datasets.

## X Conclusion

This research analyzes the trade-offs among multiple factors in fair ML via causality analysis. We propose a set of design principles and optimizations to facilitate an effective usage of causality analysis in this field.
With extensive empirical analysis, we establish a comprehensive understanding of the interactions among fairness, performance, and robustness.
2305.12349
PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation
The eXtreme Multi-label Classification~(XMC) problem seeks to find relevant labels from an exceptionally large label space. Most of the existing XMC learners focus on the extraction of semantic features from input query text. However, conventional XMC studies usually neglect the side information of instances and labels, which can be of use in many real-world applications such as recommendation systems and e-commerce product search. We propose Predicted Instance Neighborhood Aggregation (PINA), a data enhancement method for the general XMC problem that leverages beneficial side information. Unlike most existing XMC frameworks that treat labels and input instances as featureless indicators and independent entries, PINA extracts information from the label metadata and the correlations among training instances. Extensive experimental results demonstrate the consistent gain of PINA on various XMC tasks compared to the state-of-the-art methods: PINA offers a gain in accuracy compared to standard XR-Transformers on five public benchmark datasets. Moreover, PINA achieves a $\sim 5\%$ gain in accuracy on the largest dataset LF-AmazonTitles-1.3M. Our implementation is publicly available.
Eli Chien, Jiong Zhang, Cho-Jui Hsieh, Jyun-Yu Jiang, Wei-Cheng Chang, Olgica Milenkovic, Hsiang-Fu Yu
2023-05-21T05:00:40Z
http://arxiv.org/abs/2305.12349v1
PINA: Leveraging Side Information in eXtreme Multi-label Classification via Predicted Instance Neighborhood Aggregation ###### Abstract The eXtreme Multi-label Classification (XMC) problem seeks to find relevant labels from an exceptionally large label space. Most of the existing XMC learners focus on the extraction of semantic features from input query text. However, conventional XMC studies usually neglect the side information of instances and labels, which can be of use in many real-world applications such as recommendation systems and e-commerce product search. We propose Predicted Instance Neighborhood Aggregation (PINA), a data enhancement method for the general XMC problem that leverages beneficial side information. Unlike most existing XMC frameworks that treat labels and input instances as featureless indicators and independent entries, PINA extracts information from the label metadata and the correlations among training instances. Extensive experimental results demonstrate the consistent gain of PINA on various XMC tasks compared to the state-of-the-art methods: PINA offers a gain in accuracy compared to standard XR-Transformers on five public benchmark datasets. Moreover, PINA achieves a \(\sim 5\%\) gain in accuracy on the largest dataset LF-AmazonTitles-1.3M. Our implementation is publicly available [https://github.com/amzn/pecos/tree/mainline/examples/pina](https://github.com/amzn/pecos/tree/mainline/examples/pina). ## 1 Introduction Many real-world applications, such as e-commerce dynamic search advertising (Prabhu and Varma, 2014; Prabhu et al., 2018), semantic matching (Chang et al., 2021), and open-domain question answering (Chang et al., 2020; Lee et al., 2019), can be formulated as eXtreme Multi-label Classification (XMC) problems. Given a text input, XMC aims to predict relevant labels from a label collection of extremely large size \(L\). The scale of \(L\), which is often of the order of millions, makes designing accurate and efficient XMC models arduous. Despite the progress in tackling the XMC problem, most XMC solvers still only take instance features as inputs for prediction. Even though side information, such as label Figure 1: Illustration of two types of side information, including (1) label metadata and (2) instance correlation signals, based on an example XMC task that recommends relevant keywords (labels) for input products (instances) in E-commerce. Specifically, the text descriptions of keywords serve as label metadata while customer behaviors collectively provide instance correlation signals. metadata (i.e., label text) and instance correlation signals, may be highly beneficial for the learning task, it cannot be leveraged directly. Taking product keyword recommendation as an example, Figure 1 illustrates two types of side information. For label metadata, the standard XMC formulation treats labels as identifiers and ignores their text descriptions (You et al., 2019; Babbar and Scholkopf, 2019). More precisely, while recent XMC solutions such as XR-Linear (Yu et al., 2022) and XR-Transformer (Zhang et al., 2021) have exploited the correlations among labels to generate label partitions or hierarchical label trees, they do not use label text features. Instead, they construct label embeddings via aggregation of positive instance features. 
Recent works (Mittal et al., 2021; Dahiya et al., 2021) have also demonstrated that using label text features is beneficial for the XMC problem, leading to state-of-the-art results on datasets containing label text information. Moreover, instance correlation signals based on the collective behaviors of customers are also ignored in the standard XMC formulation. For example, the co-purchase signal from Amazon is now used as a benchmark graph dataset for node classification problems (Chiang et al., 2019; Hu et al., 2020). Beyond e-commerce, the idea of leveraging side information is universal and can be applied to XMC tasks in diverse fields, such as disease descriptions and cross-disease statistics in medical diagnosis (Almagro et al., 2020). Hence, it is of critical importance and expected to be widely impactful to enable side information inclusion into XMC models and thereby enhance prediction quality. In the recent graph learning literature, Chien et al. 2021 have bridged the gap between XMC and neighborhood prediction. Intuitively, the XMC label matrix can be described as a biadjacency matrix of a bipartite graph connecting instances and labels. As shown in Figure 3, the XMC task leads to the problem of predicting the neighborhood of each instance, which is termed the neighborhood prediction task (Chien et al., 2021). This work clearly illustrates the point that graph learning techniques can be useful in addressing XMC tasks. One standard operation to enhance the performance of graph learning methods is graph convolution (Kipf and Welling, 2017), or message passing (Gilmer et al., 2017). The idea is to aggregate the neighborhood features, which implicitly encode the graph topological information. The graph convolution operation has by now been successfully used in various graph learning methods, including generalized PageRank (Li et al., 2019), Graph Neural Networks (GNNs) (Hamilton et al., 2017; Velickovic et al., 2018; Chien et al., 2020; 2021) and hypergraph learning (Chien et al., 2019; 2021; 2021). This work asserts that aggregating neighborhood features can also be beneficial for XMC. Motivated by the connection between XMC and neighborhood prediction, we propose Predicted Instance Neighborhood Aggregation, PINA, to allow XMC methods such as XR-Transformers to leverage the aforementioned side information in a data enhancement manner. Our contributions can be summarized as follows: 1. We introduce PINA, a data enhancement method that allows XMC models to leverage two types of side information, label metadata and instance correlation signal in a unified manner. 2. On five public benchmark datasets where the side information is label metadata, we compare PINA with the state-of-the-art XMC model, XR-Transformer. PINA consistently beats classical XR-Transformers and achieves roughly a \(5\%\) gain in accuracy on the largest dataset LF-AmazonTitles-1.3M. Moreover, XR-Transformer enhanced by the PINA technique is shown to outperform all previous published results. 3. We test PINA on the industrial scale proprietary dataset containing millions of instances and labels, where the side information is of the form of instance correlation signals. PINA provides a \(3.5\%\) relative improvement in accuracy compared to the baseline XMC method. In summary, our approach consistently improves XR-Transformer on public benchmark datasets (Bhatia et al., 2016) when the side information is label text. 
We achieve new state-of-the-art results on the public benchmark datasets with a significant gain, and also observe performance gains brought forth by PINA on proprietary datasets when the side information is in the form of instance correlation signals. ## 2 Related Work ### Extreme multi-label classification Pioneering works on XMC adopt static input text representations and focus on the handling of extremely large label space. Treating labels as being binary, OVA architectures such as DiSMEC (Babbar and Scholkopf, 2017) and PPDSparse (Yen et al., 2017) require carefully designed parallel training algorithms to handle an enormously large number of labels. Even though these methods encourage model sparsity through weight truncation, the linear inference time with respect to the output space would still make them impractical to handle millions of labels. To address this issue, some works have focused on shortlisting candidate labels to achieve sub-linear training and inference complexity. One line of study focuses on partitioning label spaces. Tree-based methods (Choromanska and Langford, 2015; Daume III et al., 2017) divide the label space recursively into hierarchical label trees and therefore come with logarithmic inference times. More recent works such as Parabel (Prabhu et al., 2018), Xtext (Wydmuch et al., 2018), Bonsai (Khandagale et al., 2020), NapkinXC (Jasinska-Kobus et al., 2020; 2021) and XR-Linear (Yu et al., 2022) use the tree-partitioning architecture. Approximate nearest neighbor search (ANNS) is another method that adopts shortlisting the candidate labels in XMC. Instead of restricting the search space by partitioning, methods like AnnexML (Tagami, 2017), SLICE (Jain et al., 2019) and GLaS (Guo et al., 2019) accelerate the search in the original label space with pre-build label indexing (Malkov and Yashunin, 2020) or product quantization (Guo et al., 2020). ### Deep learning based methods Recent works on deep learning based XMC models adopt different neural network architectures to extract semantic features and have demonstrated better performance than methods using statistical features only, such as bag-of-words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF). Methods that use shallow networks such as XML-CNN (Liu et al., 2017) and AttentionXML (You et al., 2019) employ CNN and BiLSTM to directly extract semantic representation from input text. On the other hand, token embedding based methods (Medini et al., 2019; Dahiya et al., 2021; Mittal et al., 2021; Saini et al., 2021; Mittal et al., 2021) use shallow networks to combine pre-trained token embeddings into input sequence representations. Despite having limited capacity to capture semantic meanings, token embedding based methods still offer good performance on short-text applications (search queries, product titles, and document keywords). With the development of Transformer models (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019), new state of the art results have been established on XMC benchmarks through fine-tuning the Transformer encoders on the downstream XMC tasks (Chang et al., 2020; Ye et al., 2020; Jiang et al., 2021). X-Transformer (Chang et al., 2020) and LightXML (Jiang et al., 2021) fine-tune the transformer encoders on a simplified XMC problem, where each new label is induced by a cluster of the original labels. XR-Transformer (Zhang et al., 2021) adopts the tree-based label partitioning approach and fine-tunes the transformer encoder on multi-resolution objectives. 
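To make the sub-linear inference claim behind these tree-based and HLT methods concrete, the following is a minimal sketch of beam search over a hierarchical label tree. It is an illustration only, not the Parabel or XR-Linear implementation; the `tree` layout, the function name, and the per-cluster weight matrices are our own simplifying assumptions.

```python
import numpy as np

def beam_search_hlt(x, tree, beam_size=10):
    """Illustrative beam search over a hierarchical label tree (HLT).

    `tree` is a list of levels; each level maps a cluster id to a pair
    (W, children), where W scores that cluster's children given the feature
    vector `x`, and `children` are the child cluster (or leaf label) ids.
    Names and data layout are hypothetical; the point is that only children
    of surviving clusters are scored, so the cost grows with the tree depth
    (roughly O(log L)) rather than with the label count L.
    """
    beam = [(0.0, 0)]                              # (score, cluster id); start at the root
    for level in tree:
        candidates = []
        for score, cid in beam:
            W, children = level[cid]               # W has shape (len(children), d)
            for s, child in zip(W @ x, children):
                candidates.append((score + s, child))
        candidates.sort(key=lambda t: -t[0])       # keep only the best clusters
        beam = candidates[:beam_size]
    return beam                                    # top (score, label id) pairs at the leaves
```

The beam width trades recall against inference cost: a wider beam scores more clusters per level, yet still never touches the full label space.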
### XMC with label features While most traditional XMC architectures treat labels as featureless identifiers, a recent study shows that taking label textual descriptions into consideration enhances the performance of XMC models (Dahiya et al., 2021). Following this line of thought, methods such as GalaXC (Saini et al., 2021), ECLARE (Mittal et al., 2021) and SiameseXML (Dahiya et al., 2021) were put forward. While these methods obtain reasonable performance improvements by using label text, especially when input texts are short, most of them assume that the instance-label bipartite graph is homophilic (i.e., similar nodes are likely to have edges) because they rely on bi-encoders for candidate set retrieval. While this is true for most XMC benchmark datasets, it does not hold in many real-world applications. For instance, complementary product recommendation in e-commerce would prefer to recommend accessories to a user who just bought a smartphone rather than yet another smartphone. Also, none of these works considers instance correlation signals the way our work does. Figure 2: Illustration of the simplified XR-Transformer model. First, the model uses statistical text features (i.e., BoW or TF-IDF) and training labels to build the hierarchical label tree (HLT). Note that each layer of the HLT itself represents an XMC problem. Second, it trains a transformer \(\Phi_{\text{data}}(x)\) from the root to the leaves in a recursive manner. Third, it concatenates the statistical text feature \(\Phi_{\text{stat}}(x)\) and transformer feature \(\Phi_{\text{data}}(x)\) for learning linear one-versus-all (OVA) classifiers recursively. ## 3 Preliminaries Assume that we are given a training set \(\{x_{i},\mathbf{y}_{i}\}_{i=1}^{N}\) where \(x_{i}\in\mathcal{D}\) is the \(i^{th}\) input instance text feature and \(\mathbf{y}_{i}\in\{0,1\}^{L}\) is the one-hot label vector with \(y_{i,l}=1\) indicating that label \(l\) is relevant to instance \(i\). The standard goal of XMC is to learn a function \(f:\mathcal{D}\times[L]\mapsto\mathbb{R}\), such that \(f(x,l)\) indicates the "mutual relevance" between \(x\) and \(l\). The standard way to compute this relevance is to use the one-versus-all (OVA) strategy: \[f(x,l)=\mathbf{w}_{l}^{T}\Phi(x);\;l\in[L], \tag{1}\] where \(\mathbf{W}=[\mathbf{w}_{1},\dots,\mathbf{w}_{L}]\in\mathbb{R}^{d\times L}\) are learnable weight vectors and \(\Phi:\mathcal{D}\mapsto\mathbb{R}^{d}\) is the text vectorizer. The function \(\Phi(\cdot)\) can be obtained either by statistical methods such as BoW and TF-IDF models, or through the use of deep learning models with learnable weights. In practice, directly training with OVA is prohibitive when \(L\) is large. This is due not only to the underlying \(O(L)\) time complexity, but also to severe label sparsity issues inherent to long-tailed label distributions (Chang et al., 2020; Zhang et al., 2021). **XR-Transformers.** We start by briefly introducing the state-of-the-art XMC method: XR-Transformers (Zhang et al., 2021). A simplified illustration of it is given in Figure 2. The first step is to leverage the statistical text vectorizer \(\Phi_{\text{stat}}(x)\) and training labels \(\mathbf{y}\) to construct the label representation \(\mathbf{Z}\in\mathbb{R}^{L\times d}\) (which should not be confused with the label text feature \(\{z_{l}\}_{l=1}^{L}\)). 
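Before continuing with the XR-Transformer pipeline, a minimal sketch of the OVA scoring in (1) makes the \(O(L)\) bottleneck concrete. The dimensions below are placeholders, and the dense weight matrix is purely illustrative.

```python
import numpy as np

# Sketch of Eq. (1): f(x, l) = w_l^T Phi(x), evaluated for every label l.
d, L = 64, 50_000                                  # placeholder sizes
W = np.random.randn(d, L).astype(np.float32)       # one weight vector per label
phi_x = np.random.randn(d).astype(np.float32)      # vectorized query Phi(x)

scores = phi_x @ W                                 # all L scores: O(L * d) work per query
top10 = np.argsort(-scores)[:10]                   # ranking the full label space
```

Every query touches all \(L\) columns of \(\mathbf{W}\); avoiding exactly this cost is what the hierarchical label tree in XR-Transformer is for.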
XR-Transformers adopt the Predicted Instance Feature Aggregation (PIFA) strategy for label representations, which is further used to construct the hierarchical label tree (HLT) via hierarchical \(k\)-means clustering, \[\text{(PIFA)}\quad\mathbf{Z}_{l}=\frac{\sum_{i:y_{il}=1}\Phi_{\text{stat}}(x_{ i})}{\|\sum_{i:y_{il}=1}\Phi_{\text{stat}}(x_{i})\|}\;\forall l\in[L]. \tag{2}\] Note that each level of the HLT gives rise to an XMC problem. The second step is to train the Transformer models \(\Phi_{\text{dmn}}\), such as BERT (Devlin et al., 2019), recursively from root to leaves. In the third step, the XR-Transformer concatenates both the statistical feature \(\Phi_{\text{stat}}(x)\) and Transformer embedding \(\Phi_{\text{dmn}}(x)\) to arrive at the final feature \(\Phi_{\text{cat}}(x)\). It also trains linear OVA classifiers (1) based on HLT recursively to generate the final prediction. Through the use of HLT, one can not only reduce the time complexity from \(O(L)\) to \(O(\log(L))\), but also alleviate the label sparsity issue (Zhang et al., 2021). **XMC with label text.** Consider the scenario where the label text \(\{z_{l}\}_{l=1}^{L}\) is available as side-information, where \(z_{l}\in\mathcal{D}\) is the label text of label \(l\). One can observe that standard XMC approaches, such as XR-Transformers, cannot leverage this information directly. While it is possible to use the label text to improve the construction of the HLT, the learnable text vectorizer \(\Phi_{\text{dmn}}\) itself cannot leverage the label text information. PINA, as we show, enables XMC learners to leverage label text information in a data enhancement manner, where the learnable text vectorizer \(\Phi_{\text{dmn}}\) can also perform training with the label texts. **XMC with instance correlation signal.** In keyword recommendation problems, researchers aim to predict the most relevant keywords for each product. Keyword recommendation is an example of an XMC problem. In this scenario, instances (products) correlation signals are also available from the customer behavioral data, such as those pertaining to the "frequently bought together" category. This type of side information provides us with beneficial information about the instances. Unfortunately, it is not clear how to leverage the instances correlation signals within the standard XMC problem solvers. PINA makes use of this side information in a data enhancement way similar to what is done with the label text. ### The XMC problem and the neighborhood prediction problem To understand the key idea behind our approach, we have to describe the relationship between the XMC problem and the neighborhood prediction problem first described in the graph learning literature. Recently, Chien et al. 2021 revealed the equivalence of the XMC problem and the neighborhood prediction problem in graph learning. Let \(G=(V_{\text{in}},V_{\text{out}},E)\) be a directed bipartite graph, where \(V_{\text{in}}=[N]\) and \(V_{\text{out}}=[L]\) are the input and output node sets, respectively, while \(E\subseteq V_{\text{in}}\times V_{\text{out}}\) is the edge set. A common way to characterize the edge relations is to use a biadjacency matrix \(\mathbf{B}\in\{0,1\}^{N\times L}\), where \(B_{ij}=1\) if and only if \((i,j)\in E\). The goal of the neighborhood prediction problem is to predict the \(i^{th}\) row of \(\mathbf{B}\) via the node attributes of node \(i\). 
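The biadjacency view and the PIFA rule (2) are easy to state in code. The following is a minimal sketch with toy sizes (not the PECOS implementation); `X_stat` stands in for the statistical features \(\Phi_{\text{stat}}(x_{i})\) stacked row-wise.

```python
import numpy as np
import scipy.sparse as sp

# Toy setting: N=4 instances, L=3 labels; y_lists[i] holds the relevant labels of instance i.
y_lists = [[0, 2], [1], [0, 1], [2]]
N, L, d = len(y_lists), 3, 5
X_stat = np.random.rand(N, d)                       # stand-in for Phi_stat(x_i), row-wise

# The multi-labels {y_i} form the biadjacency matrix B in {0,1}^{N x L}: row i is exactly y_i.
rows = [i for i, labels in enumerate(y_lists) for _ in labels]
cols = [l for labels in y_lists for l in labels]
B = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(N, L))

# PIFA, Eq. (2): each label embedding is the l2-normalized sum of the statistical
# features of its positive instances, i.e. a row-normalized B^T X_stat.
Z = B.T @ X_stat                                    # (L, d) unnormalized sums
Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)    # assumes every label has >= 1 positive
```

Hierarchical \(k\)-means on the rows of `Z` then yields the HLT used in Figure 2.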
Since the \(i^{th}\) row of \(\mathbf{B}\) is just a vector in \(\{0,1\}^{1\times L}\) (i.e., a binary vector), it can also be viewed as a multi-label \(\mathbf{y}_{i}\). See Figure 3 for a pictorial illustration. One standard operation in graph learning is graph convolution (Kipf and Welling, 2017), where the key idea is to gather the attributes of the neighborhood of a node to enhance its ego node features. It has been proven to be effective for many graph tasks, including node classification (Kipf and Welling, 2017; Hamilton et al., 2017; Velickovic et al., 2018; Chien et al., 2020), link prediction (Zhang et al., 2021; Zhang and Chen, 2018) and graph classification (Xu et al., 2019; Zhang and Li, 2021). Our proposed method - PINA - is motivated by the connection to the neighborhood prediction task and the graph convolution operation, which we describe in the next section. ## 4 PINA: Predicted Instance Neighborhood Aggregation For simplicity, we mostly focus on the setting where side information is of the form of label text. The case of side information being of the form of instance correlation signals can be treated similarly. A detailed discussion regarding how to apply PINA with instance correlation signals is available in Section 4.2. We propose PINA to allow XMC learners such as XR-Transformers to make use of label text information in a _data enhancement_ manner. Due to the equivalence of XMC and neighborhood prediction, a naive way of including label attributes is via neighborhood aggregation. However, there are several issues preventing us from applying this idea directly. First, one can only apply the average operation on numerical features instead of raw text. Ideally, we have to fine-tune the text vectorizer \(\Phi_{\text{data}}\) with both instance and label text features during the training phase. However, the XMC formalism (Figure 3) does not treat label text as an input, which is suboptimal. Second, the neighborhood relation is defined using labels \(\mathbf{y}_{i}\), which are _unknown for test instances_. See Figure 3 for an illustration. Thus, we cannot apply neighborhood aggregation directly even though we are equipped with the bipartite graph underlying the XMC problem. We describe next the high-level ideas how to resolve these issues. **Lack of knowledge about neighborhoods for test instances.** In order to resolve this issue, we propose to pre-train a neighborhood predictor \(g\). Instead of using the exact neighbors (i.e. ground truth multi-labels), we generate predicted neighbors via \(g\). This allows us to generate neighbors for both _train and test_ instances. Note that pretraining \(g\) only leverages training data (which includes both labels and instances). **The transformer text vectorizer \(\Phi_{\text{data}}\) does not involve label text.** In order to resolve this issue, we propose a pre-training XMC task that also takes label text as input. More specifically, the input and output space of our pretraining task contains both instances and labels. See the illustration of the proposed pretrained XMC in Figure 4. Hence, our pretrained text vectorizer \(\Phi_{\text{pre}}\) is trained with both instance text and label text. This resolves the issue of not being able to include the label text in standard XMCs. ### A detailed description of PINA We implemented PINA as a two-stage method, described in Figure 4. The pseudo-code of the PINA augmentation and pretraining process are listed in the Appendix J. 
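Before the stage-by-stage description, it helps to see the aggregation primitive that PINA builds on. The sketch below applies a graph-convolution-style enhancement using the ground-truth rows of \(\mathbf{B}\); the function name and the concatenate-with-mean design are our own illustrative choices. Its reliance on \(\mathbf{B}\) is exactly what fails for test instances, which is why PINA substitutes predicted neighbors.

```python
import numpy as np
import scipy.sparse as sp

def aggregate_with_true_neighbors(B, X_inst, Z_label):
    """Enhance instance features with the features of their neighboring labels.

    B:       (N, L) sparse biadjacency matrix, i.e. the ground-truth multi-labels.
    X_inst:  (N, d) instance embeddings.
    Z_label: (L, d) label-text embeddings.
    Returns each instance's own embedding concatenated with the mean embedding of
    its neighbors.  This requires knowing B, so it is only available at training time.
    """
    deg = np.asarray(B.sum(axis=1)).ravel()          # number of relevant labels per instance
    deg[deg == 0] = 1.0                              # guard against instances with no labels
    neigh_mean = (B @ Z_label) / deg[:, None]        # (N, d) averaged neighbor features
    return np.hstack([X_inst, neigh_mean])           # (N, 2d) augmented features
```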
The first stage is the pretraining phase, where we design a pretraining task to learn a neighbor predictor \(g(\cdot,\Phi_{\text{stat}})\) via a base XMC learner (e.g., XR-transformer). Note that the pretraining task is also an XMC problem, but both instances and label text are treated as inputs and both the input and output space contain instance and label nodes. The edges are defined by multi-label relations \(\{\mathbf{y}_{i}\}_{i=1}^{N}\) in an undirected manner. We also add edges from all instances and label nodes in the input space to their output space counterpart. More Figure 3: Equivalence of the XMC problem and neighborhood prediction problem. Blue nodes correspond to instances and orange nodes correspond to labels. Note that the multi-label vectors \(\{\mathbf{y}_{i}\}\) can be viewed as the rows of biadjacency matrix \(\mathbf{B}\), which characterize the edges in the graphs on the right. Hence, predicting the multi-label \(\mathbf{y}_{i}\) is equivalent to predicting the neighborhood of blue node \(i\) in the graph on the right. specifically, we construct \(\mathbf{B}_{pre}\) as described in Figure 4. Recall that \(\mathbf{B}\in\{0,1\}^{N\times L}\) is obtained by training the multi-labels \(\{\mathbf{y}_{i}\}_{i=1}^{N}\) and \(\mathbf{I}\) represents an identity matrix of appropriate dimensions. Hence, in our pretraining XMC problem, we aim to predict the \(i^{th}\) row of \(\mathbf{B}_{pre}\) using \(x_{i}\) when \(i\in[N]\) and \(z_{i-N}\) when \(i=N+1,N+2,\dots,N+L\). This allows both the label and instance text to be observed during the pretraining phase. We consequently obtain the corresponding text vectorizer \(\Phi_{\text{pre}}\) to generate numerical features for both the labels and instances. The second stage is the PINA augmentation phase, during which we leverage the pretrained neighborhood predictor \(g(\cdot,\Phi_{\text{stat}})\) and text vectorizer \(\Phi_{\text{pre}}(\cdot)\) to augment the instance features. We first predict the most relevant nodes among the output space of the pretraining stage as neighbors for both training and _test_ instances via our pretrained neighbor predictor \(g(\cdot,\Phi_{\text{stat}})\). More specifically, we obtain the neighborhood prediction vector \(g(x_{i},\Phi_{\text{stat}})\in[0,1]^{1\times L}\) and zero out all but the top \(K\) largest values \(\mathbf{P}_{i}=top_{K}(g(x_{i},\Phi_{\text{stat}}))\). Then we perform neighborhood aggregation on the numerical features obtained from \(\Phi_{\text{pre}}\) accordingly, which results in PINA features. See lines \(3-8\) in Algorithm 1 for PINA feature extraction of each instance. The augmented features are fed to the next XMC learner for solving the downstream XMC task. ### Applying PINA to instance correlation signals We describe next how to apply PINA to instance correlation signals. In this case, the instance correlation can be formulated as an instance-to-instance graph (i.e., I2I graph). Similarly to the construction rules of the Amazon co-purchase graph benchmarking dataset known from the graph learning literature, a link \((i,j)\) exists if and only if the instance correlation signal between \(i\) and \(j\) is larger than a threshold. We thus capture the instance correlation signal by the (bi)adjacency matrix \(\mathbf{B}^{\prime}\). We then use \(\mathbf{B}^{\prime}\) directly to formulate our pretraining XMC. More specifically, one can choose \(\mathbf{B}_{pre}=\mathbf{B}^{\prime}\) in Stage 1 of Figure 4 to obtain the neighbor predictor and text vectorizer. 
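For either choice of \(\mathbf{B}_{pre}\) (the label-text construction of Section 4.1 or \(\mathbf{B}^{\prime}\) above), the augmentation stage reduces to a top-\(K\) mask followed by a normalized aggregation. The sketch below is our reading of that step, not the pseudo-code of Algorithm 1: weighting the kept neighbors by their renormalized prediction scores and concatenating the instance's own \(\Phi_{\text{pre}}\) feature are assumptions about details the text leaves to the appendix.

```python
import numpy as np

def pina_augment(g_scores, F_pre, phi_pre_x, K=5):
    """Sketch of PINA's Stage 2 for a single instance.

    g_scores:  (M,) relevance scores from the pretrained neighbor predictor
               g(., Phi_stat) over the M nodes of the pretraining output space.
    F_pre:     (M, d) numerical features of those nodes produced by Phi_pre.
    phi_pre_x: (d,) the instance's own Phi_pre feature.
    """
    topk = np.argsort(-g_scores)[:K]             # indices of the K most relevant nodes
    p = np.zeros_like(g_scores)
    p[topk] = g_scores[topk]                      # P_i = top_K(g(x_i, Phi_stat))
    p = p / (p.sum() + 1e-12)                     # normalized aggregation weights (assumed)
    neigh = p @ F_pre                             # (d,) weighted neighborhood aggregate
    return np.concatenate([phi_pre_x, neigh])     # augmented feature for the downstream XMC
```

Because the predictor \(g\) is trained only on training data but can be evaluated on any input text, the same routine applies unchanged to test instances.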
Note that the set of instances considered in the downstream application need not be identical to the instances in the I2I graph. We only require their text domains to be the same (i.e., all texts associated with instances are product descriptions). This is due to the fact that we can still obtain the predicted neighbors among instances in the I2I graph for each instance in the downstream application (similar to the reason why PINA applies to test data). The remainder of the PINA pipeline is as described in Figure 4. Figure 4: Illustration of the two-stage PINA method. At stage \(1)\), we construct a pretraining biadjacency matrix \(\mathbf{B}_{pre}\) using only the _training data_. Since we still have an XMC problem, we can train an XMC learner as the neighbor predictor \(g(\cdot,\Phi_{\text{stat}})\) and obtain its corresponding text vectorizer \(\Phi_{\text{pre}}(\cdot)\) as well. At stage \(2)\), we first use the pretrained neighbor predictor \(g(\cdot,\Phi_{\text{stat}})\) to extract the most relevant (top \(K\)) nodes among the output space of the pretraining task. Then we apply the pretrained text vectorizer \(\Phi_{\text{pre}}(\cdot)\) to obtain the numerical features for both instances and labels. Finally, we perform normalized neighborhood aggregation to obtain the PINA augmented features, which can then be used in downstream XMC. ## 5 Experimental Results We demonstrate the effectiveness of PINA on both public benchmark datasets, where the side information is of the form of label text, and proprietary datasets, where the side information is of the form of instance correlation signals. We report the precision at \(k\) (P@\(k\)) and recall at \(k\) (R@\(k\)) as our evaluation metrics. Their definitions can be found in Appendix F. ### 5.1 Label text benchmark datasets **Datasets.** We consider the public benchmark long-text datasets for product-to-product recommendations as well as for predicting related Wikipedia articles, taken from (Bhatia et al., 2016). The data statistics can be found in Table 1. For a fair comparison with previous works, we adopt the provided BoW features as our statistical text features in the experiments. For LF-Amazon-1.3M, where BoW features are constructed using title text only, we still use the provided BoW features (Bhatia et al., 2016) and leverage the full text only as input to the transformer encoders. **Baseline methods.** We not only compare PINA with plain XR-Transformers (Zhang et al., 2021), but also with other previously published XMC methods that achieve state-of-the-art results for the label text XMC problem. These include ECLARE (Mittal et al., 2021), DECAF (Mittal et al., 2021), AttentionXML (You et al., 2019) and SiameseXML (Dahiya et al., 2021). For methods other than XR-Transformer, we directly take the reported numbers from the DECAF paper (Mittal et al., 2021), marked with superscript \({}^{\dagger}\), and the SiameseXML paper (Dahiya et al., 2021), marked with superscript \({}^{\star}\). For PINA, we use XR-Transformers as our baseline XMC learners. **Results.** The results are summarized in Table 2. Compared to XR-Transformers, PINA consistently improves the performance with a significant gain across all datasets. This demonstrates the effectiveness of PINA as a data enhancement approach for leveraging label text side information. As an example, PINA improves XR-Transformer with a gain of 1-\(2\%\). 
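For reference, the P@\(k\) and R@\(k\) numbers quoted here and in the tables follow the standard XMC definitions (Appendix F of the paper states the exact form used); a minimal sketch:

```python
import numpy as np

def precision_recall_at_k(y_true, scores, k):
    """Standard precision@k and recall@k for one instance.

    y_true: (L,) binary ground-truth label vector.
    scores: (L,) predicted relevance scores.
    """
    topk = np.argsort(-scores)[:k]                 # k highest-scoring labels
    hits = y_true[topk].sum()                      # how many of them are relevant
    return hits / k, hits / max(y_true.sum(), 1)   # (P@k, R@k); guard empty label sets
```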
Compared to previously published state-of-the-art methods, XR-Transformer + PINA outperforms SiameseXML by around \(2\%\) and \(2.5\%\) on LF-Amazon-131K and LF-WikiSeeAlso-320K, respectively. At the same time, XR-Transformer + PINA achieves roughly the same performance as AttentionXML on LF-Wikipedia-500K. AttentionXML is outperformed by XR-Transformer + PINA with a \(4\%\)-\(4.5\%\) margin on LF-Amazon-131K and LF-WikiSeeAlso-320K. Moreover, AttentionXML exhibits at least a \(2.5\) times larger training time compared to XR-Transformer + PINA. These results again demonstrate the superiority of the proposed PINA method in leveraging label text side information in the XMC problem. More experimental details, such as significance tests, are included in Appendix F. Note that none of the previously reported methods were tested on the (long text) LF-Amazon-1.3M dataset. Some were tested on the short text version LF-AmazonTitles-1.3M, which exclusively uses instance titles as text features. We have included the results on the original LF-AmazonTitles-1.3M dataset in Table 3, which show that PINA outperforms published state-of-the-art methods by a large margin. **Ablation study.** In Section 4, we introduced two major problems in implementing the neighborhood aggregation idea. For the problem of lacking neighborhoods for test instances, it is straightforward to see why using predicted neighbors resolves the issue. In the ablation study that follows, we test whether letting the transformer text vectorizer \(\Phi_{\text{dnn}}\) observe the label text influences the performance of the learner. Instead of pretraining with \(\mathbf{B}_{pre}\), one may also pretrain the neighbor predictor with \(\mathbf{B}\). Our results are listed in Table 4. We can see that letting the transformer text vectorizer \(\Phi_{\text{dnn}}\) observe the label text is indeed crucial, which verifies our intuition described in Section 4. Moreover, we find that pretraining neighborhood predictors with \(\mathbf{B}\) can often lead to worse results compared to applying XR-Transformer only. This further highlights the necessity of our design for PINA. Finally, we provide a qualitative analysis in Appendix G which further validates the necessity of the design of PINA. ### 5.2 Proprietary datasets with instance correlation signal **Datasets.** We also conduct experiments on a proprietary dataset pertaining to tasks similar to those illustrated in Figure 1. This proprietary dataset consists of millions of instances and labels. Our side information is of the form of instance correlation signals among roughly millions of instances. The instance correlation signal is aggregated similarly to the case described in the Introduction. **Settings.** Due to the large scale of the data and the requirements for small inference times and daily model updates, we choose XR-Linear (Yu et al., 2022) as our downstream XMC model. Nevertheless, since the pretraining step in PINA can be performed beforehand and requires less frequent updates (e.g., monthly), we can still use the XR-Transformer as our neighborhood predictor during PINA augmentation. 
The \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \(d_{BoW}\) & \(L\) & \(N_{train}\) & \(N_{test}\) & \(\bar{n}\) & \(\bar{L}\) \\ \hline LF-Amazon-131K & 80,000 & 131,073 & 294,805 & 134,835 & 5.15 & 2.29 \\ LF-WikiSeeAlso-320K & 80,000 & 312,330 & 693,082 & 177,515 & 4.67 & 2.11 \\ LF-Wikiipedia-500K & 500,000 & 501,070 & 1,813,391 & 783,743 & 24.75 & 4.77 \\ LF-Amazon-1.3M & 128,000 & 1,305,265 & 2,248,619 & 970,273 & 28.24 & 22.20 \\ \hline \hline \end{tabular} \end{table} Table 1: Public benchmark dataset statistics: \(N_{train},N_{test}\) refer to the number of instances in the training and test sets, respectively; \(L\): the number of labels. \(\bar{L}\): the average number of positive labels per instance; \(\bar{n}\): average number of instances per label; \(d_{BoW}\): the Bag-of-Word feature dimension. high-level idea of the XR-Linear model can be understood with the help of Figure 2, where XR-Linear does not include Step 2 (i.e., machine learned matching). It directly trains a linear OVA classifier recursively on the HLT with input statistical text features such as BoW or TF-IDF. Besides applying PINA with XR-Linear, we also conduct an ablation study. We test if our performance gain is merely a consequence of concatenating features from pretrained transformer text vectorizers or if neighborhood aggregation also plays an important role. **Results.** We report the relative performance compared to the plain XR-Linear model. Our results are listed in Table 5. One can observe that PINA once again consistently improves the performance of downstream XMC models under all reported metrics. Furthermore, our ablation study shows that the performance gain of PINA does not merely come from concatenating pretrained text features. Our neighborhood aggregation mechanism is indeed important. Notably, merely using pretrained text features can lead to worse performance in P\(@10\) and R\(@10\). ## 6 Conclusion We proposed Predicted Instance Neighborhood Aggregation (PINA), a data enhancement framework that allows traditional XMC models to leverage various forms of side information, such as label metadata and instance correlation signals. Motivated by the neighborhood prediction problem from the graph learning literature, PINA enriches the instance features via neighborhood aggregation similar to what graph convolutions and message-passing operations do in many graph learning tasks. We conducted experiments on both public benchmark datasets and a proprietary dataset. PINA offers consistent gains when compared to its \begin{table} \begin{tabular}{c c c} \hline \hline LF-Amazon-131K & **P@1** & **P@3** & **P@5** \\ \hline XR-Transformer & 45.61 & 30.85 & 22.32 \\ XR-Transformer + PINA & 43.89 & 30.43 & 22.67 \\ XR-Transformer + PINA & **46.76** & **31.88** & **23.20** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of PINA on the LF-Amazon-131K dataset. Bold font indicates the best results. For PINA-naive, we use only **B** as our pretraining target in Stage 1 of PINA. 
\begin{table} \begin{tabular}{c|c c|c c|c c|c} \hline \hline Methods & **P@1** & **P@3** & **P@5** & **Train Time (hrs)** & **P@1** & **P@3** & **P@5** & **Train Time (hrs)** \\ \hline & \multicolumn{5}{c|}{LF-Amazon-131K} & \multicolumn{5}{c}{LF-WikiSeeAlso-320K} \\ \hline DECAF\({}^{\dagger}\) & 42.94 & 28.79 & 21 & 1.8 & 41.36 & 28.04 & 21.38 & 4.84 \\ AttentionXML\({}^{\dagger}\) & 42.9 & 28.96 & 20.97 & 50.17 & 40.5 & 26.43 & 21.38 & 90.37 \\ SiameseXML\({}^{\star}\) & 44.81 & 30.19 & 21.94 & 1.18 & 42.16 & 28.14 & 21.35 & 2.33 \\ ECLARE\({}^{\star}\) & 43.56 & 29.65 & 21.57 & 2.15 & 40.58 & 26.86 & 20.14 & 9.40 \\ XR-Transformer & 45.61 & 30.85 & 22.32 & 7.9 & 42.57 & 28.24 & 21.30 & 22.1 \\ XR-Transformer + PINA & **46.76** & **31.88** & **23.20** & 9.8 & **44.54** & **30.11** & **22.92** & 28.3 \\ \hline & \multicolumn{5}{c|}{LF-Wikipedia-500K} & \multicolumn{5}{c}{LF-Amazon-1.3M} \\ \hline DECAF\({}^{\dagger}\) & 73.96 & 54.17 & 42.43 & 44.23 & - & - & - & - \\ AttentionXML\({}^{\dagger}\) & 82.73 & **63.75** & **50.41** & 221.6 & - & - & - & - \\ SiameseXML\({}^{\star}\) & 67.26 & 44.82 & 33.73 & 7.31 & - & - & - & - \\ ECLARE\({}^{\star}\) & 68.04 & 46.44 & 35.74 & 86.57 & - & - & - & - \\ XR-Transformer & 81.62 & 61.38 & 47.85 & 41.0 & 54.67 & 47.87 & 42.93 & 28.2 \\ XR-Transformer + PINA & **82.83** & 63.14 & 50.11 & 85.0 & **58.33** & **51.06** & **46.04** & 39.1 \\ \hline \hline \end{tabular} \end{table} Table 2: Main result on label text XMC benchmark datasets. Bold font refers to the best result. Superscripts \({}^{\dagger}\) and \({}^{\star}\) indicate the results are taken from DECAF paper (Mittal et al., 2021) and SiameseXML (Daihya et al., 2021) respectively. \begin{table} \begin{tabular}{c c c c} \hline \hline Methods & **P@1** & **P@3** & **P@5** \\ \hline & LF-AmazonTitle-1.3M \\ \hline DECAF\({}^{\dagger}\) & 50.67 & 44.49 & 40.35 \\ AttentionXML\({}^{\dagger}\) & 45.04 & 39.71 & 36.25 \\ SiameseXML\({}^{\star}\) & 49.02 & 42.72 & 38.52 \\ ECLARE\({}^{\star}\) & 50.14 & 44.09 & 40.00 \\ XR-Transformer & 50.98 & 44.49 & 40.05 \\ XR-Transformer + PINA & **55.76** & **48.70** & **43.88** \\ \hline \hline \end{tabular} \end{table} Table 3: Study of PINA on LF-AmazonTitle-1.3M dataset. Bold font numbers indicate the best results. backbone XMC models such as XR-Transformers and XR-Linear. The combination of PINA and a XR-Transformer also outperforms published state-of-the-art methods specialized for label text on all the benchmarking datasets. ## Acknowledgements The authors thank the support from Amazon and the Amazon Conference Grant. Part of this work was funded by the NSF CIF Grant 1956384. Cho-Jui Hsieh is supported in part by NSF IIS-\(2008173\) and IIS-\(2048280\).
2304.12060
Liouville theorems for a class of degenerate or singular Monge-Ampère equations
In this note, we classify solutions to a class of Monge-Amp\`ere equations whose right hand side may be degenerate or singular in the half space. Solutions to these equations are special solutions to a class of fourth order equations, including the affine maximal hypersurface equation, in the half space. Both the Dirichlet boundary value and Neumann boundary value cases are considered.
Ling Wang, Bin Zhou
2023-04-24T12:55:21Z
http://arxiv.org/abs/2304.12060v1
# Liouville theorems for a class of degenerate or singular Monge-Ampere equations ###### Abstract. In this note, we classify solutions to a class of Monge-Ampere equations whose right hand side may be degenerate or singular in the half space. Solutions to these equations are special solutions to a class of fourth order equations, including the affine maximal hypersurface equation, in the half space. Both the Dirichlet boundary value and Neumann boundary value cases are considered. Key words and phrases:degenerate Monge-Ampere equation, Liouville theorems, partial Legendre transform, method of moving spheres 2020 Mathematics Subject Classification: 35J96, 35J70, 35B53, 35A09 This research is partially supported by NSFC grants 12271008 and National Key R&D Program of China SQ2020YFA0712800. any convex continuous solution to (1.1) with the growth condition \(u=O(|x|^{3+\alpha-\varepsilon})\) as \(|x|\to+\infty\) must be the form of \[u(Ax)=Bx_{n}+\frac{1}{2}|x^{\prime}|^{2}+\frac{x_{n}^{2+\alpha}}{(2+\alpha)(1+ \alpha)}\] for some sliding \(A\) along \(x_{n}=0\), and some constant \(B\). In particular, when \(\alpha=0\), the solution is a quadratic polynomial. This result was later extended to the singular case with \(\alpha\in(-1,0)\) by Savin and Zhang [SZ]. There are examples show that the growth condition at infinity is necessary in general dimensions. When \(\alpha=-1\), the local asymptotic behavior of the solution near the boundary in dimension two was studied in [Ru]. In this paper, we concentrate on the two dimensional case. Our first result classifies all solutions to (1.1) with Dirichlet condition in dimension two when \(\alpha>-2\). **Theorem 1.1**.: _Let \(u(x,y)\in C^{2}(\mathbb{R}^{2}_{+})\cap C(\overline{\mathbb{R}^{2}_{+}})\) be a convex solution to_ \[\left\{\begin{aligned} \det D^{2}u&=(a+by)^{ \alpha}&\text{in }\mathbb{R}^{2}_{+},\\ u(x,0)&=\frac{1}{2}x^{2}&\text{on } \partial\mathbb{R}^{2}_{+},\end{aligned}\right. \tag{1.3}\] _where \(a\geq 0\), \(b>0\), and \(\alpha>-2\). Then there exist \(A,\)\(B,\)\(C\in\mathbb{R}\) with \(A\geq 0\) such that_ \[u(x,y)=\left\{\begin{aligned} &\frac{(b-aA)(a+by)^{2+\alpha}}{b^{3}(1+ \alpha)(2+\alpha)}+\frac{A(a+by)^{3+\alpha}}{b^{3}(2+\alpha)(3+\alpha)}-By\\ &-\frac{(b-aA)a^{2+\alpha}}{b^{3}(1+\alpha)(2+\alpha)}-\frac{Aa ^{3+\alpha}}{b^{3}(2+\alpha)(3+\alpha)}+\frac{(x-Cy)^{2}}{2(1+Ay)},& \alpha\neq-1;\\ &\frac{b-aA}{b^{3}}(a+by)\ln(a+by)+\frac{A}{2b}y^{2}-By\\ &-\frac{(b-aA)a\ln a}{b^{3}}+\frac{(x-Cy)^{2}}{2(1+Ay)},& \alpha=-1.\end{aligned}\right. \tag{1.4}\] **Remark 1.2**.: 1. _When_ \(a=0\)_, we improve the exponent in the results of_ _[_S1_, S2, SZ_]_ _to_ \(\alpha>-2\) _in two dimensional case. This exponent is sharp since (_1.3_) admits no solutions continuous up to the boundary for_ \(\alpha\leq-2\) _(see details in Remark_ 3.1_). When_ \(\alpha=0\)_, Theorem_ 1.1 _can be also found in_ _[_Fi_, Page 145-148]__._ 2. _If we assume_ \(u=O(|(x,y)|^{3+\alpha-\varepsilon})\) _as_ \(|(x,y)|\to+\infty\)_, then we have that_ \(A\) _must be_ \(0\) _in (_1.4_). Hence we can recover some of the results in_ _[_S2_, SZ_]_ _in dimension two._ 3. _In a subsequent work, we are going to study the Liouville type theorem the following problem_ (1.5) \[\left\{\begin{aligned} U^{ij}w_{ij}&=0&&\text{ in }\mathbb{R}_{+}^{n},\\ u&=\frac{1}{2}|x^{\prime}|^{2}&&\text{ on } \partial\mathbb{R}_{+}^{n},\\ w&=1&&\text{ on }\partial\mathbb{R}_{+}^{n}. \end{aligned}\right.\] The main idea to prove Theorem 1.1 is as follows. 
Let \(u(x,y)\) be a uniformly convex solution to (1.1). Then its partial Legendre transform in the \(x\)-variable is \[u^{\star}(\xi,\eta)=xu_{x}(x,y)-u(x,y), \tag{1.6}\] where \((\xi,\eta)=(u_{x},y)\). It is easy to check that \(u^{\star}\) is a solution to \[(a+b\eta)^{\alpha}u^{\star}_{\xi\xi}+u^{\star}_{\eta\eta}=0. \tag{1.7}\] When \(a=0\), this Grushin type equation was studied in [CS]. By a change of variables \(v(x_{1},x_{2})=u^{\star}\left(x_{1},f(x_{2})\right)\), where \[\xi=x_{1},\ \eta=f(x_{2})=b^{\frac{-\alpha}{\alpha+2}}\left(\frac{\alpha+2}{2}x_ {2}\right)^{\frac{2}{\alpha+2}}-\frac{a}{b},\] we know that \(v\) solves the following divergence type equation \[\operatorname{div}\left(x_{2}^{\frac{\alpha}{\alpha+2}}\nabla v\right)=0, \tag{1.8}\] which may be degenerate or singular. A Liouville theorem for (1.8) on the upper half space has been obtained recently by [WZ]. However, in our case, the domain may shift after the transformations. Hence, we need a generalization of the result in [WZ] to general upper half spaces. The above approach also works for the case of Neumann problem. As for the Neumann boundary value case, we only consider the degenerate case, stated as follows. **Theorem 1.3**.: _Let \(u(x,y)\in C^{2}(\mathbb{R}_{+}^{2})\cap C^{1}(\overline{\mathbb{R}_{+}^{2}})\) be a convex solution to_ \[\left\{\begin{aligned} &\det D^{2}u=y^{\alpha}&& \text{in }\mathbb{R}_{+}^{2},\\ & u_{y}(x,0)=0&&\text{ on }\partial\mathbb{R}_{+}^{2}, \end{aligned}\right. \tag{1.9}\] _where \(\alpha\geq 0\). Then there exist a constant \(A>0\), and a linear function \(l(x)\) such that_ \[u(x,y)=\frac{1}{2A}x^{2}+\frac{A}{(2+\alpha)(1+\alpha)}y^{2+\alpha}+l(x). \tag{1.10}\] **Remark 1.4**.: 1. _When_ \(\alpha=0\)_, Theorem_ 1.3 _is included in_ _[_JT_, Theorem 1.1]_. In fact, it is proved in_ _[_JT_]_ _that any convex solution to Neumann problem of Monge-Ampere equations in the half plane must be a quadratic polynomial for two dimensional case, and the conclusion still holds for dimension_ \(n\geq 3\) _if either the boundary value is zero or the solution restricted on some \(n-2\) dimensional subspace is bounded from above by a quadratic function. Here we extend this to the degenerate case._ 2. _It is still unknown whether Theorem 1.3 is true for \(\alpha<0\). Although our method doesn't work for this case, we believe the conclusion is true for \(\alpha>-1\)._ Finally, we turn to the Liouville theorem on the whole space. The celebrated result of Jorgens [Jo], Calabi [Ca] and Pogorelov [Po] states that any entire classical convex solution to the Monge-Ampere equation \[\det D^{2}u=1\quad\text{in }\mathbb{R}^{n}\] must be a quadratic polynomial. Caffarelli [Caf] extended this result to viscosity solutions (the proof can be also found in [CL, Theorem 1.1]). For another direction of extension, Jin and Xiong [JX] studied the class of equations \[\det D^{2}u(x,y)=|y|^{\alpha} \tag{1.11}\] on the whole plane \(\mathbb{R}^{2}\), and established a Liouville theorem. **Theorem 1.5** ([JX, Theorem 1.1]).: _Let \(u(x,y)\) be convex generalized (or Alexandrov) solution to (1.11) with \(\alpha>-1\). Then there exist some constants \(A>0\), \(B\in\mathbb{R}\) and a linear function \(l(x,y)\) such that_ \[u(x,y)=\frac{1}{2A}x^{2}+\frac{AB^{2}}{2}y^{2}+Bxy+\frac{A}{(2+\alpha)(1+ \alpha)}|y|^{2+\alpha}+l(x,y). \tag{1.12}\] At the end of this paper, we use the approach above to give a new proof of this result in the case of \(\alpha\geq 0\). 
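As a quick sanity check (not part of the argument in [JX]), one can verify directly that the profile (1.12) solves (1.11) away from \(\{y=0\}\): since the linear part \(l(x,y)\) does not affect the Hessian, \[u_{xx}=\frac{1}{A},\qquad u_{xy}=B,\qquad u_{yy}=AB^{2}+\frac{A}{(2+\alpha)(1+\alpha)}\,\partial_{y}^{2}|y|^{2+\alpha}=AB^{2}+A|y|^{\alpha},\] so that \[\det D^{2}u=u_{xx}u_{yy}-u_{xy}^{2}=\frac{1}{A}\left(AB^{2}+A|y|^{\alpha}\right)-B^{2}=|y|^{\alpha}.\] The content of Theorem 1.5 is that, up to the stated constants and the linear term, these are the only convex solutions.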
The main idea of Jin and Xiong in [JX] is that using the partial Legendre transform to change (1.11) into a class of linearized Monge-Ampere equations, then applying the Harnack inequality for linearized Monge-Ampere equations derived by Caffarelli and Gutierrez [CG] and the scaling argument to classify all solutions of the transformed equation. Our new proof is similar to Theorem 1.1 and Theorem 1.3. The structure of this paper is as follows. In Section 2, we derive the Liouville theorems for a class of linear elliptic equations in divergence form including (1.8). Then we prove Theorem 1.1, Theorem 1.3 and Theorem 1.5 in Section 3. ## 2. Liouville theorems for linear elliptic equations in divergence form In this section, we establish a Liouville theorem for a class of linear elliptic equations in divergence form, which may be degenerate or singular cases, in the half space. This theorem can be viewed as an extension of [WZ, Theorem 1.1]. The proof is very similar to [WZ, Theorem 1.1], where the method of moving sphere will be used. Denote \(\mathbb{R}^{n}_{l}=\{x=(x^{\prime},x_{n}):x^{\prime}\in\mathbb{R}^{n-1},\,x_{n}>l\}\) for \(l\geq 0\). **Theorem 2.1**.: _For \(n\geq 2\) and \(a\in\mathbb{R}\), let \(u\in C^{2}(\mathbb{R}^{n}_{l})\cap C^{0}(\overline{\mathbb{R}^{n}_{l}})\) be a solution to_ \[\begin{cases}\operatorname{div}\left(x_{n}^{a}\nabla u\right)=0,&u>-C_{0}\text { in }\mathbb{R}^{n}_{l},\\ u(x^{\prime},l)=0,&\text{on }\mathbb{R}^{n-1}\times\{x_{n}=l\},\end{cases}\] _where \(l\geq 0\) and \(C_{0}>0\). Then \(u=C_{*}\left(x_{n}^{1-a}-l^{1-a}\right)\) for some nonnegative constant \(C_{*}\). In particular, when \(a\geq 1\), \(C_{*}=0\)._ **Remark 2.2**.: _When \(l=0\), Theorem 2.1 is just the Theorem 1.1 of [WZ]._ Proof of Theorem 2.1.: We extend \(u\) to \(\overline{\mathbb{R}_{+}^{n}}\) by letting \(u\left(x^{\prime},x_{n}\right)=0\) in \(\mathbb{R}^{n-1}\times[0,l)\), and denote it by \(\widetilde{u}\). Hence, we know that \(\widetilde{u}(x)\in C\left(\overline{\mathbb{R}_{+}^{n}}\right)\). Firstly, we show that \(\widetilde{u}\) is weakly differentiable in \(\mathbb{R}_{+}^{n}\) and \[\nabla\widetilde{u}=\begin{cases}\nabla u,&\mathbb{R}_{l}^{n},\\ 0,&\mathbb{R}^{n-1}\times(0,l).\end{cases}\] Indeed, \(\forall\,\varphi\in C_{0}^{\infty}(\mathbb{R}_{+}^{n})\), by integration by parts, we have \[\int_{\mathbb{R}_{+}^{n}}\widetilde{u}\,\partial_{x_{i}}\varphi\ \mathrm{d}x=\int_{ \mathbb{R}^{n-1}\times(l,+\infty)}u\,\partial_{x_{i}}\varphi\ \mathrm{d}x=-\int_{\mathbb{R}^{n-1}\times(l,+\infty)}\partial_{x_{i}}u\, \varphi\ \mathrm{d}x\] for \(i\leq n-1\) and \[\int_{\mathbb{R}_{+}^{n}}\widetilde{u}\,\partial_{x_{n}}\varphi \ \mathrm{d}x =\int_{\mathbb{R}^{n-1}\times(l,+\infty)}u\,\partial_{x_{n}} \varphi\ \mathrm{d}x\] \[=-\int_{\mathbb{R}^{n-1}\times\{x_{n}=l\}}u\,\varphi\ \mathrm{d}x^{\prime}-\int_{\mathbb{R}^{n-1}\times(l,+\infty)}\partial_{x_{n}}u \,\varphi\ \mathrm{d}x\] \[=-\int_{\mathbb{R}^{n-1}\times(l,+\infty)}\partial_{x_{n}}u\, \varphi\ \mathrm{d}x,\] where we used \(u(x^{\prime},l)=0\) in the last equality. 
Next, we show that \(\widetilde{u}\in W^{1,2}_{loc}\left(\mathbb{R}_{+}^{n}\right)\cap C\left( \overline{\mathbb{R}_{+}^{n}}\right)\) is a weak solution to \[\begin{cases}\mathrm{div}\left(x_{n}^{a}\nabla\widetilde{u}\right)=0,& \widetilde{u}>-C_{0}\ \text{in}\ \mathbb{R}_{+}^{n},\\ \widetilde{u}=0,&\text{on}\ \partial\mathbb{R}_{+}^{n}.\end{cases} \tag{2.1}\] Indeed, for any \(\varphi\in C_{0}^{\infty}(\mathbb{R}_{+}^{n})\), there is \[\int_{\mathbb{R}_{+}^{n}}\mathrm{div}\left(x_{n}^{a}\nabla \widetilde{u}\right)\varphi\ \mathrm{d}x =\int_{\mathbb{R}^{n-1}\times(l,+\infty)}\mathrm{div}\left(x_{n}^{a} \nabla\widetilde{u}\right)\varphi\ \mathrm{d}x+\int_{\mathbb{R}^{n-1}\times(0,l)} \mathrm{div}\left(x_{n}^{a}\nabla\widetilde{u}\right)\varphi\ \mathrm{d}x\] \[=-\int_{\mathbb{R}^{n-1}\times(l,+\infty)}x_{n}^{a}\nabla u\cdot \nabla\varphi\ \mathrm{d}x-\int_{\mathbb{R}^{n-1}\times\{x_{n}=l\}}\partial_{x_{n}}u\, \varphi\ \mathrm{d}x^{\prime}\] \[=\int_{\mathbb{R}^{n-1}\times(l,+\infty)}\mathrm{div}\left(x_{n}^ {a}\nabla u\right)\varphi\ \mathrm{d}x=0.\] It's clear that \(\widetilde{u}>-C_{0}\) in \(\mathbb{R}_{+}^{n}\) and \(\widetilde{u}=0\) on \(\partial\mathbb{R}_{+}^{n}\). Hence, \(\widetilde{u}\in W^{1,2}_{loc}\left(\mathbb{R}_{+}^{n}\right)\cap C\left( \overline{\mathbb{R}_{+}^{n}}\right)\) is a weak solution to (2.1). For any fixed \(x\in\partial\mathbb{R}_{+}^{n}\) and \(\lambda>0\), by Kelvin transformation \[y^{x,\lambda}=x+\frac{\lambda^{2}(y-x)}{|y-x|^{2}},\quad\forall y\in\overline{ \mathbb{R}_{+}^{n}},\] we define \[\widetilde{u}_{x,\lambda}(y)=\frac{\lambda^{n-2+a}}{|y-x|^{n-2+a}}\widetilde{u} \left(y^{x,\lambda}\right),\quad\forall y\in\overline{\mathbb{R}_{+}^{n}}.\] By [YD, Theorem 2.1], we know that \(\widetilde{u}_{x,\lambda}(y)\in W^{1,2}_{loc}\left(\mathbb{R}_{+}^{n}\right)\) satisfies \(\operatorname{div}\left(y_{n}^{a}\nabla\widetilde{u}_{x,\lambda}\right)=0\) in the weak sense, i.e. \(\widetilde{u}_{x,\lambda}\) satisfies the same equation. For \(a>2-n\), we consider \(\bar{u}=\widetilde{u}+C_{0}\) instead of \(\widetilde{u}\). Then \(\lim\limits_{|y|\to 0}\bar{u}(x+y)=C_{0}\) for \(x\in\partial\mathbb{R}_{+}^{n}\). Let \[w_{x,\lambda}(y)=\bar{u}(y)-\bar{u}_{x,\lambda}(y),\quad\forall\,y\in \mathbb{R}_{+}^{n}.\] We have \[\varliminf_{|y|\to+\infty}w_{x,\lambda}(y)\geq 0-\lim_{|y|\to+\infty}\frac{ \lambda^{n-2+a}}{|y-x|^{n-2+a}}\bar{u}\left(x+\frac{\lambda^{2}(y-x)}{|y-x|^{ 2}}\right)=0.\] By the maximum principle, we have \(\widetilde{u}_{x,\lambda}(y)\leq\widetilde{u}(y),\ \forall\,y\in\mathbb{R}_{+}^{n} \backslash B_{\lambda}(x)\). Hence by Lemma 2.3 below, we know that \(\widetilde{u}(y^{\prime},y_{n})=\widetilde{u}(y_{n})\). Then solving the corresponding ODE gives us the desired result. For \(a<2-n\), we consider \(\bar{u}=\widetilde{u}-1\) instead of \(\widetilde{u}\). Then \(\lim\limits_{|y|\to 0}\bar{u}(x+y)=-1\) for \(x\in\partial\mathbb{R}_{+}^{n}\). Let \[w_{x,\lambda}(y)=\bar{u}(y)-\bar{u}_{x,\lambda}(y),\quad\forall\,y\in \mathbb{R}_{+}^{n}.\] We have \[\varliminf_{|y|\to+\infty}w_{x,\lambda}(y) =\varliminf_{|y|\to+\infty}\bar{u}(y)-\lim_{|y|\to+\infty}\frac{ |y-x|^{2}}{\lambda^{2}}\bar{u}\left(x+\frac{\lambda^{2}(y-x)}{|y-x|^{2}}\right)\] \[\geq-1-C_{0}+\lim_{|y|\to+\infty}\frac{|y-x|^{2}}{\lambda^{2}}\] \[=+\infty.\] Again by the maximum principle, we have \(\widetilde{u}_{x,\lambda}(y)\leq\widetilde{u}(y),\ \forall\,y\in\mathbb{R}_{+}^{n} \backslash B_{\lambda}(x)\). 
Similarly, by Lemma 2.3, we also have \(\widetilde{u}(y^{\prime},y_{n})=\widetilde{u}(y_{n})\), then we can obtain the conclusion. As for \(a=2-n\), we need to modify \(\widetilde{u}_{x,\lambda}(y)\) to be \[\widetilde{u}_{x,\lambda}(y)=\widetilde{u}\left(y^{x,\lambda}\right)+\ln\frac {\lambda}{|y-x|}.\] Then by similar arguments, we also have \(\widetilde{u}_{x,\lambda}(y)\leq\widetilde{u}(y),\ \forall\,y\in\mathbb{R}_{+}^{n} \backslash B_{\lambda}(x)\). The result follows by applying Lemma 2.4. In the proof of Theorem 2.1, we used two crucial lemmas of moving spheres [WZ]. For readers' convenience, we include a proof here, which is very similar to the proof of [Li, Lemma 5.7]. **Lemma 2.3** ([WZ, Lemma 3.3]).: _Assume \(f(y)\in C^{0}\left(\overline{\mathbb{R}_{+}^{n}}\right)\), \(n\geq 2\), and \(\tau\in\mathbb{R}\). Suppose_ \[\left(\frac{\lambda}{|y-x|}\right)^{\tau}f\left(x+\frac{\lambda^{2}(y-x)}{|y-x |^{2}}\right)\leq f(y) \tag{2.2}\] _for \(\lambda>0\), \(x\in\partial\mathbb{R}_{+}^{n}\), \(y\in\mathbb{R}_{+}^{n}\) satisfying \(|y-x|\geq\lambda\). Then_ \[f(y)=f\left(y^{\prime},y_{n}\right)=f\left(0^{\prime},y_{n}\right),\quad\forall y =(y^{\prime},y_{n})\in\mathbb{R}_{+}^{n}.\] Proof.: For any fixed \(y^{\prime},z^{\prime}\in\mathbb{R}^{n-1}\) with \(y^{\prime}\neq z^{\prime}\) and \(y_{n}>0\), we denote \(y=(y^{\prime},y_{n})\) and \(z=(z^{\prime},z_{n})\), where \(z_{n}=\frac{b-1}{b}y_{n}\) for \(b>1\). Then we have \[x=y+b(z-y)\in\partial\mathbb{R}_{+}^{n}\] and \[z=x+\frac{\lambda^{2}(y-x)}{|y-x|^{2}},\] where \(\lambda=\sqrt{|z-x|\cdot|y-x|}\). By (2.2), we have \[\left(\frac{\lambda}{|y-x|}\right)^{\tau}f(z)\leq f(y). \tag{2.3}\] Since \[\lim_{b\rightarrow+\infty}\frac{\lambda}{|y-x|}=\lim_{|x|\rightarrow\infty} \sqrt{\frac{|z-x|}{|y-x|}}=1,\quad\lim_{b\rightarrow+\infty}z_{n}=\lim_{b \rightarrow+\infty}\frac{b-1}{b}y_{n}=y_{n}.\] and \(f\) is continuous, we have \(f(z^{\prime},y_{n})\leq f(y^{\prime},y_{n})\). By the arbitrariness of \(y^{\prime}\neq z^{\prime}\), the proof is completed. **Lemma 2.4**.: _Suppose that \(f\in C^{0}\left(\overline{\mathbb{R}_{+}^{n}}\right)\) satisfies that for all \(x\in\partial\mathbb{R}_{+}^{n}\) and \(\lambda>0\),_ \[f(y)\geq f\left(x+\frac{\lambda^{2}(y-x)}{|y-x|^{2}}\right)+\ln\frac{\lambda} {|y-x|},\quad\forall y\in\mathbb{R}_{+}^{n}\backslash B_{\lambda}(x).\] _Then_ \[f(y)=f\left(y^{\prime},y_{n}\right)=f\left(0^{\prime},y_{n}\right),\quad \forall y=(y^{\prime},y_{n})\in\mathbb{R}_{+}^{n}.\] Proof.: The proof is the same as Lemma 2.3. It suffices to replace (2.3) by \[\ln\frac{\lambda}{|y-x|}+f(z)\leq f(y).\] A Liouville theorem for the Neumman boundary value is also derived in [WZ]. **Theorem 2.5** ([WZ, Theorem 1.2]).: _Assume \(n\geq 2\) and \(\max\{-1,2-n\}<a<1\). Suppose \(u(x)\in C^{2}\left(\mathbb{R}_{+}^{n}\right)\cap C^{1}\left(\overline{\mathbb{ R}_{+}^{n}}\right)\) satisfies_ \[\begin{cases}\operatorname{div}\left(x_{n}^{a}\nabla u\right)=0,&u>0,\quad \text{ in }\mathbb{R}_{+}^{n},\\ x_{n}^{a}\frac{\partial u}{\partial x_{n}}=0&\text{ on }\partial\mathbb{R}_{+}^{n}. \end{cases} \tag{2.4}\] _Then \(u=C\) for some positive constant \(C\). The boundary condition in (2.4) holds in the following sense:_ \[\lim_{x_{n}\to 0^{+}}x_{n}^{a}\frac{\partial u}{\partial x_{n}}=0.\] ## 3. Proof of main theorems In this section, we first derive the new equation under the partial Legendre transform. Let \(\Omega\subset\mathbb{R}^{2}\) and \(u(x,y)\) be a uniformly convex function on \(\Omega\). 
The partial Legendre transform in the \(x\)-variable is \[u^{\star}(\xi,\eta)=xu_{x}(x,y)-u(x,y), \tag{3.1}\] where \[(\xi,\eta)=\mathcal{P}(x,y):=(u_{x},y)\in\mathcal{P}(\Omega):=\Omega^{\star}. \tag{3.2}\] We have \[\frac{\partial(\xi,\eta)}{\partial(x,y)}=\left(\begin{array}{cc}u_{xx}&u_{ xy}\\ 0&1\end{array}\right),\quad\text{ and }\quad\frac{\partial(x,y)}{\partial(\xi, \eta)}=\left(\begin{array}{cc}\frac{1}{u_{xx}}&-\frac{u_{xy}}{u_{xx}}\\ 0&1\end{array}\right).\] Hence, \[u_{\xi}^{\star}=x,\ \ u_{\eta}^{\star}=-u_{y}, \tag{3.3}\] \[u_{\xi\xi}^{\star}=\frac{1}{u_{xx}},\ \ u_{\eta\eta}^{\star}=-\frac{\det D ^{2}u}{u_{xx}},\ \ u_{\xi\eta}^{\star}=-\frac{u_{xy}}{u_{xx}}. \tag{3.4}\] Then if \(u\) is a solution to \[\det D^{2}u=(a+bx)^{\alpha},\] \(u^{\star}\) is a solution to \[(a+b\eta)^{\alpha}u_{\xi\xi}^{\star}+u_{\eta\eta}^{\star}=0.\] We will apply the results in Section 2 related to this equation to prove the main theorems. ### The case of Dirichlet boundary value We use Theorem 2.1 to prove Theorem 1.1. Proof of Theorem 1.1.: We consider the the partial Legendre transform \(u^{\star}\) of \(u\) on \(\mathbb{R}^{2}_{+}\). Note that \(\xi=u_{x}\), \(\eta=y\) and \(u_{x}=x\) on \(\{y=0\}\) by (1.3), which gives us that \(\mathcal{P}(\{y=0\})=\{\eta=0\}\). Hence, we have \(\mathcal{P}\left(\mathbb{R}^{2}_{+}\right)=\mathbb{R}^{2}_{+}\). Then if \(u\) is a solution to (1.3), \(u^{\star}\) is a solution to \[\left\{\begin{aligned} (a+b\eta)^{\alpha}u_{\xi\xi}^{\star}+u_{ \eta\eta}^{\star}&=0&\text{ in }\mathbb{R}\times(0,+\infty),\\ u^{\star}(\xi,0)&=\frac{\xi^{2}}{2}& \text{ on }\mathbb{R}\times\{0\},\end{aligned}\right. \tag{3.5}\] where we used the fact that the Legendre transform of \(x\mapsto\frac{1}{2}x^{2}\) is \(\xi\mapsto\frac{1}{2}\xi^{2}\). Since Legendre transform does not change the convexity, we have that \(u_{\xi\xi}^{\star}\geq 0\). Denote \(v:=u_{\xi\xi}^{\star}-1\). Differentiating (3.5) twice respect to \(\xi\), we have that \(v\geq-1\) solves \[\left\{\begin{aligned} (a+b\eta)^{\alpha}v_{\xi\xi}+v_{\eta\eta}& =0&\text{ in }\mathbb{R}\times(0,+\infty),\\ v(\xi,0)&=0&\text{ on }\mathbb{R}\times\{ \eta=0\}.\end{aligned}\right. \tag{3.6}\] Let \(\xi=x_{1}\), \(\eta=f(x_{2})=b^{\frac{-\alpha}{\alpha+2}}\left(\frac{\alpha+2}{2}x_{2} \right)^{\frac{2}{\alpha+2}}-\frac{a}{b}\) and \[\widetilde{v}(x_{1},x_{2}) =v\left(x_{1},f(x_{2})\right).\] A direct calculation yields \[\widetilde{v}_{11}=v_{\xi\xi},\] \[\widetilde{v}_{2}=b^{\frac{-\alpha}{\alpha+2}}\left(\frac{\alpha+2} {2}x_{2}\right)^{\frac{-\alpha}{\alpha+2}}v_{\eta},\] \[\widetilde{v}_{22}=-\frac{\alpha}{\alpha+2}x_{2}^{-1}\widetilde{v }_{2}+(a+b\eta)^{-\alpha}v_{\eta\eta}.\] \(\eta=0\) gives us that \(x_{2}=\frac{2}{b(\alpha+2)}a^{\frac{\alpha+2}{2}}\). Denote \(l=\frac{2}{b(\alpha+2)}a^{\frac{\alpha+2}{2}}\). 
Hence by (3.6), we know that \(\widetilde{v}\geq-1\) solves \[\begin{cases}\widetilde{v}_{11}+\widetilde{v}_{22}+\frac{\alpha}{\alpha+2}x_{ 2}^{-1}\widetilde{v}_{2}=0&\text{ in }\mathbb{R}\times(l,+\infty),\\ \widetilde{v}(x_{1},0)=0&\text{ on }\mathbb{R}\times\{x_{2}=l\},\end{cases}\] i.e., \[\begin{cases}\operatorname{div}\left(x_{2}^{\frac{\alpha}{\alpha+2}}\nabla \widetilde{v}\right)=0&\text{ in }\mathbb{R}\times(l,+\infty),\\ \widetilde{v}(x_{1},0)=0&\text{ on }\mathbb{R}\times\{x_{2}=l\}.\end{cases}\] Applying Theorem 2.1 with \(n=2\) and \(a=\frac{\alpha}{\alpha+2}<1\), we know that \(\widetilde{v}(x_{1},x_{2})=C_{*}\left(x_{2}^{\frac{2}{\alpha+2}}-l^{\frac{2}{ \alpha+2}}\right)\) for some nonnegative constant \(C_{*}\). Transforming back to \((\xi,\eta)\), we have \(v(\xi,\eta)=A\eta\) for some \(A\geq 0\), i.e., \(u_{\xi\xi}^{\star}(\xi,\eta)=1+A\eta\). Then \[u^{\star}(\xi,\eta)=h_{1}(\eta)+\xi h_{2}(\eta)+\frac{\xi^{2}}{2}(1+A\eta)\] for some functions \(h_{1},h_{2}:[0,+\infty)\to\mathbb{R}\). Recalling (3.5), we have \(h_{1}(0)=h_{2}(0)=0\) and \[h_{1}^{\prime\prime}(\eta)+\xi h_{2}^{\prime\prime}(\eta)+(1+A\eta)(a+b\eta)^ {\alpha}=0\] on \(\mathbb{R}\times(0,+\infty)\). This implies that \(h_{1}^{\prime\prime}(\eta)+(1+A\eta)(a+b\eta)^{\alpha}=0\) and \(h_{2}^{\prime\prime}(\eta)=0\). By solving the ODEs, we obtain \[u^{\star}(\xi,\eta)=\begin{cases}B\eta-\frac{(b-aA)(a+b\eta)^{2+\alpha}}{b^{ 3}(1+\alpha)(2+\alpha)}-\frac{A(a+b\eta)^{3+\alpha}}{b^{3}(2+\alpha)(3+\alpha) }+C\xi\eta\\ \quad+\frac{(b-aA)a^{2+\alpha}}{b^{3}(1+\alpha)(2+\alpha)}+\frac{ Aa^{3+\alpha}}{b^{3}(2+\alpha)(3+\alpha)}+\frac{\xi^{2}}{2}(1+A\eta),&\alpha\neq-1;\\ B\eta-\frac{b-aA}{b^{3}}(a+b\eta)\ln(a+b\eta)-\frac{A}{2}\eta^{2}+C\xi\eta\\ \quad+\frac{(b-aA)a\ln a}{b^{3}}+\frac{\xi^{2}}{2}(1+A\eta),& \alpha=-1,\end{cases}\] for some constants \(B,C\in\mathbb{R}\). Recalling that the Legendre transform is an involution on convex functions, we recover \(u\) by taking the partial Legendre transform of \(u^{\star}\) : \[u(x,y)=\begin{cases}\dfrac{(b-aA)(a+by)^{2+\alpha}}{b^{3}(1+\alpha)(2+\alpha)}+ \dfrac{A(a+by)^{3+\alpha}}{b^{3}(2+\alpha)(3+\alpha)}-By\\ \qquad-\dfrac{(b-aA)a^{2+\alpha}}{b^{3}(1+\alpha)(2+\alpha)}-\dfrac{Aa^{3+ \alpha}}{b^{3}(2+\alpha)(3+\alpha)}+\dfrac{(x-Cy)^{2}}{2(1+Ay)},\ \ \alpha\neq-1;\\ \dfrac{b-aA}{b^{3}}(a+by)\ln(a+by)+\dfrac{A}{2b}y^{2}-By\\ \qquad\qquad\qquad\qquad-\dfrac{(b-aA)a\ln a}{b^{3}}+\dfrac{(x-Cy)^{2}}{2(1+ Ay)},\qquad\alpha=-1.\end{cases}\] This gives us a complete classification of all solutions to (1.3). **Remark 3.1**.: \(\alpha>-2\) _in Theorem 1.1 is sharp since (1.3) has no convex solutions continuous up to boundary in \(\mathbb{R}^{2}_{+}\) when \(\alpha\leq-2\). Indeed, if there exists a convex function \(u\in C^{2}(\mathbb{R}^{2}_{+})\cap C(\overline{\mathbb{R}^{2}_{+}})\) solves (1.3), by [S2, Theorem 5.1], we will have a Pogorelov type estimate_ \[(1-u)u_{xx}\leq C(\max|u_{x}|)\] _in \(S_{1}\), where \(S_{h}=\{x\in\mathbb{R}^{2}_{+}:u(x)<u(0)+\nabla u(0)\cdot x+h\}\) for \(h>0\). Since \(u(x,0)=\frac{1}{2}x^{2}\)on \(\partial\mathbb{R}^{2}_{+}\), we know that \(|u_{x}|\) is bounded in \(S_{1}\) (depends on \(\|u\|_{L^{\infty}(S_{2})}\)). Then there exists a small \(c_{0}>0\) such that \(u_{xx}\leq C(\|u\|_{L^{\infty}(S_{2})})\) in \(B^{+}_{c_{0}}\). Hence, we have_ \[Cu_{yy}\geq u_{xx}u_{yy}\geq u_{xx}u_{yy}-u_{xy}^{2}=y^{\alpha}\text{ in }B^{+}_{c_{0}},\] _i.e. in \(B^{+}_{c_{0}}\). 
Then it holds_ \[u(x,y)\geq\begin{cases}\dfrac{1}{C(1+\alpha)(2+\alpha)}y^{2+\alpha}+D(x)y+E(x),&\alpha<-2,\\ -\dfrac{1}{C}\ln y+D(x)y+E(x),&\alpha=-2,\end{cases}\] _which means that \(\lim\limits_{y\to 0+}u(x,y)=+\infty\). This contradicts with \(u\in C(\overline{\mathbb{R}^{2}_{+}})\)._ ### The case of Neumann boundary value We prove Theorem 1.3 in this section. Proof of Theorem 1.3.: For \(\alpha=0\), it has been proved by Jian and Tu [JT, Theorem 1.1]. In the following, we mainly prove the case for \(\alpha>0\). Similarly as in the last section, we know that if \(u\) is a solution to (1.9), \(u^{\star}\) is a solution to \[\begin{cases}\eta^{\alpha}u^{\star}_{\xi\xi}+u^{\star}_{\eta\eta}=0&\text{ in }\mathbb{R}\times(0,+\infty),\\ u^{\star}_{\eta}(\xi,0)=0&\text{on }\mathbb{R}\times\{0\}.\end{cases} \tag{3.7}\] Denote \(v:=u^{\star}_{\xi\xi}\). Differentiating (3.7) twice respect to \(\xi\), we have that \(v>0\) solves \[\left\{\begin{aligned} \eta^{\alpha}v_{\xi\xi}+v_{\eta\eta}& =0&\text{ in }\mathbb{R}\times(0,+\infty),\\ v_{\eta}(\xi,0)&=0&\text{ on }\mathbb{R}\times \{\eta=0\}.\end{aligned}\right. \tag{3.8}\] Let \(\xi=x_{1}\), \(\eta=\left(\frac{\alpha+2}{2}\right)^{\frac{2}{\alpha+2}}x_{2}^{\frac{2}{ \alpha+2}}\) and \[\widetilde{v}(x_{1},x_{2})=v\left(x_{1},\left(\frac{\alpha+2}{2}\right)^{ \frac{2}{\alpha+2}}x_{2}^{\frac{2}{\alpha+2}}\right).\] Then (3.8) gives us that \(\widetilde{v}>0\) solves \[\left\{\begin{aligned} \widetilde{v}_{11}+\widetilde{v}_{22}+ \frac{\alpha}{\alpha+2}x_{2}^{-1}\widetilde{v}_{2}&=0& \text{ in }\mathbb{R}\times(0,+\infty),\\ x_{2}^{\frac{\alpha}{\alpha+2}}\widetilde{v}_{2}(x_{1},0)& =0&\text{ on }\mathbb{R}\times\{x_{2}=0\},\end{aligned}\right.\] i.e., \[\left\{\begin{aligned} \operatorname{div}\left(x_{2}^{\frac{ \alpha}{\alpha+2}}\nabla\widetilde{v}\right)&=0& \text{ in }\mathbb{R}\times(0,+\infty),\\ x_{2}^{\frac{\alpha}{\alpha+2}}\widetilde{v}_{2}(x_{1},0)& =0&\text{ on }\mathbb{R}\times\{x_{2}=0\}.\end{aligned}\right.\] Applying Theorem 2.5 with \(n=2\) and \(a=\frac{\alpha}{\alpha+2}\in(0,1)\), we know that \(\widetilde{v}=C\) for some positive constant \(C\). Transforming back to \((\xi,\eta)\), we have \(v(\xi,\eta)=A\) for some \(A>0\), i.e., \(u^{\star}_{\xi\xi}(\xi,\eta)=A\) for some \(A>0\). Then \[u^{\star}(\xi,\eta)=h_{1}(\eta)+\xi h_{2}(\eta)+\frac{A}{2}\xi^{2}\] for some functions \(h_{1},h_{2}:[0,+\infty)\to\mathbb{R}\). Recalling (3.7), we have \[h_{1}^{\prime}(0)=h_{2}^{\prime}(0)=0\quad\text{and}\quad h_{1}^{\prime\prime }(\eta)+\xi h_{2}^{\prime\prime}(\eta)+A\eta^{\alpha}=0\quad\text{on }\mathbb{R}\times(0,+\infty).\] This implies that \(h_{1}^{\prime\prime}(\eta)+A\eta^{\alpha}=0\) and \(h_{2}^{\prime\prime}(\eta)=0\). By solving these ODEs, we obtain \[u^{\star}(\xi,\eta)=\frac{A}{2}\xi^{2}+B\xi-\frac{A\eta^{2+\alpha}}{(1+\alpha) (2+\alpha)}+C\] for some constants \(A,C\in\mathbb{R}\). Recalling that \(u=(u^{\star})^{\star}\), we have \[u(x,y)=\frac{1}{2A}(x-B)^{2}+\frac{Ay^{2+\alpha}}{(1+\alpha)(2+\alpha)}-C,\] which yields (1.10). ### The entire space case Before proving Theorem 1.5, we first recall two theorems for (1.11) in [JX]. **Theorem 3.2** ([JX, Theorem 4.1]).: _Let \(\Omega\) be an open convex set in \(\mathbb{R}^{2}\), and \(u\) be the generalized solution of_ \[\det D^{2}u(x)=|y|^{\alpha}\quad\text{in $\Omega$,}\] _with \(u=0\) on \(\partial\Omega\). Then \(u\) is strictly convex in \(\Omega\), and \(u\in C^{1,\delta}_{\text{loc}}\left(\Omega\right)\) for some \(\delta>0\) depending only on \(\alpha\). 
Furthermore, the partial Legendre transform \(u^{\star}\) of \(u\) is a strong solution of_ \[|\eta|^{\alpha}u^{\star}_{\xi\xi}+u^{\star}_{\eta\eta}=0\quad\text{in $\mathcal{P}( \Omega)$},\] _where the map \(\mathcal{P}\) is given in (3.2)._ **Theorem 3.3** ([JX, Theorem 4.2]).: _Let \(u\) be a generalized solution of_ \[\det D^{2}u(x)=|y|^{\alpha}\quad\text{in $\mathbb{R}^{2}$}.\] _Then \(u\) is strictly convex._ Hence Theorem 3.2 and Theorem 3.3 give us that \(u\) is strictly convex, and then \(u\) is smooth away from \(\left\{y=0\right\}\). Furthermore, we know that \(u\in C^{1,\delta}_{\text{loc}}\left(\mathbb{R}^{2}\right)\) and the partial Legendre transform \(u^{\star}\) of \(u\) is a strong solution of (3.10). Next, we need a Liouville theorem for degenerate elliptic equations in divergence form. This theorem is a partial extension of [WZ, Corollary 1.4], where they assumed stronger conditions. **Theorem 3.4**.: _Assume that \(n=2\) and \(a\geq 0\). Then any positive \(C^{1}(\mathbb{R}^{2})\) solution to_ \[\operatorname{div}\left(|x_{2}|^{a}\nabla u\right)=0\quad\text{in $\mathbb{R}^{2}$} \tag{3.9}\] _is a constant function._ Proof.: Note that \(u\) only belongs to \(C^{1}(\mathbb{R}^{2})\). We need to prove this theorem in the weak sense. Hence we can repeat the same process as in the proof of Theorem 2.1. Due to its similarity, we omit the details here. Now, we are ready to give the proof of Theorem 1.5. Proof of Theorem 1.5.: Our proof only works for the case \(\alpha\geq 0\). We consider the partial Legendre transform \(u^{\star}\) of \(u\). \(u^{\star}\) is a solution to \[|\eta|^{\alpha}u^{\star}_{\xi\xi}+u^{\star}_{\eta\eta}=0\quad\text{in $\mathbb{R}^{2}$}. \tag{3.10}\] Let \(v:=u^{\star}_{\xi\xi}\geq 0\). Differentiating (3.10) twice respect to \(\xi\), we have that \(v\geq 0\) solves \[|\eta|^{\alpha}v_{\xi\xi}+v_{\eta\eta}=0\quad\text{in $\mathbb{R}^{2}$}. \tag{3.11}\] By a change of variables, we let \[\widetilde{v}(x_{1},x_{2})=\begin{cases}v\left(x_{1},\left(\frac{\alpha+2}{2} \right)^{\frac{2}{\alpha+2}}x_{2}^{\frac{2}{\alpha+2}}\right),&\eta\geq 0,\\ v\left(x_{1},-\left(\frac{\alpha+2}{2}\right)^{\frac{2}{\alpha+2}}(-x_{2})^{ \frac{2}{\alpha+2}}\right),&\eta<0.\end{cases}\] A direct calculation yields \[\widetilde{v}_{11}=v_{\xi\xi},\] \[\widetilde{v}_{22}=-\frac{\alpha}{\alpha+2}x_{2}^{-1}\widetilde{ v}_{2}+|\eta|^{-\alpha}v_{\eta\eta}.\] By (3.11), we know that \(\widetilde{v}\geq 0\) solves \[\widetilde{v}_{11}+\widetilde{v}_{22}+\frac{\alpha}{\alpha+2}x_{2}^{-1} \widetilde{v}_{2}=0\quad\text{in }\mathbb{R}^{2},\] i.e., \[\operatorname{div}\left(|x_{2}|^{\frac{\alpha}{\alpha+2}}\nabla\widetilde{v} \right)=0\quad\text{in }\mathbb{R}^{2}.\] Hence, by Theorem 3.4 with \(a=\frac{\alpha}{\alpha+2}\geq 0\) in (3.9), we obtain \(\widetilde{v}\equiv\text{constant}\). Thus \(u_{\xi\xi}^{\star}\equiv A\), where \(A\) is a constant. Similar to the proofs of Theorem 1.1 and Theorem 1.3, by solving these ODEs, we have \[u^{\star}(\xi,\eta)=\frac{A}{2}\xi^{2}-\frac{A}{(1+\alpha)(2+\alpha)}|\eta|^{2 +\alpha}+B\xi\eta+l(\xi,\eta).\] Again by \(u=(u^{\star})^{\star}\), we have (1.12).
2306.03714
DashQL -- Complete Analysis Workflows with SQL
We present DashQL, a language that describes complete analysis workflows in self-contained scripts. DashQL combines SQL, the grammar of relational database systems, with a grammar of graphics in a grammar of analytics. It supports preparing and visualizing arbitrarily complex SQL statements in a single coherent language. The proximity to SQL facilitates holistic optimizations of analysis workflows covering data input, encoding, transformations, and visualizations. These optimizations use model and query metadata for visualization-driven aggregation, remote predicate pushdown, and adaptive materialization. We introduce the DashQL language as an extension of SQL and describe the efficient and interactive processing of text-based analysis workflows.
André Kohn, Dominik Moritz, Thomas Neumann
2023-06-06T14:23:06Z
http://arxiv.org/abs/2306.03714v2
# DashQL - Complete Analysis Workflows with SQL

###### Abstract

We present DashQL, a language that describes complete analysis workflows in self-contained scripts. DashQL combines SQL, the grammar of relational database systems, with a grammar of graphics in a grammar of analytics. It supports preparing and visualizing arbitrarily complex SQL statements in a single coherent language. The proximity to SQL facilitates holistic optimizations of analysis workflows covering data input, encoding, transformations, and visualizations. These optimizations use model and query metadata for visualization-driven aggregation, remote predicate pushdown, and adaptive materialization. We introduce the DashQL language as an extension of SQL and describe the efficient and interactive processing of text-based analysis workflows.

Keywords: Information visualization, systems, declarative specification

## 1 Introduction

Interactive Data Analysis has evolved as an umbrella term for diverse research around approachable, inspirational, explanatory, and efficient data processing. Decades of prior work in these areas have assembled a comprehensive toolbox, guiding users on their paths towards valuable data insights. A common principle among these tools has been the unification of graphics and database interactions, creating a gap towards database query languages like SQL. Pioneering systems like Polaris (Tableau), for example, shield users from database specifics by pairing a graphic taxonomy with their own table algebra [30]. This table algebra is lowered to SQL transparently, which abstracts from subtle differences between SQL dialects and allows supporting various database systems through a single interface. In the meantime, however, SQL has become the de-facto standard for data transforms at all scales, ranging from embedded systems (e.g., DuckDB [23], SQLite [11]) to large data warehouses (e.g., Snowflake [9], F1 [24], Procella [6], Presto [28], Redshift [10], Azure Synapse [1], CockroachDB [31], Hive [4]). Today's database abstractions should therefore match SQL in its expressivity or otherwise turn into an explicit translation layer that data analysts might have to work around. This is particularly pronounced for advanced SQL functionality such as nested subqueries, non-inner joins, window aggregates, or grouping sets that are often omitted as early victims during generalization. A lack of these features can render today's tools insufficient for analysts who use SQL as their mental model for database interactions.

Additionally, database abstractions prevent holistic optimizations of analysis workflows. The optimization of SQL queries is a well-studied problem but is usually unaware of how data is ingested and how the results are consumed [3]. Data analysts therefore propagate information back into the database, for example, by optimizing requests for a following visualization. This is not only error-prone but exposes relational optimizations to the user. Database abstractions also obfuscate capabilities of the underlying database. It is not uncommon in today's analysis tools to prepare and cache volatile data to optimize the repeated and interactive query evaluation during exploration [33]. However, this turns out to be a pitfall as it nullifies common database optimizations like projection and selection pushdown. Structured file formats like Parquet allow reading data partially based on the query columns and filters.
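For instance, a query-driven read of a remote Parquet file might look like the following sketch; it assumes DuckDB's parquet_scan table function and its HTTP filesystem (both discussed later in this paper), and the URL and column names are illustrative:

```sql
-- Only the projected columns (site, views, ts) and the row groups whose
-- statistics satisfy the predicate have to be fetched from the remote file.
SELECT site, sum(views) AS total_views
FROM parquet_scan('https://example.org/activity.parquet')
WHERE ts >= TIMESTAMP '2022-01-01'
GROUP BY site;
```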
A tool that lacks these query-driven optimizations might therefore be _slower_ if it loads _unnecessary_ data for a workflow.

We expand the vision of Wu et al. [38] and propose a language for a Data Visualization Management System (DVMS) that embeds data retrieval, loading, and visualization into SQL. We call this SQL dialect DashQL and explain how a single coherent language model can drive interactive analysis workflows. Figure 1 shows the first example of a DashQL script that visualizes grouped timeseries data using an area chart and table. In the figure, the input script on the left is translated to a graph of tasks that drives the parallel evaluation of statements. The right side of the figure hints at the visual output of the script as an interactive dashboard including an input field at the top of the screen, followed by the two visualizations.

The contribution of this paper is twofold. We first introduce the language grammar and statement semantics in Section 2 and outline how a SQL dialect facilitates interactive exploration, scalable dashboards, and workflow development. We then describe the efficient evaluation of DashQL workflows in Section 3 and present holistic optimizations that use metadata for remote predicate pushdown and adaptive materialization. Section 3 also introduces AM4, an optimization in DashQL that accelerates visualizations of time series data. We demonstrate DashQL examples throughout the paper and author an interactive analysis workflow step-by-step in Section 4. We measure the performance of the holistic optimization AM4 in Section 5. We close with a discussion of related work in Section 6 and a summary of the paper in Section 7.

## 2 Grammar of Analytics

DashQL unifies the predominant grammar of relational [8] database systems, SQL, with a grammar of graphics [35] into a grammar of analytics. This section introduces the DashQL language and its role in an analytics system. We first list the grammar rules of DashQL and describe the semantics of every new statement. Afterward, we present three advantages of driving analysis workflows with a coherent analysis language.

### _SQL Extension_

DashQL introduces the five statements SET, INPUT, FETCH, LOAD, and VISUALIZE to the SQL language. Together, they extend SQL just enough to specify where data is located, how it can be loaded, and how it can be visualized for users. This allows DashQL to describe complete analysis workflows in self-contained scripts, while preserving the expressiveness of arbitrary SQL queries. The grammar rules of all statements are shown in Figure 2 and are outlined in the following.

SET is a utility statement that defines global script properties as individual key-value pairs. This allows modifying script evaluation settings or providing script metadata such as titles, descriptions, or versions.

INPUT declares values that are provided to the script at runtime. For example, an INPUT statement with identifier \(\mathbf{x}\) and value type FILE presents an input control to users that opens a file picker dialog when clicked. The provided file is then exposed to the remainder of the script through the identifier \(\mathbf{x}\). INPUT may further be followed by an explicit component type and configuration options, matching additional settings like default values.

FETCH accompanies INPUT as the second statement that declares raw data for DashQL scripts. In its simplest form, the rule FETCH name FROM uri specifies a raw data source as a single URI.
The value "https://a/b.parquet", for example, declares a remote file that will be loaded using HTTPS. If a simple URI is not sufficient, the statement may alternatively be written with the explicit keyword HTTPS followed by fine-granular settings such as the method type or request headers. Similarly, the FETCH statement allows supporting additional source types such as AWS S3, following the same syntax.

The fourth statement LOAD defines how raw data can be loaded into the database. We deliberately separate the fetching of opaque data and the extraction of relations since the boundary between these two can be fuzzy and depends on the capabilities of the underlying database. DuckDB, for example, can partially scan remote Parquet files over HTTPS using a dedicated table function and a virtual file system abstraction. This collapses both statements into the following SQL statements, emphasizing the decoupled nature of declarations in a script and their efficient execution. Other databases without these capabilities might need to execute either one or both tasks explicitly upfront. Similar to FETCH, the LOAD statement can also be extended with additional keywords to introduce new data formats to the language.

The last statement (VISUALIZE) displays data using charts and tables. Creating visualizations is an iterative process that benefits from short round-trip times between ideas and their realizations. DashQL offers approachable and fast exploration by combining a simple and short syntax with a fallback to a full grammar of graphics for refinements. After all, visualizations are created in tandem with SQL statements which already provide useful information such as the attribute order of SQL projections or data types. For example, users might want to display timeseries data with a time attribute \(\mathbf{t}\) and a value attribute \(\mathbf{v}\) backed by a SQL query such as CREATE TABLE a AS SELECT t, \(\mathbf{v}\). Creating the first visualization for this table can be as simple as VISUALIZE a USING LINE. Without further information, DashQL assumes that \(\mathbf{t}\) and \(\mathbf{v}\) were deliberately provided as first and second attributes referring to \(\mathbf{x}\) and \(\mathbf{y}\) values of the line chart.
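Putting these statements together, a complete minimal script could look like the following sketch. The exact surface syntax of FETCH, LOAD, and VISUALIZE is defined by the grammar in Figure 2, which is not reproduced here, so the LOAD clause below is an assumption rather than verbatim DashQL; the URL and column names are illustrative.

```sql
FETCH weather FROM 'https://example.org/weather.parquet';   -- declare the raw data source
LOAD weather_data FROM weather USING PARQUET;                -- assumed LOAD syntax: interpret the bytes as Parquet
CREATE TABLE a AS SELECT t, v FROM weather_data ORDER BY t;  -- plain SQL prepares the data
VISUALIZE a USING LINE;                                      -- short visualization syntax as described above
```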
Alternatively, SQL column aliases can be used in anticipation of ambiguities to name \(\mathbf{x}\) and \(\mathbf{y}\) explicitly, as in SELECT v AS y, t AS x.

Driving complete analysis workflows with a single coherent script brings three advantages: interactive exploration, scalable dashboards, and collaborative workflow development. Figure 3 illustrates these features based on a single visualization statement that displays the results of a join as a table.

#### 2.2.1 Interactive Exploration

DashQL demystifies system internals by replacing a multitude of configuration knobs with guided textual editing. DashQL strikes a balance between flexibility and intuition by providing short and long versions of the different grammar rules. This flexibility allows users to start the exploration with short statements and later refine the workflow by manually adjusting inferred properties. This simplifies the exploration as the syntactical differences between statements stay small. The example in Figure 3 expresses the intent to display data as a table by writing VISUALIZE - USING TABLE. The short syntax allows altering the visualization quickly. For example, a user can replace the keyword TABLE with AREA CHART to change the visualization type and add the keyword STACKED to group areas based on an additional attribute. Once the correct chart type is found, a user can adjust fine-granular configuration options such as colors and labels through explicit Vega-Lite settings. These rewrites can either be done manually by changing the script text or by modifying a previously rendered chart. The result is an interactive loop where partially evaluated DashQL workflows guide users through subsequent refinements.

Additionally, the analysis workflows are interactive themselves through the INPUT statement. These statements parameterize workflows explicitly by exposing variables to viewers. This allows embedding arbitrarily complex SQL queries into the workflow and steering them through input controls.
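A minimal sketch of such a parameterized workflow is shown below. The INPUT syntax and the schema name main used to qualify the input variable are assumptions based on this paper's description (the walkthrough in Section 4 mentions referencing inputs by qualifying their names with the default schema); the relation and columns are illustrative.

```sql
INPUT website TYPE VARCHAR;  -- assumed syntax: renders a text field for viewers

-- Arbitrary SQL can reference the input variable; NULL means "no filter".
CREATE TABLE filtered AS
SELECT date_trunc('day', ts) AS x, sum(views) AS y
FROM activity_data
WHERE main.website IS NULL OR site = main.website
GROUP BY 1;

VISUALIZE filtered USING AREA CHART;
```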
A popular alternative to INPUT statements is to derive raw SQL text from input values through text interpolation. This is flexible, but complicates the semantic analysis of workflows as it gives up crucial information about statement dependencies, types, and the exact usage of parameters. It also requires a preprocessing step to generate the actual script text which does not align well with the continuous and iterative re-evaluation of workflows.

DashQL distinguishes between the analysts authoring workflows and the viewers that consume the workflow's output. Viewers are not exposed to the language but instead only see the results from statements as opaque analysis dashboards. Authors, in contrast, see the language and visual output side-by-side and benefit from semantic information in the script editor. Interactive exploration in DashQL is therefore emphasized differently for these two user groups as authors benefit from frictionless feedback loops for textual changes while viewers require efficient re-evaluations after changing input values.

#### 2.2.2 Scalable Dashboards

DashQL simplifies the sharing of analysis dashboards. A workflow is a single self-contained script text and can be treated as such for the distribution to multiple users. This decouples the workflow description from the evaluating analysis tool, similarly to SQL being the common denominator between relational database systems. Sharing a data analysis workflow is _cheap_ since there is no dependency on specific service resources except for the workflow's input data. The price for serving this data is often _lower_ than maintaining computing resources for traditional server-based analytics tools, more so with scalable cloud storage services and large content delivery networks.

We introduce the language DashQL alongside a reference implementation that is powered by DuckDB-Wasm, an efficient WebAssembly version of the analytical database DuckDB [23] for the web. It evaluates entire analytical workflows ad-hoc in the browser, presenting a cost-efficient and interactive solution without dedicated analytics servers. The lack of a dedicated server increases the horizontal scalability of the system at the cost of higher bandwidth requirements for the viewers. According to Vogelsgesang et al., shared analysis workflows on smaller datasets are not uncommon today [34]. They state that only approximately 600 out of 62 thousand workbooks uploaded to the service Tableau Public contain more than a million tuples. All other workflows fall into the range of browser-manageable data sizes, eliminating the need for dedicated computing resources in the cloud.

DashQL also supports workflows that process larger datasets but reduce the data size quickly based on user input. For example, if a workflow processes event data of a logging service, the entire dataset for all users might easily exceed petabytes of records. But if the workflow itself analyzes events of a specific user over a fixed period, the datasets can get sufficiently small. Section 3.6 introduces holistic optimizations that reduce the amount of loaded data based on SQL queries in the workflow. Yet, the language DashQL is not limited to small datasets. It instead offers an opportunity to dynamically combine client and server-side implementations to optimize for scalability and interactivity wherever possible and fall back to traditional server-side processing when needed.

#### 2.2.3 Collaborative Development

DashQL also simplifies collaborative development of analysis workflows.
Text-based version control systems like _Git_ dominate the distributed software development today. Since DashQL workflows are self-contained scripts, they can be developed as part of a versioned development process. Users can fork DashQL workflows and contribute changes back through simple textual updates. This process is facilitated by the concise grammar of SQL that keeps the textual differences small. Figure 3 demonstrates this versioning by adding a grouping clause and a changed chart type in the example statement. The patch tracks the new grouping by the country attribute as well as the visualization as stacked bar chart in the same script. DashQL workflows can therefore be created, updated, forked and discussed in environments that have already proven their effectiveness in collaborative development. Fig. 3: DashQL scripts as a driver for data analysis workflows. AST nodes store the location in the input text, the node type, the attribute key, the index of the parent node, and either a raw value or a span of children nodes. Script-based analysis workflows allow for interactive exploration, scalable dashboards, and a collaborative workflow development. ## 3 Implementation In this section, we outline the implementation of a DashQL powered analysis tool. We first describe the efficient AST encoding that we use as textual language model for analysis workflows. This model allows the runtime to update only the parts of the execution state that have changed instead of full re-evaluations. We then introduce the concept of tasks and show how adaptive task graphs can be maintained using fast difference computations. We discuss the extensibility of DashQL and the use of query metadata to simplify declarative visualizations for fast exploration. And finally, we present two examples of holistic optimizations that are accelerating the coupled workflow components. ### _AST Format_ DashQL translates many user interactions into modifications of the associated script text. This positions the underlying text model as fundamental component of the entire system. Our implementation is therefore built around a fast syntactical analysis, backed by an efficient representation of the abstract syntax tree (AST). The parser extends the SQL grammar rules of PostgreSQL and allocates compact AST nodes into a single, bump-allocated memory buffer. This accelerates parsing and increases the cache efficiency of any following operations such as tree traversals. An AST node is exactly 20 B large and stores the location in the input text, the node type, the attribute key and the index of the parent node. It also stores either a raw integer value or a span of children nodes in the same buffer. The text location associates each node with the substring matched by its grammar rule, enabling partial rewrites of individual statements. The AST further acts as an auxiliary data structure and references string literals in the original script text instead of copying them. Children of an AST node are further stored in sorted order based on the attribute key, accelerating key lookups and recursive comparisons. Figure 3 illustrates the AST encoding of an example statement that visualizes an inline SQL query joining two base relations. The AST presents two nodes of type REL_NAME that match the table names A and B. Nodes are created by the parser following a post-order traversal of matched grammar rules which emits children before their parents. 
This eliminates additional serialization steps since nodes can be written to the buffer while parsing. In the example, the relation names are matched as table references and then form the two children of a from clause. The output of the syntactic analysis is a program description representing statements as offsets of root nodes in this AST buffer. This representation is not only cache efficient, but also simplifies the crossing of system boundaries as the consecutive memory buffer and fixed-size nodes simplify the communication between system components and languages.

### _From AST to Task_

The actionable units of our system are called tasks. Tasks are derived from statements and form a graph based on the statement dependencies. A query statement that references a load statement, for example, translates into a query task that scans the output of a load task. These tasks are partially ordered and evaluate the entire script starting with data ingestion and ending with the visualization of derived tables. New tasks are derived on every user interaction based on the difference between the AST and its predecessor. This includes tasks to undo the effects of deleted statements, update the effects of modified statements, and add the effects of new statements. For example, if a SQL statement that created a table is deleted from the script, the system derives a task to undo the effects by dropping the table. This mechanism is more abstract than the traditional transaction isolation of database systems as all tasks maintain a single workflow state that is updated with respect to changes in the script text and the user input. VISUALIZE statements, for example, compile Vega-Lite specifications only once and delete or update the specification only when the statement changes. FETCH statements that download data using HTTP will further cache the data until a script change invalidates the output. The task graph drives the execution of an analysis workflow and serves as an anchor for any operations on derived state.

### _Adaptive Task Graphs_

The task graph of DashQL is adaptive as it reflects all continuous changes in the script and the user input. We implement a variant of the _Patience Diff_ algorithm used in the version control systems _GNU Bazaar_ and _Git_. The algorithm derives task updates from the difference between two scripts and works as follows: We first determine all unique statement mappings between a script and its predecessor. Two statements are compared based on their ASTs instead of texts for whitespace insensitivity and support for incremental changes. The similarity can be quantified by counting equal AST nodes in two simultaneous DFS traversals and weighting them by the distance to the AST root. The tree traversals profit from the compact and cache efficient encoding of nodes into a single AST buffer. Next, we compute the longest common subsequence among the mapped statements and use them as anchors for the remaining assignments. The remainder is then iterated in sequence and assigned to the most similar matches that haven't been assigned yet. This identifies new and deleted statements and emits a similarity score for the rest.

Fig. 4: Example of a task graph that is derived from a previous task graph and an AST-based script difference. The two scripts visualize grouped timeseries data and differ in a deleted statement and the grouping granularity. The AST colors equal statements in green, changes in blue and deletions in orange.
Afterward, we determine the _applicability_ of all previous tasks. A task is _applicable_ if it was derived from a statement that stayed the same, does not _transitively_ depend on an _inapplicable_ task, and is not followed by an _inapplicable_ task that successfully modified the own output. The _applicability_ can be determined through a single DFS traversal with backwards propagation when encountering an _inapplicable_ task. _Applicable_ tasks and their state are migrated and marked as completed while the effects of all other tasks are updated or undone. Figure 4 illustrates the entire process with an example of two scripts that analyze site activity data stored in AWS S3. The first script starts with an INPUT statement that receives a time interval for the analysis. It then downloads the data from an AWS S3 bucket using a FETCH statement and inserts the data into the database as Parquet file using LOAD. The statements are followed by a traditional SQL query to filter the site activity in the input interval and compute aggregates grouped by days. The final two statements visualize the result of this query as a table and a stacked bar chart. The second script is almost identical to the first one except that the data is now grouped per hour instead of days and is no longer visualized as table. AST buffers of both scripts are shown below with the node color indicating the statement differences. The first three statements and the last are equal and therefore don't need to change. The query statement differs in the string literal that is passed to the function date.trunc and is marked as updated. The first visualization statement is no longer present in the new script and is marked as deleted. The figure also contains a task graph derived for the previous script. It shows one task for every statement and a checkmark indicating that all of them were successfully executed. This task graph is then combined with the computed statement mappings to derive a new set of tasks reflecting the changes between the scripts. The table visualization was deleted, emitting a task called DROP VIZ to remove the table. The query statement was updated and results in the task DROP TABLE to undo the effects of the SQL query. However, this effect propagates since both visualizations depend on the table data. We therefore also undo the effects of the second visualization and recreate it after executing the updated SQL statement. The remaining tasks that fetch and load the remote data into the database and receive the input from the user are migrated and marked as completed. This example demonstrates differences with the traditional script execution in relational database systems. DashQL is defining entire analysis workflows, including external data, visualizations, and interactions with a user. Scripts are therefore not evaluated independently but in the context of a preceding execution, rewarding awareness of existing state. ### Complementing Vega-Lite Vega-Lite offers a grammar to describe an expressive range of charts in declarative JSON specifications. The VISUALIZE statement of DashQL supports Vega-Lite specifications as nested key-value pair lists in SQL. VISUALIZE does not need to embed its own grammar of graphics and users already familiar with Vega-Lite don't have to learn a new language. Vega-Lite specifications are self-contained and describe visualizations without the context of an existing data model. 
In DashQL, visualizations are always backed by SQL queries which offer an opportunity to auto-complete parts of a specification. This reduces the pressure on Vega-Lite and pushes costly data introspection into the database system. Examples of this are encoding types and scale domains. We know the data types of all involved attributes based on the SQL metadata which enables robust defaults, for example, when selecting between _quantitative_, _ordinal_, and _nominal_ encoding types. Additionally, we can determine a value domain or range efficiently upfront using SQL queries. DashQL further provides simplified VISUALIZE statements that can be written in tandem with SQL queries. This follows the observation that explicit defaults can guide the writing of SQL queries with respect to a subsequent visualization. For example, users can express a preferred field assignment through attribute aliases. A projection like SELECT time AS x, hits AS y, site AS color already assigns the fields x, y, and color, and DashQL lowers such an abbreviated statement to a full specification whenever it is needed.

### _Language Extensions_

The syntax of the DashQL statements INPUT, FETCH, LOAD and VISUALIZE ends with optional settings provided as key-value pair lists. This offers a mechanism to extend DashQL without modifying the grammar rules or the language model. The settings translate to a generic dictionary that is passed to the derived tasks. Custom task implementations can read this dictionary and enable extensions based on available keys. Our reference implementation, for example, extends the loading of JSON data through JMESPath expressions. By default, our embedded database DuckDB-Wasm can load a table from a JSON document in two formats: either in row-major format as a top-level array of objects where each object contains all attributes of the relation, or in column-major format as a top-level object with members storing column arrays. If a JSON document is not in either of those formats, it has to be transformed first. The JSON task therefore checks for the key "jmespath" in the settings. If it is present, the task evaluates the expression on the input data first before loading it into the database. Figure 6 lists an example DashQL script that loads two relations from a single JSON document that was returned from a remote HTTP API. The document stores population data of Oklahoma. City populations are stored as a single object with city names as properties, whereas county populations are provided as an array of objects. The first expression emits an object with the field _city_ storing an array of city names and the field _pop_ holding an array of population values. The second expression returns the county object array with changed attribute names. This example demonstrates the extensibility of the DashQL language through custom task implementations that can be configured through dynamic configuration options.

Fig. 6: Two load statements that extract two relations from a single JSON document using JMESPath expressions. Both expressions extract populations in Oklahoma. The first expression emits the city data in column-major format, the second expression returns county data in row-major format.

### _Holistic Optimization_

Data transformations can be expensive which makes their optimization indispensable for every data analysis workflow. Query optimizers are therefore a vital component of every data processing system today and have a significant impact on overall execution times. Research around query optimization is profound and has been expanding for decades. Yet, databases are universal and face the difficult task of accelerating specific queries without losing generality. As a result, database systems rarely include external information during planning, leaving these non-trivial problems to the applications.
DashQL unifies the data retrieval, transformations, and visualizations in the same language which presents an opportunity for holistic optimizations.

#### 3.6.1 Visualization-Driven Aggregation

The first example for holistic optimization is the automatic aggregation of SQL results for VISUALIZE statements. Jugel et al. introduced the value-preserving aggregation M4 [15] to accelerate the visualization of time series data. M4 follows the observation that the amount of rendered data points in line charts can be limited by the number of visible pixels on the screen. Instead of visualizing every single tuple of a time series, we can select a subset of the tuples based on the chart dimensions. The authors group values by time bins and compute the four name-giving aggregates min(x), max(x), min(y), and max(y) per bin. The associated points span a bounding box around all tuples in a bin that intersects any pixels that should be colored for the line chart. With DashQL, introducing M4 becomes an optimization that propagates the visualization context towards the backing SQL query. Figure 7 lists a value-preserving time series aggregation that is equivalent to M4. The query scans the relation user_data and computes the four aggregates grouped by a bin key. Afterward, the query resolves the corresponding x- and y-values of the aggregates by joining the aggregates again with the input data. A tuple in the input qualifies in that join if there exists an aggregate with the same key and either x or y equals an extreme value. The query does not rely on any specific aggregation functions which makes it compatible with a wide variety of database systems.

Fig. 7: M4, a query for value-preserving time series aggregation, described by Jugel et al. [15]. This version uses a CTE instead of a subquery with equal semantics.

Yet, the original version of M4 introduces a subtle but important assumption. It scans the input relation twice and joins the extreme values to reconstruct the corresponding input tuples. This assumes that the extreme values are unique as the join might otherwise emit duplicates. For example, a constant function like \(f(x)=42\) will resolve 42 as minimum and maximum y value of every group. The following join will then emit the entire input relation since all tuples contain the same value for \(\mathbf{y}\). To support non-unique \(\mathbf{y}\)-values, we therefore also have to make the output distinct on \(\mathbf{k}\), \(\mathbf{x}\), and \(\mathbf{y}\). M4 therefore consists of a repeated scan, a join and two aggregations or otherwise has to fall back to significantly slower window functions.

We propose an alternative version of M4, called AM4, shown in Figure 8. It uses the aggregation functions arg_min and arg_max, sometimes implemented as min_by and max_by, that are provided by several databases today (e.g., by ClickHouse, DuckDB and Presto). The function arg_min(a, b) selects an arbitrary attribute for a where b is minimal and can be computed alongside a min(b) aggregate at negligible costs.

Fig. 8: AM4, a more efficient version of M4 that provides value-preserving time series aggregation using a single scan and the aggregation functions arg_min and arg_max.
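As an illustration of this single-scan idea, the following sketch is in the spirit of Figure 8, which is not reproduced here; the relation user_data, its columns x and y, and the bin expression round(x / 10) are placeholders for the chart-driven pixel binning.

```sql
-- One grouped scan suffices: min/max of x and y per bin, plus arg_min/arg_max
-- to recover the attribute value that is paired with each extremum.
SELECT round(x / 10) AS k,
       min(x) AS min_x, arg_min(y, x) AS y_at_min_x,
       max(x) AS max_x, arg_max(y, x) AS y_at_max_x,
       min(y) AS min_y, arg_min(x, y) AS x_at_min_y,
       max(y) AS max_y, arg_max(x, y) AS x_at_max_y
FROM user_data
GROUP BY round(x / 10);
```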
We extend M4 by additionally computing the aggregates arg_min(y, x), arg_max(y, x), arg_min(x, y), and arg_max(x, y). This resolves existing points associated with the extreme values in a single efficient grouping, eliminating the second scan and the distinct aggregation.

#### 3.6.2 Adaptive Materialization

A second example for holistic optimization is called Adaptive Materialization. DashQL statements like FETCH and LOAD only _declare_ data sources and formats. It is left to the optimizer to decide at runtime if the file contents should be materialized as a table upfront or if the data should be loaded lazily as part of a following SQL query. This decision depends not only on a single query but on the entire script context as multiple statements might refer to the same data. If the file format allows it, DashQL can further use projection and predicate pushdown of databases to only fetch relevant parts of a file based on the specific query. Predicate pushdown is a common optimization technique in databases and describes the evaluation of predicates as far down in the query plan as possible. The direction _down_ refers to the widespread representation of relational algebra where relations form _leaves_ of a tree that are combined using joins. When optimizing relational algebra, a common task is to push individual predicates towards these _leaves_ to reduce the cardinality of a relation as early as possible. If such a predicate is evaluated right after scanning file formats like Parquet, the database can evaluate the predicates on file statistics and skip reading entire row groups. The database DuckDB, for example, supports reading remote Parquet files partially using an HTTP filesystem and skips row groups based on predicates in the table function parquet_scan. With DuckDB, DashQL fetches and loads the Parquet files as part of the following SQL queries if the data is not consumed by multiple statements. Formats like CSV, on the other hand, require downloading and parsing the entire file, independent of subsequent filters. In these cases, DashQL materializes the CSV contents once and shares the table with all following statements. The decision to materialize data therefore depends on the data source, the data format, all queries in the script, and the capabilities of the underlying database. We call this technique Adaptive Materialization and see it as an opportunity to replace traditional caching logic with query-driven optimization passes.

## 4 Example Data Exploration

We demonstrate data exploration with DashQL by constructing an example analysis workflow. The example analyzes a dataset with website activity data and builds a dashboard to view daily total page views for individual websites. We describe the textual changes to the script in every step and how they affect the reevaluation of the derived task graph. The script text and the associated output of the tool are shown in Figure 9.

**Data Input.** Our exploration begins with a declaration of the workflow's input data. The first script is labeled with (1) and consists of three DashQL statements. A FETCH statement declares that a file with name data can be retrieved using HTTP, a LOAD statement interprets this data as a Parquet file, and a VISUALIZE statement (highlighted in the figure) displays the file contents. The figure also presents the output of these first statements, which visualize the unaggregated site activity data using a single table. This table is virtualized, which means that only visible rows are rendered.
In SQL, this virtualization translates to LIMIT and OFFSET clauses to only query the relevant subset of the data. With a coherent language model, we can propagate the LIMIT and OFFSET specifiers towards the data retrieval during an optimization pass. As a result, this first step only reads the file metadata and the first bytes of the Parquet file using HTTP range requests. When the user scrolls through the data, the table dynamically reads the following tuples by adjusting both specifiers. The internal WebAssembly database also uses an accelerating readahead buffer for the remote file to minimize the number of round trips to the remote server. This reduces the latency before users see a first visualization and provides a graceful fallback to large reads when the data is being requested.

Figure 9: Authoring an example analysis workflow with DashQL. The workflow explores website activity data in four steps. The steps are labeled with (1) to (4).

**Aggregate Views.** Next, we want to aggregate the site activity to inspect the hourly sum of page views. We modify the script as shown in (2) and add an explicit SQL statement that groups the site activity data as well as an additional VISUALIZE statement in green to display the aggregates using an area chart. During reevaluation, the former workflow state is left untouched since the previous statements were neither modified nor invalidated. The new query statement, however, needs to scan the attributes timestamp and views of all tuples in the Parquet file to compute the new aggregates. The additional visualization statement waits for the grouping to complete and then displays an area chart. This demonstrates the generation of Vega-Lite specifications as outlined in Section 3.4 since the tool automatically selects the time and sum attributes for the x- and y-values and identifies temporal and quantitative axes.

**Filter Website.** The next step makes the analysis dashboard interactive. Instead of showing the total page views across all websites, we want to filter the activity data by a website name that is provided dynamically by the user. For this, (3) introduces an INPUT statement colored in orange and includes a filter predicate in the SQL statement. The new input with name website is of type VARCHAR and displays a text field on top of the previous area chart. The added filter predicate checks if the website is either NULL or if the website attribute of the tuple equals the website variable in the script. By default, the input value will be NULL which means that the dashboard will show the total page views until a website name is entered. During reevaluation, the Patience Diff algorithm identifies the additional WHERE clause in the query statement and marks it as updated. The system therefore drops and recreates the grouped activity table as well as the area chart that consumes its data. The query now filters the attribute website, which means that an additional column needs to be fetched from the remote Parquet file. This input statement shows the capability of DashQL to parameterize any SQL statement without explicit text instantiation. The AST allows us to reference the input variable by qualifying its name with the default schema.

**Polish Aesthetics.** The last step polishes the aesthetics of the generated analysis dashboard. The short syntax of DashQL offers a frictionless visualization of arbitrary SQL statements but may be insufficiently generic for a final workflow output.
For example, the former area chart visualization falls back to the SQL attribute names for axis labels and default colors for the covered area. As described in Section 3.4, DashQL internally lowers the short syntax to verbose specifications. To adjust fine-granular settings, DashQL can therefore rewrite existing statements and specify all lowered options explicitly. (4) demonstrates this by replacing the single area chart visualization with explicit settings after interacting with the previously rendered chart. It uses the verbose specification to adjust the title, the axis labels, the tick count, and the area opacity in the workflow script.

This example demonstrates that DashQL allows for a progressive construction of analysis workflows. The interplay between textual adjustments and continuous visualizations provides short feedback loops during the data exploration. Propagating limit and offset specifiers is an example of a holistic optimization that reduces the amount of loaded data based on user input.

## 5 Visualization with AM4

In this section, we measure the performance of AM4, a visualization-driven aggregation and an example of a holistic optimization in DashQL. As described in Section 3.6, AM4 accelerates chart rendering and reduces the total amount of downloaded data in a client-server setting by filtering minimum and maximum values of grouped data. We want to demonstrate the effects of this optimization by analyzing render and download times with increasing data sizes. The experiments were performed on a Ryzen 5800X CPU with Node.js v17.6.0 that is powered by the V8 engine v9.6.

Figure 10 contains three plots. The plot at the top shows in green color the time it takes to draw all points on a Cairo-backed canvas, using a prepared Vega view. It further adds the time to download the data in blue with either mobile (Cellular) or fixed broadband speeds. The Cisco Annual Internet Report [7] projects average global network performances of 110.4 Mbit s\({}^{-1}\) for fixed broadband and 43.9 Mbit s\({}^{-1}\) for mobile networks by 2023. We assume a small record size of 16 B and compute the required time to download all tuples without any network latencies. Both durations present a significant delay, especially for data sizes beyond 100k tuples, that hampers any interactive exploration. The plot in the bottom right shows execution times of M4 and AM4. If we assume a canvas width of 1000 pixels and a device pixel ratio of 2, M4 and AM4 reduce the data cardinality to 8k tuples. AM4 computes the aggregates for a relation with 500k entries in 22.3 ms and is twice as fast as M4 which takes 53.7 ms. The plot in the bottom left shows the render and download times for up to 10k tuples. A vertical line marks the resulting 8k tuples emitted by both algorithms that can be visualized quickly. The experiment shows that both M4 and AM4 accelerate the visualization of large data sets. This holds even when computing the analysis locally without downloads since rendering alone becomes expensive with an increasing number of tuples. AM4 is therefore a good example of an optimization that propagates data from visualizations, such as the canvas width, back to the SQL query.

Fig. 10: Downloading and rendering dominate the visualization times for increasing data sizes in a client-server setting. M4 and AM4 efficiently reduce large datasets to a small cardinality that can be visualized quickly.

## 6 Related Work

DashQL builds on ideas from declarative visualization and analysis languages and automatically optimizes workflows to make them more scalable.
### _Declarative Analysis Languages_

Visualization and analysis languages are fundamental to exploratory analysis, either as a programming interface or as the underlying representation of a UI tool. Declarative, textual languages provide a high-level notation to describe data science workflows. They often come with runtimes that optimize the data representation and query execution. These advantages make them a popular choice over imperative languages like Python, R, or JavaScript. For example, SQL remains a popular tool for data scientists to express queries to databases decades after its invention [5]. Additionally, many declarative visualization and analysis languages have emerged. Vega [27] and Vega-Lite [26], for example, describe visualizations in JSON syntax. Their runtimes reduce redundant computation in these specifications and fill in rendering details. DashQL extends this research around declarative visualizations by integrating Vega-Lite specifications in the VISUALIZE statements. Vega also supports declarative data loading and transformations but authoring and debugging them can be cumbersome [14] and is often not performant enough. Dedicated analysis languages can fill this gap, for example by extracting analytical queries into explicit steps that can be annotated and tracked. Glinda [29] is a declarative format for specifying data science workflows including data loading, transformation, machine learning, and visualization. In contrast to Vega and Vega-Lite, Glinda describes analysis steps in YAML. DashQL follows the principles of declarative analysis languages but extends the language SQL instead. This approaches the goal of a coherent analysis language from the opposite direction as data ingestion and visualization are embedded into the database query language itself. Vega, Glinda, and DashQL, like most analysis languages, build on relational algebra and share a similar expressiveness in terms of the analyses they can describe [8].

Beyond the analysis steps, DashQL specifications describe inputs via UI widgets and outputs via tables and visualizations. Unlike Precision Interfaces [42] and the recent PI2 [32] which implicitly generate UIs from SQL queries, the UI components in DashQL are explicitly described using the statements INPUT and VISUALIZE. Like Vega visualizations, DashQL dashboards are interactive and update reactively to changes. Vega proposed a reactive runtime for visualizations [27] but all declarative components need to be specified by the author. When a declaration changes, the runtime needs to re-parse and re-evaluate the entire JSON specification. When languages are used as a model in a UI tool, analysts interactively modify an underlying specification that the system can reason about [12]. Polaris, which led to the creation of Tableau, explored this concept with the language VisQL [30]. In Voyager [36], people interactively change CompassQL [37] specifications and a recommender system suggests a gallery of visualizations. Lyra is an interactive visualization design environment that authors Vega-Lite specifications on behalf of the user [25]. We show in the paper that DashQL extends these ideas with a compact AST representation that allows for efficient updates.
Systems often blend code and graphical interfaces and allow modifications through either direct manipulation or code. Mage [16] and B2 [39] blend the boundaries between code and UI in Jupyter Notebooks. In Sketch-n-Sketch [13], people can write a program to generate graphics or manipulate the graphics directly in the rendering canvas. Inspired by these ideas, DashQL scripts describe visualizations like inputs, tables, and charts with text, but users can also change the statement by interacting with the UI. For example, DashQL offers to expand the short syntax of VISUALIZE statements or updates chart dimensions in the text when resizing the UI widget. ### Scalable Visual Analysis Even small latencies in visual analysis systems negatively affect people's behavior during data exploration [18, 41]. Initial exposure to delays impair the subsequent performance even when delays are removed. Therefore, we want DashQL to respond to user interactions with low latency. DashQL builds on two ideas to achieve this goal. First, DashQL uses an efficient in-browser analytical database based on DuckDB [23]. The database allows to evaluate analysis workflows entirely on the client, avoiding costly roundtrips to a backend server. This has the foundation for a distributed evaluation of workflows in the future that optimize dynamic client server scaling using a cost model [20]. Second, DashQL leverages the declarative format of analysis scripts to apply known optimizations from the database literature. These optimizations reduce redundant and unnecessary computations and avoid loading data that is not needed to answer a query. For example, DashQL reads data dynamically from remote files based on query predicates and projected attributes [2]. Propagating such information across statements shares similarities with provenance-supported interactions described by Psallidas et al. [22]. DashQL further implements a variant of the algorithm M4 [15] to reduce the rendering overhead with time series data. Ideally, these optimizations happen transparently without the user having to manually specify them (as they would need to if they wrote their analysis in e.g., D3). Previous systems [17, 19, 21] used specialized engines to enable interactive response times. DashQL does not yet apply some of the indexing techniques these systems proposed but it supports a wide range of analysis scenarios through general SQL queries. VegaPlus [40] is a related project that aims to improve performance of general visualizations by extracting data transformations from Vega [27] specifications and running them in a system that is more scalable than the Vega runtime. VegaPlus shifts computation but does not automatically apply data reduction techniques like M4. ## 7 Discussion This paper introduces the language DashQL. We list example scripts throughout the sections and discuss iterative data exploration in Section 4. The examples demonstrate the proximity of the language to SQL and the capability to describe complete analysis workflows. DashQL extends SQL by defining how data can be resolved and how results should be visualized. The coherent language model facilitates holistic optimizations covering data input, transforms and visualizations. Nevertheless, we identify two major areas for future improvements. First, DashQL models interactivity with the dedicated input statement. These statements render form controls in the resulting dashboard and reevaluate parts of the task graph upon user interaction. 
This enables parameterizing arbitrary SQL statements but raises the question of how to support interactions with rendered visualizations. A prominent example is cross-filtering where user interactions with one visualization translate into applied filters for another. A possible solution could be to expose the concept of Vega-Lite signals to the language, for instance, by updating values for input statements through brushes in a time series chart. Future versions of DashQL should therefore focus on integrating input variables as more than just constant scalar values in SQL queries.

Second, the language DashQL does not yet specify how a workflow can be executed across multiple machines. Traditionally, there has been a clear separation between the analytics server performing computations and the client visualizing the results. With DashQL, these boundaries are blurred as a system could spread the execution of a workflow across multiple machines. The distributed evaluation of a workflow becomes an optimization problem that has to consider the data locality, the bandwidth and computation capacities, and the resulting interaction latency. For example, large datasets might require evaluating certain predicates close to the data in the cloud but might still favor analyzing the filtered results locally. We see DashQL as a step towards distributed analysis workflows that optimize for low interaction latencies even on large data sizes.

###### Acknowledgements.

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 725286).
2307.14141
Roadmap towards the redefinition of the second
This paper outlines the roadmap towards the redefinition of the second, which was recently updated by the CCTF Task Force created by the CCTF in 2020. The main achievements and the open challenges related to the status of the optical frequency standards, their contribution to time scales and UTC, the possibility of their comparison and the knowledge of the Earth's gravitational potential at the necessary level of uncertainty are discussed. In addition, the mandatory criteria to be achieved before redefinition and their current fulfilment level, together with the redefinition options based on a single or on a set of transitions are described.
N. Dimarcq, M. Gertsvolf, G. Mileti, S. Bize, C. W. Oates, E. Peik, D. Calonico, T. Ido, P. Tavella, F. Meynadier, G. Petit, G. Panfilo, J. Bartholomew, P. Defraigne, E. A. Donley, P. O. Hedekvist, I. Sesia, M. Wouters, P. Dube, F. Fang, F. Levi, J. Lodewyck, H. S. Margolis, D. Newell, S. Slyusarev, S. Weyers, J. -P. Uzan, M. Yasuda, D. -H. Yu, C. Rieck, H. Schnatz, Y. Hanado, M. Fujieda, P. -E. Pottie, J. Hanssen, A. Malimon, N. Ashby
2023-07-26T12:12:31Z
http://arxiv.org/abs/2307.14141v1
# Roadmap towards the redefinition of the second ###### Abstract This paper outlines the roadmap towards the redefinition of the second, which was recently updated by the CCTF Task Force created by the CCTF in 2020. The main achievements and the open challenges related to the status of the optical frequency standards, their contribution to time scales and UTC, the possibility of their comparison and the knowledge of the Earth's gravitational potential at the necessary level of uncertainty are discussed. In addition, the mandatory criteria to be achieved before redefinition and their current fulfilment level, together with the redefinition options based on a single or on a set of transitions are described. ## 1 Introduction The definitions of the base units of the International System of Units (SI) [1] are decided by the General Conference on Weights and Measures (CGPM) that supervises the work of the International Committee for Weights and Measures (CIPM) and its Consultative Committees. Following definitions based on astronomical phenomena, the definition of the SI unit of time, the second, has relied since 1967 on the caesium atom hyperfine transition frequency (Section 2). Caesium primary frequency standards are currently realizing this unit with a relative frequency uncertainty at the low \(10^{-16}\) level, but in the last two decades they have been surpassed by optical frequency standards (OFS) showing much lower uncertainties, currently 2 orders of magnitude better. In 2016, the Consultative Committee for Time and Frequency (CCTF) set up a first version of the roadmap towards the redefinition of the second and the associated conditions for the redefinition [2, 3]. Since June 2020, the roadmap has been updated by a dedicated CCTF Task Force on this topic, with three subgroups related to: A. Requests from user communities, National Metrology Institutes and Liaisons B. Atomic frequency standards, and possible redefinition approaches C. Time and Frequency dissemination and time scales. The CCTF has gathered feedback on the redefinition of the second through a global consultation of concerned communities and stakeholders, which was carried out through an online survey from December 2020 to January 2021. It has analysed the needs and possible impacts of a new definition, not just scientific and technological, but also regulatory and legislative (Section 3). The choice of the new definition is central to the debate: the CCTF has analysed the various options that can be envisaged and identified the pros and cons of each possibility (Section 4). The CCTF has updated criteria and conditions that quantify the status of the developments and their maturity for a redefinition (Section 5). The fulfillment of mandatory criteria relies on the progress of ultra-low uncertainty and reliable Optical Frequency Standards (OFS - Section 6) and Time and Frequency (TF) transfer and comparison techniques (Section 7) required for the realization of the new definition and its dissemination towards users, including the contribution of OFS to the International Atomic Time scale (TAI). ## 2 History of definitions Until 1967, the SI definition of time had been based on astronomy. 
It was initially the fraction 1/86 400 of the mean solar day, but observation of unpredictable variations in the Earth's rotation rate led in 1960 to a change of the definition to choose a more stable astronomical phenomenon: the motion of the Earth around the Sun, with an SI second equal to the fraction 1/31 556 925.9747 of the tropical year 1900. Thanks to the rapid progress of caesium thermal beam frequency standards, the SI definition of the second left the field of astronomy in 1967 to enter the field of quantum physics, with the definition exploiting the benefits of high precision frequency measurements [4]. The second became at that time "the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom". In 1999, to take black body radiation shifts into account, an addendum to the initial definition was issued to specify that the definition refers to a caesium atom at rest at a temperature of 0 K. The 26th meeting of the CGPM (2018) marked an important step with the revision of the SI system of units and the redefinition of four base units, by fixing the values of fundamental constants: kilogram (Planck constant \(h\)), ampere (elementary charge \(e\)), kelvin (Boltzmann constant \(k_{\mathrm{B}}\)), and mole (Avogadro constant \(N_{\mathrm{A}}\)). The basis of the definition of the SI second remained the same but the wording changed in order to be consistent with the general spirit of the new SI, fixing the value of the caesium frequency: "The second, symbol s, is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency \(\Delta v_{\mathrm{Cs}}\), the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s\({}^{-1}\)". In this revised SI, the unit of time has a central position since fixing the values of fundamental constants leads to a direct dependence of all the units, except the mole, on the definition of the second (Table 1). The evolution from astronomy to quantum physics in 1967 was associated with a deep conceptual change for the type of measured quantity underlying the _mise en pratique_ of the definition. In astronomy, it was the angle/phase linked to the considered Earth motion that was determined theoretically as a given function of time.
\begin{table} \begin{tabular}{|c|l|l|l|l|l|l|l|l|} \hline **Unit** & \multicolumn{3}{c|}{**Defining constant**} & \multicolumn{1}{c|}{**s**} & \multicolumn{1}{c|}{**m**} & \multicolumn{1}{c|}{**A**} & \multicolumn{1}{c|}{**kg**} & \multicolumn{1}{c|}{**K**} & \multicolumn{1}{c|}{**Cd**} \\ \hline \multirow{3}{*}{**s**} & \(\Delta v_{\mathrm{Cs}}\): unperturbed ground-state hyperfine transition & \multicolumn{1}{c|}{} & & & & & & \\ & frequency of the caesium-133 atom & \multicolumn{1}{c|}{} & & & & & & \\ \hline **m** & \(c\): speed of light in vacuum & \multicolumn{1}{c|}{} & & & & & & \\ \hline **A** & \(e\): elementary charge & \multicolumn{1}{c|}{} & & & & & & \\ \hline **kg** & \(h\): Planck constant & \multicolumn{1}{c|}{} & & & & & & \\ \hline **K** & \(k_{\mathrm{B}}\): Boltzmann constant & \multicolumn{1}{c|}{} & & & & & & \\ \hline \multirow{2}{*}{**Cd**} & \(K_{\mathrm{cd}}\): luminous efficacy of monochromatic radiation & \multicolumn{1}{c|}{} & & & & & & \\ & of frequency \(540\times 10^{12}\) Hz & \multicolumn{1}{c|}{} & & & & & & \\ \hline \end{tabular} \end{table} Table 1: Dependencies of the defining constants on other SI base units. With quantum physics, the realization of the definition is now based on frequency measurements, with the assumption provided by the Standard Model that the atomic resonance frequencies are universal and constant, both in time and in space [5, 6, 7]. Today, the primary representation of the SI second is realized by caesium primary frequency standards, with relative frequency uncertainties at the 10\({}^{-16}\) level offered by cold atom fountains (see [https://www.bipm.org/en/time-ftp/circular-t](https://www.bipm.org/en/time-ftp/circular-t) and [8]). Secondary representations of the SI second (SRS) are provided by rubidium or optical frequency standards (OFS). The list of recommended values of standard frequencies for transitions that may be used as SRS is regularly updated [3, 9, 10]. ## 3 Main needs in TF metrology and stimulus for a new definition With the SI second underlying the realization of other SI units, its redefinition may potentially impact a very wide range of communities. Here we consider the impact and the drive for a new definition of the SI second on the metrological community represented by the National Metrology Institutes (NMIs) and the Designated Institutes (DIs), and on the wider timing community. In addition, the findings of the CCTF survey are summarized. ### Significance of the redefinition for the NMIs and DIs The NMIs and DIs, as part of their mandates, strive to develop the best realizations of the SI units and build the highest accuracy primary standards. They also typically have the most demanding requirements for accessing accurate time and frequency signals because they provide the highest tier SI dissemination services for their respective countries. The current primary frequency standards have now been surpassed in terms of stability and systematic uncertainty by optical frequency standards, and, therefore, the NMIs and DIs are expected to drive the transition to the new state-of-the-art definition. The implementation of a new definition of the SI second, based on optical standards, and an improved Coordinated Universal Time (UTC) will require the metrology labs to acquire new systems and adopt new methods.
The stakeholder survey conducted from December 2020 to January 2021 showed an overall positive response to the redefinition plans, which indicates high levels of commitment and the technical maturity that is essential to support the redefinition work. ### Significance of the redefinition for the wider timing community Although relatively unknown to the general public, sub-\(\upmu\)s timing and synchronization capability has become an essential and crucial feature of most critical infrastructure, including telecommunications, energy, finance, cloud computing, transportation and space activities. Even though these applications do not require the accuracies of today's optical clocks, they generally depend on TF metrology. In addition, many scientific applications, such as radio astronomy, particle physics experiments, and time metrology, require nanosecond levels of stability and/or accuracy. In the next five to ten years, the need for higher precision in both time and frequency is estimated to grow across all fields. Initially, scientific applications will benefit more than industrial ones from the redefinition of the second and the developments in time and frequency metrology that this may underpin: for example, quantum communications, with some time accuracy and stability requirements at the level of femtoseconds, which is hardly achievable with current technologies. ### Meeting current and future stakeholder needs From the CCTF survey and other references [11-14], timing accuracy needs are currently in the range from 1 \(\upmu\)s down to 10 ns, while future needs seem to focus below 100 ns for most users. Some scientific users highlighted the need for a sub-nanosecond timing accuracy. The most stringent fractional frequency accuracy needs are currently around 1E-14, while future needs reach 1E-15, or even 1E-18 for some specific users. The most fundamental of the existing scientific applications that will be improved by a redefinition and the resulting improvement in timing infrastructure are tests of fundamental physics, for which the levels of accuracy achievable with optical clocks can underpin tests of fundamental physical theories, including the investigation of physics beyond the standard model and time variation of the fundamental constants, the search for dark matter, gravitational wave detection, and more [15]. Better clocks will also enable higher-precision atomic and molecular spectroscopy as well as improved time synchronization for high-resolution telescope arrays and future VLBI generations [16], geopotential monitoring with centimetre resolution [17], quantum networks for quantum encrypted communications [18], and others. These emerging fields of research that already require better TF accuracy or stability than is available today, and applications that promise to transition from the research lab into commercial use in the next decades, will benefit from the improved accuracy enabled by a redefinition. A redefinition of the SI second will also lead to timing infrastructure improvements, including improved time scales and frequency transfer methods. These improvements will benefit the wider stakeholder community, including clock and equipment manufacturers and users. The redefinition of the second constitutes a required step in stabilizing and directing the technology development, standardization and adoption. Table 2 lists the stakeholder requests for their future needs in the accuracy of frequency references.
It is clear from the high level of interest in more accurate frequency reference signals that many research opportunities will arise with better access to optical clocks and better dissemination methods. \begin{table} \begin{tabular}{|r|c|} \hline **uncertainty level** & **Application opportunity** \\ \hline 1E-14 & holdover \\ \hline 1E-15 & spectroscopy/dark matter/secure communications/holdover \\ \hline 1E-16 & cosmology \\ \hline 1E-17 & dark matter/connected interferometry \\ \hline 1E-18 & positioning/real-time geodesy/new clocks \\ \hline 1E-19 & geodynamics \\ \hline 1E-20 & relativistic geodesy/alternative theories of gravitation \\ \hline \end{tabular} \end{table} Table 2: _Stakeholder responses to the question: What level of frequency uncertainty would you like to access in the future?_ ## 4 Options for the redefinition of the SI second The current definition of the SI units is established in terms of a set of seven defining constants with fixed numerical values, as declared in Resolution 1 of the 26th meeting of the CGPM (2018) [19]. Three of these defining constants, \(c\), \(h\), and \(e\), are directly embodied in the fundamental theoretical framework of general relativity and the standard model of particle physics. The defining constant for the unit of time, \(\Delta v_{\mathrm{Cs}}\), is a property of the Cs atom and consequently a natural constant. The other three defining constants have a less direct connection to the fundamental framework, with \(k_{B}\) and \(N_{\mathrm{A}}\) being conversion factors, and \(K_{\mathrm{cd}}\) being linked to the sensitivity of the human eye. There are three options for the redefinition of the second, which all keep the same principle of applying seven defining constants but would replace \(\Delta v_{\mathrm{Cs}}\) by a different constant. Option 1 consists of choosing one single atomic transition in lieu of the Cs hyperfine transition and fixing the numerical value of the frequency of this transition \(v_{\mathrm{Xy}}\): \(v_{\mathrm{Xy}}=N\) Hz, where \(N\) is the defining value. Option 2 consists of creating a defining constant based on several transitions rather than just a single one, as described in [20]. The quantity whose numerical value is used in the definition is a weighted geometric mean of the frequencies of an ensemble of chosen transitions. The unit of time is set by the relation: \(\prod_{i}v_{i}^{w_{i}}=N\) Hz, where \(w_{i}\) and \(N\) are the defining values, with the sum of all \(w_{i}\) being equal to 1. Option 3 consists of fixing the numerical value of one more fundamental constant, in addition to \(c\), \(h\) and \(e\). From the fundamental standpoint, a good choice for this constant is the electron mass \(m_{e}\) (see e.g. [21]), in which case the system of units is set by the relations: \(m_{e}=M\) kg, where \(M\) is the defining value, completed by the other defining relations for \(c\), \(h\), \(e\), \(k_{B}\), \(N_{\mathrm{A}}\) and \(K_{\mathrm{cd}}\). In this system, one can see that the Compton frequency \(v_{\mathrm{e}}\) defined by \(hv_{\mathrm{e}}=m_{e}c^{2}\) has a defined value, which shows how such a system defines the unit of time. Another choice is to directly fix the numerical value of \(v_{\mathrm{e}}\) instead of \(m_{e}\). A third choice is to fix the numerical value of the Rydberg frequency \(R_{e}\), which is also linked to the electron mass via the relation \(R_{e}=\alpha^{2}v_{\mathrm{e}}/2\), where \(\alpha\) is the fine-structure constant.
The first two choices are different formulations of systems of units that are physically identical. The third choice defines a physically different system of units since \(\alpha\) is a dimensionless constant that can only be measured and cannot be fixed by our choice. While all three options concern primarily the definition of the SI second, they would have a formal impact on the definitions of all other base units with the exception of the mole, because these make use of the definition of the second via \(\Delta v_{\mathrm{Cs}}\). To complement these formal aspects of the redefinition options, several points are worth noting. Regarding Option 1, it is anticipated that besides the primary transition selected for the definition, other transitions will contribute to realizations and disseminations of the unit of time according to the mechanism of SRS that is already in place and will be described in more detail in Section 6. As a possibility associated with Option 2, it is also proposed that future revisions of the defining values \(w_{i}\) and \(N\) could be adopted by the CIPM, based on the recommendation of the CCTF and CCU, and according to a set of rules adopted beforehand by the CGPM. The rules include a quantitative criterion to trigger a revision that ensures convergence through successive updates (see [20]). The rules are designed to ensure that revisions are made only when a significant improvement of the realization and dissemination will ensue. This dynamic option is referred to as option 2b, while option 2 with fixed values of the weights and \(N\) is named option 2a. Regarding Option 2, the realization makes use of best estimates of optical frequency ratios established via the fitting procedures of the CCL-CCTF WGFS that are already in place [22]. Given these ratios, one single frequency standard based on either of the chosen transitions can realize the unit of time [20]. In addition to the conceptual aspect, i.e., the possibility to define the unit of time and the system of units using several transitions, Option 2 gives a possible approach to cope with the present context, where many different atomic transitions give optical frequency standards with uncertainties near \(10^{-18}\) and where the field will remain highly dynamic. Under Option 3, the numerical value of the defining constant for the unit of time relies on experiments that presently lead to the determination of the chosen constant. The evaluations of relevant experiments are the work of CODATA and are reported in [23]. Currently, the value of \(m_{e}\) has an uncertainty of 3.0 parts in \(10^{10}\), while the uncertainty in the Rydberg constant is 1.9 parts in \(10^{12}\). These uncertainties are several orders of magnitude larger than the uncertainty of the present realizations of the unit of time in the current SI system (a few parts in \(10^{16}\)) and even further away from the capabilities of optical frequency standards (\(10^{-18}\) or better). Consequently, Option 3 is not practical in the current state of science and technology. It is also worth noting that measurements between the optical frequency domain and the current best realizations of the SI second are already done with low enough uncertainty (near \(10^{-16}\), the limit of fountain frequency standards) and with sufficient redundancy to ensure the continuity between the current definition and any definition based on optical transitions.
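To make the Option 2 defining relation concrete, the following minimal sketch (our illustration, not part of the roadmap; the ensemble and the weights are chosen arbitrarily for the example) evaluates \(N=\prod_{i}v_{i}^{w_{i}}\) from three of the recommended frequencies listed in Table 5, and shows how a single standard, together with adjusted frequency ratios, would then realize the unit of time.

```python
import math

# Hypothetical ensemble: recommended frequencies (Hz) taken from Table 5,
# with illustrative weights w_i that sum to 1 (the real choice of transitions
# and weights would be made by the CCTF/CGPM, not here).
ensemble = {
    "87Sr":  (429_228_004_229_872.99, 0.4),
    "171Yb": (518_295_836_590_863.63, 0.4),
    "27Al+": (1_121_015_393_207_859.16, 0.2),
}
assert abs(sum(w for _, w in ensemble.values()) - 1.0) < 1e-12

# Defining constant: weighted geometric mean N = prod_i v_i^{w_i} (in Hz),
# evaluated in log space for numerical robustness.
N = math.exp(sum(w * math.log(v) for v, w in ensemble.values()))
print(f"defining constant N = {N:.6e} Hz")

# Realization with a single standard: operating only the 87Sr clock and using
# the frequency ratios r_i = v_i / v_Sr from the ratio adjustment, one has
# v_Sr = N / prod_i r_i^{w_i}, which fixes the Sr frequency in hertz and
# therefore realizes the second.
v_sr = ensemble["87Sr"][0]
log_ratio_mean = sum(w * math.log(v / v_sr) for v, w in ensemble.values())
v_sr_realized = N / math.exp(log_ratio_mean)
print(f"realized 87Sr frequency = {v_sr_realized:.2f} Hz")
```

Because the weights sum to one, the realized \({}^{87}\)Sr frequency reproduces the input value, illustrating the point made in [20] that any single standard of the ensemble, combined with the adjusted frequency ratios, suffices to realize the unit under Option 2.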
To summarize the trade-offs between the three options, we present here their most significant respective strengths, weaknesses, opportunities, and threats in tabular form (i.e., a SWOT analysis) ( Table **3**). We note that these considerations have taken into account the needs of both the user and research communities, as assessed by the CCTF Task force via input from user surveys and BIPM workshops. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline & Option 1 & Option 2 & Option 3 \\ \hline Strengths & Offers two orders of magnitude improvement of the existing definition with significant improvement likely in the future & Offers two orders of magnitude improvement of the existing definition with significant improvement likely in the future & Consistent with the approach adopted by CIPM based on the physical constants, \(c\), \(h\), \(e\), and \(k_{\text{B}}\) \\ \multirow{4}{*}{Strengths} & Maintains continuity with the current Cs definition & Maintains continuity with the current Cs definition & Direct connection to the theoretical framework of fundamental physics \\ & Intuitive extension of the existing definition & Flexible scheme that is well matched to the current experimental situation and could adapt well to rapid progress in optical standards & \\ & Familiar and practical, using primary and secondary realizations as we do today & Could more easily lead to a consensus on the chosen species. & \\ & The unit of time can be realized without additional uncertainty & Can be difficult to understand and convey to general users & Would lead to poor accuracy for time realization in the present and foreseeable future \\ \multirow{4}{*}{Weaknesses} & The unit of time may be hard to realize by a single institute in isolation & Would represent a step backwards in time realization by four orders of magnitude (six relative to Options 1 and 2) \\ & The version which allows for revisions of the defining values _wi_ and \(N\) constitutes a conceptual deviation from the principle of applying fixed defining constants for the SI units as implemented in 2019. & Would not allow continuity with the current Cs definition, which allows a much better accuracy in the realization \\ \multirow{4}{*}{Opportunities} & A better uncertainty obtained with one transition alone is not enough to have a better realization of the unit & The defining constant has no physical meaning – all realizations are secondary representations & \\ & A more complex definition of time may present legal issues for some countries & \\ \hline Opportunities & The many benefits associated with an improvement of a factor of 100 (or more) in the definition of the unit of time & The many benefits associated with an improvement of a factor of 100 (or more) in the definition of the unit of time & This approach would lead to a consistent set of SI definitions that is close to the theoretical foundations of physics. 
\\ \multirow{4}{*}{The Threats} & A clear path forward for development of primary standards & Provides a strong stimulus to explore new frequency standard options & Could stimulate further research in simple atoms, calculable quantum systems and the measurements of fundamental constants \\ & Provides a stimulus for the development of commercial standards & Depending on the quality of future OFS reports for TAI calibration, it might be difficult to provide at least as good an uncertainty of dTAI after the redefinition & There would be a severe degradation in the realization of the SI unit of time \\ \multirow{4}{*}{Threats} & The new definition might rapidly become obsolete – SRS could end up dominating contributions to TAI & A multi-species definition might lead to difficulty for industry (and NMIs) in choosing which standard to develop uncertainty with which the old definition was realized \\ & Could discourage future progress on frequency standards, by biasing work towards the chosen transition & & Such a definition would break the metrological principle that redefinitions should be consistent with previous definitions within the uncertainty with which the old definition was realized \\ \hline \end{tabular} \end{table} Table 3: Collection of Strengths, weaknesses, opportunities and threats of the 3 options for the redefinition, based on input from a community survey in 2022. ## 5 Criteria and conditions for the redefinition In order to choose the best new definition and its implementation timeline, and to provide the CGPM with all the required information for making its decision, criteria and conditions (Table 4) have been defined to assure that the redefinition: * offers an improvement by a factor of 10 to 100 in the realization of the new definition in the short term after the redefinition (reaching \(10^{-17}\) to \(10^{-18}\) relative frequency uncertainty) and potentially a larger improvement in the longer term (_criteria I.1, I.2, III.1 and condition III.3_), requiring the capability to compare OFS with an adequate uncertainty to validate OFS uncertainty budgets (_criteria II.1, II.2_); * ensures continuity with the current definition based on caesium (_criterion I.3_); * ensures continuity and sustainability of the availability of the new SI second through TAI/UTC and enables a significant improvement of the quality of TAI and UTC(_k_) as soon as the definition is changed (_criterion I.4 and conditions I.6, III.3_), relying on the reliability of OFS and TF transfer infrastructures (_conditions I.5, II.3_); * is acceptable to all NMIs and stakeholders and enables the dissemination of the unit to broad categories of users (_criterion III.2 and conditions III.4, III.5_). Criteria and conditions are distinguished in the following way: * the mandatory criteria that must be achieved before changing the definition; * the ancillary conditions that are not required to be fully achieved to change the definition but are important to ensure the best realization and exploitation of the new definition in the short and long terms. Thus, these conditions correspond to essential work that must have started before the redefinition, with a reasonable amount of progress at the time of redefinition and a commitment of stakeholders to continue their efforts on the associated activities.
Fulfilment indexes have been defined to evaluate the fulfilment level for mandatory criteria to quantitatively follow the improvements, to be aware of the remaining work to fulfil all mandatory criteria and ultimately, to decide it is time to change the definition. The details of criteria and conditions and their current fulfilment levels or progress statuses are presented in Section 8. \begin{table} \begin{tabular}{|c|c|c|l|} \hline & & & \\ \cline{3-4} & & & **Criteria and conditions** \\ \hline \multirow{4}{*}{\begin{tabular}{c} **Frequency** \\ **standards,** \\ **including the** \\ **contribution of** \\ **OFS to time scales** \\ \end{tabular} } & **X** & & I.1 - Accuracy budgets of optical frequency standards \\ & **X** & & I.2 - Validation of Optical Frequency Standard accuracy budgets – Frequency ratios \\ & **X** & & I.3 - Continuity with the definition based on Cs \\ & **X** & & I.4 - Regular contributions of optical frequency standards to TAI (as secondary \\ & & & representations of the second) \\ & & **X** & I.5 - High reliability of OFS \\ & & **X** & I.6 - Regular contributions of optical frequency standards to UTC(_k_) \\ \hline \multirow{4}{*}{\begin{tabular}{c} **TF links for** \\ **comparison or** \\ **disemination** \\ \end{tabular} } & **X** & & II.1 – **Availability of sustainable techniques for Optical Frequency Standards** \\ & **X** & & comparisons \\ & **X** & & II.2 – Knowledge of the local geopotential with an adequate uncertainty level \\ & & **X** & II.3 – High reliability of ultra-high stability TF links \\ \hline \multirow{4}{*}{ \begin{tabular}{c} **Acceptability of** \\ **the new definition** \\ \end{tabular} } & **X** & & III.1 - Definition allowing more accurate realizations in the future \\ & **X** & & III.2 – Access to the realization of the new definition \\ & & **X** & III.3 - Continuous improvement of the realization and of time scales after \\ & & & redefinition \\ & & **X** & III.4 - Availability of commercial optical frequency standards \\ & & **X** & III.5 - Improved quality of the dissemination towards users \\ \hline \end{tabular} \end{table} Table 4: _Mandatory criteria and ancillary conditions to ensure the benefit and the acceptability of a new definition._ ## 6 Optical frequency standards - Categories and characteristics ### Types, characteristics and performance of optical frequency standards Due to their demonstrated potential for low fractional frequency instabilities and uncertainties, there is currently considerable research activity directed towards investigating optical transitions to serve as frequency standards. These standards fall into two categories distinguished by the charge state of the atom and the method used for trapping: trapped ion optical clocks and optical lattice clocks with neutral atoms. Presently, ten optical transitions and one microwave transition (\({}^{87}\)Rb) are recommended as SRS, as listed in Table 5. We note that due to the lower uncertainties associated with most of the optical standards themselves, the uncertainties for the realizations of the second with these standards as listed in the Table are largely determined by the uncertainty of microwave standards based on the Cs transition that enters into the recommended frequencies. Advances in several key technologies have been critical to the rapid improvement in optical standards. To achieve a low instability, it is necessary to start with an extremely narrow linewidth clock laser. 
Thus, pre-stabilization of the clock laser to a high-performance optical cavity is a standard component of any high-performance standard. Fractional frequency instabilities as low as \(8\times 10^{-17}\) on 1 s timescales have been achieved with a clock laser locked to the resonance frequency of a room temperature 48 cm ULE FP cavity [24], while locking to cryogenic single-crystal optical cavities has led to frequency instabilities in the low \(10^{-17}\) range [25, 26]. In addition, the development of optical frequency combs (OFC) [27, 28], which are needed to link optical frequencies directly with microwave frequencies, has made high-fidelity measurements of absolute optical frequencies at the low \(10^{-16}\) uncertainty level of fountain clocks feasible. In fact, simultaneous measurements of the same optical frequency ratio with two independent OFCs have shown agreement at the level of \(10^{-21}\)[29], thereby confirming the capability of OFCs to support optical frequency ratio measurements at the limit of the uncertainties of current optical clocks. These capabilities have enabled more precise (and more rapid) comparisons between standards, with many of the optical standards realizing SRS as listed in Table 5 reaching Type B uncertainties below \(10^{-17}\). The current record for systematic uncertainty of an atomic clock is held by the \({}^{27}\)Al\({}^{+}\) quantum logic clock, with a fractional frequency systematic uncertainty of \(9.4\times 10^{-19}\)[30]. This level of performance is closely followed by that of an Yb optical lattice clock (\(1.4\times 10^{-18}\)[31]), a Sr optical lattice clock (\(2.0\times 10^{-18}\)[32]), an \({}^{171}\)Yb\({}^{+}\) ion clock operated on the octupole (E3) transition (\(2.7\times 10^{-18}\)[33, 34]), and recently a \({}^{40}\)Ca\({}^{+}\) ion clock (\(3.0\times 10^{-18}\)[35]). Interestingly, it seems there is not a fundamental limitation for the accuracy of the optical clocks that are being developed based on different ion and neutral atom species. Most of the currently proposed optical transitions can potentially achieve an uncertainty level below \(10^{-18}\). We note that the lowest instabilities achieved at 1 s averaging time have been observed with optical lattice clocks: \(4.8\times 10^{-17}\)[32] and \(6\times 10^{-17}\)[36]. For single ion clocks, the lowest reported instabilities at 1 s are typically around \(1\times 10^{-15}\)[30, 37]. ### Ratio measurements between frequency standards In order to verify the predicted levels of performance for these standards, there has been a great effort over the past decade to perform measurements of frequency ratios between co-located or remotely located standards. Such comparisons can be based on the same transition or different transitions. Comparing different optical standards based on the same transition provides a way to validate uncertainties by verifying that the realized transition frequencies agree within stated uncertainties. 
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Transition & Approximate & Recommended frequency & Recommended & Used to calibrate TAI \\ & wavelength & (Hz) & relative uncertainty & scale interval \\ \hline \({}^{199}\)Hg & 265 nm & 1 128 575 290 808 154.32 & 2.4E-16 & \\ \hline \({}^{27}\)Al\({}^{+}\) & 267 nm & 1 121 015 393 207 859.16 & 1.9E-16 & \\ \hline \({}^{199}\)Hg\({}^{+}\) & 282 nm & 1 064 721 609 899 146.96 & 2.2E-16 & \\ \hline \({}^{171}\)Yb\({}^{+}\)(E2) & 436 nm & 688 358 979 309 308.24 & 2.0E-16 & \\ \hline \({}^{171}\)Yb\({}^{+}\)(E3) & 467 nm & 642 121 496 772 645.12 & 1.9E-16 & \\ \hline \({}^{171}\)Yb & 578 nm & 518 295 836 590 863.63 & 1.9E-16 & Yes (4 institutes) \\ \hline \({}^{88}\)Sr\({}^{+}\) & 674 nm & 444 779 044 095 486.3 & 1.3E-15 & \\ \hline \({}^{88}\)Sr & 698 nm & 429 228 066 418 007.01 & 2.0E-16 & \\ \hline \({}^{87}\)Sr & 698 nm & 429 228 004 229 872.99 & 1.9E-16 & Yes (3 institutes) \\ \hline \({}^{40}\)Ca\({}^{+}\) & 729 nm & 411 042 129 776 400.4 & 1.8E-15 & \\ \hline \({}^{87}\)Rb & & 6 834 682 610.9043126 & 3.4E-16 & Yes (1 institute) \\ \hline \end{tabular} \end{table} Table 5: List of secondary representations of the second adopted by the 22nd CCTF (March 2021) [9]. To date, several such comparisons performed within the same institute have reached an overall uncertainty better than \(5\times 10^{-18}\)[31, 34], with the lowest reaching \(1\times 10^{-18}\)[31]. Comparisons between standards based on the same transitions from different institutes are at the level of \(5\times 10^{-17}\)[38]. We note that comparisons between clocks in different locations are much more challenging because they involve either remote comparison, which can be limited by the instability of long-distance time transfer capabilities, or transportable standards, which generally have lower levels of performance than their lab-based counterparts. In general, such comparisons are of utmost importance to validate the frequency standards' uncertainties. Equally valuable are frequency ratios measured between standards based on different transitions. Such ratios between unperturbed atomic transitions are significant, because they are dimensionless quantities given by nature. As a result, two independent measurements of such ratios should coincide within the combined measurement uncertainties. Thus, comparisons between independent measurements of given ratios provide further means to validate stated uncertainties of optical frequency standards. We note that such measurements almost always rely on optical frequency combs to span the frequency gap between standards. Therefore, comparing independent measurements of a given optical frequency ratio tests not only the stated uncertainties of optical standards themselves, but those of the combs (and any other optical frequency metrology capabilities relevant to the use of optical frequency standards). To date, the most accurate measurement of an optical frequency ratio has a fractional uncertainty of \(6\times 10^{-18}\) (between two labs about 2 km apart) [38, 39]. A few optical ratios have been measured multiple times by different institutes, thereby enabling first comparisons of such measurements at uncertainty levels ranging from \(3\times 10^{-17}\) to \(2\times 10^{-16}\). We also emphasize that frequency ratio measurements between optical and microwave standards are common and serve to validate our capabilities to connect the optical domain with the microwave domain, as well as to link a potential future definition to the current one.
The accuracies of such measurements are now at the limit of the primary standards based on Cs (\(\sim 10^{-16}\)). In the last few years, many such absolute measurements of optical standards have been performed by comparison with TAI, whose rate with respect to the SI second is provided by BIPM publications, based on the currently available reports from primary and secondary frequency standards ([https://www.bipm.org/en/time-ftp/circular-t](https://www.bipm.org/en/time-ftp/circular-t)). Several groups have performed extended measurement campaigns involving both optical and microwave clocks that have lasted from several months [40, 41, 42, 43, 44] to several years [45, 46, 47]. Although not continuous, these campaigns were realized by performing multiple measurements over a given time span. Taken as a whole, the resulting ensemble of high accuracy measurements of atomic frequency ratios published after peer review provides an overdetermined dataset from which one can determine the best values for these atomic frequency ratios, using an adjustment procedure. This task is done on a regular basis by the CCL-CCTF working group on frequency standards (CCL-CCTF WGFS). The resulting output of this calculation provides the basis for the recommended values and uncertainties of frequency standards shown in Table 5 [9]. In addition, given the strongly overdetermined nature of the dataset, this adjustment provides a global validation of the status of high accuracy atomic frequency standards and of related measurement capabilities, as described in [3]. In the last implementation reported to the 22nd meeting of the CCTF on 19 March 2021, the adjustment took into account 105 measurements (69 in 2017), including 33 optical frequency ratios (11 in 2017) and 72 absolute frequency measurements (58 in 2017). We note that it is necessary to take into account correlations (483 for the latest adjustment) between these measurements to perform the calculation correctly [10]. ### Ongoing research activities and future prospects for optical standards (new transitions, improved stability, transportable standards) Despite the considerable progress to date in optical clock performance, there remains much room for further improvements in terms of clock stability, uncertainty, and robustness. Reduced clock instability is not only useful in direct timing applications; in addition, the extremely low uncertainty of optical clocks is only useful if the statistical uncertainty (Allan deviation) can be reduced to the evaluated uncertainty level at a practical averaging time for the measurement application. Improvements in the observed stability of optical lattice clocks and long-lived ion transitions (\({}^{27}\)Al\({}^{+}\), \({}^{171}\)Yb\({}^{+}\) (E3)) are ongoing but are technically challenging, as they require ultra-stable lasers with coherence times of several seconds to minutes. In addition to continued advances in cavity performance mentioned earlier, there are efforts in parallel to develop novel measurement protocols that mitigate the limitations caused by reference cavity noise, such as zero-dead-time interrogation [36], correlation spectroscopy [48, 49], and dynamic decoupling of laser phase noise in compound atomic clocks [50]. It is anticipated that the use of compound clocks could improve the stability of single ion clocks with long clock transition lifetimes to levels comparable to that of optical lattice clocks [50].
For ion species with shorter lifetimes, the stability can be improved directly by increasing the number of ions, but this approach requires special care in the selection of the atomic transition and the control of the systematic shifts to preserve accuracy [51, 52]. Entanglement in multi-ion or neutral atom clocks offers the potential for a stability beyond the standard quantum limit and thus could be a method to further improve the stability of optical clocks [53]. A new type of clock with high relative stability, the "tweezer array optical clock", has been demonstrated recently; it balances the benefits of non-interacting particles as found in single-ion clocks with the large number of atoms as found in optical lattice clocks [54]. Another critical aspect for the spread of optical clock performance throughout the clock community will be the demonstration of high duty cycle, high performance, robust optical systems. In this direction, there has been considerable effort, with many systems under development. Indeed, all major subsystems of an optical clock with laser cooled atoms or ions have already been developed as robust transportable devices for autonomous operation, which have been partially tested for operation in space. This includes vacuum systems and traps for atoms [55] and ions [56], tunable laser systems for cooling and interrogation, optical reference cavities for obtaining a narrow linewidth of the reference laser [57, 58, 59], and optical frequency combs for transfer of the optical stability to a microwave output signal [60]. However, the integration of an optical clock from the subsystems also requires the robust optical alignment of multiple laser beams and the monitoring, control and adjustment of a few dozen electrical and mechanical parameters. Fully integrated prototype systems that have been used as transportable optical clocks on the footprint of a small trailer have been demonstrated for a Sr optical lattice clock [61, 91] and for a clock with a single trapped Ca\({}^{+}\) ion [62]. Some groups have demonstrated high clock operation uptimes, for example 80.3 % for a duration of six months [41] and 93.8 % for a period of 10 days [44]. More recently, fully autonomous operation for two weeks with 99.8 % uptime at \(2\times 10^{-17}\) systematic uncertainty inside a laboratory has been demonstrated for the OptiClock based on the E2 transition of \({}^{171}\)Yb\({}^{+}\) [63, 64]. The system fits inside the volume of two 19-inch racks and has been developed by PTB jointly with industry [64]. Optical clocks with (nearly) 100 % uptimes for one month of continuous operation or longer are expected to become common in the next few years. These results indicate that the development of a turn-key autonomous optical clock is technically feasible at a performance level that is superior to available microwave frequency standards and shows the way towards a commercial high-performance optical reference. Finally, one of the most exciting directions in optical clock research today is the search for transitions that have still lower sensitivities to external fields than current optical clocks, in an effort to further reduce clock uncertainties. Some of these include a nuclear transition in \({}^{229m}\)Th [65], and transitions in highly charged ions [66, 67] and lutetium ions [52]. While all of these systems present their own technological challenges, they could well be among the main candidates for future optical clocks with performance at the 19th and 20th digits.
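To put the averaging-time argument made above in numbers, here is a short illustrative calculation (our sketch, assuming the simple white-frequency-noise scaling \(\sigma_{y}(\tau)=\sigma_{y}(1\,\mathrm{s})/\sqrt{\tau/\mathrm{s}}\), which is only an idealization of real clock noise) of how long the 1 s instabilities quoted earlier must be averaged before the statistical uncertainty reaches the \(10^{-18}\) level of the best systematic budgets.

```python
def averaging_time(sigma_1s: float, target: float) -> float:
    """Averaging time in seconds for a white-frequency-noise instability
    sigma_y(tau) = sigma_1s / sqrt(tau) to average down to `target`."""
    return (sigma_1s / target) ** 2

cases = {
    "optical lattice clock, 4.8e-17 at 1 s": 4.8e-17,
    "single-ion clock, 1e-15 at 1 s": 1.0e-15,
}
for label, sigma_1s in cases.items():
    tau = averaging_time(sigma_1s, 1e-18)
    print(f"{label}: ~{tau:.0f} s (~{tau / 86400:.2f} days) to reach 1e-18")
```

Under this idealized scaling, a lattice clock reaches the low \(10^{-18}\) range in well under an hour, whereas a single-ion clock starting at \(1\times 10^{-15}\) needs on the order of ten days, which is why the stability improvements discussed above are essential for exploiting sub-\(10^{-18}\) accuracy budgets within practical measurement times.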
## 7 TF transfer and time scales - Categories and characteristics ### TF transfer Remote comparison of time scales and frequency standards is possible using various space-based microwave techniques for time and frequency transfer, including Global Navigation Satellite Systems (GNSS), Two-way Satellite Time and Frequency Transfer (TWSTFT), and Very Long Baseline Interferometry (VLBI) radio antennas. In the last decade, optical techniques using fibre optic links have offered greatly improved stability and accuracy. Innovative satellite transfer in the optical domain is also envisaged. Lastly, Transportable Optical frequency standards or Clocks (TOCs) used as travelling standards can support a redefinition of the second that requires comparisons at an accuracy level of \(10^{-18}\) and global geographical coverage. GNSS time transfer is a one-way technique used since the 1980s, notably for the realization of UTC. A collaboration with the International GNSS Service (IGS) has led to the use of Precise Point Positioning (PPP) for time and frequency comparisons and development of the integer ambiguity PPP technique (IPPP), which to date offers the best long-term stability among the GNSS techniques. Fig. 1 [68] shows that IPPP provides time transfer with a modified Allan deviation of \(7\times 10^{-16}\)/\(\uptau\), where \(\uptau\) is the duration in days of continuous phase measurements. A twofold improvement is expected using satellites from all the GNSS, as opposed to just GPS at present. TWSTFT, the second intercontinental-capable satellite-based microwave method, typically employs the code-phase of a signal modulated by a pseudorandom noise code sent and received by microwave link via a geostationary telecommunications satellite, at Ku-band frequencies [69]. Improved performance is achieved by the use of Two-Way Carrier-Phase (TWCP), which exploits carrier-phase measurements, with an instability of a few parts in \(10^{16}\) at one day. Further results [70] indicate that TWCP performs at least as well as IPPP in terms of stability. Fig. 2 shows the modified Allan deviation of Code Phase and Carrier Phase TWCP. In addition, a recently implemented software-defined receiver (SDR) successfully reduced the long-term instability by about a quarter [71]. Similar technology is expected to be applied to the transmitters for further improvement, resulting in integrated digital modems that are an important step towards improving TWSTFT beyond the current state of the art. Moreover, in order to reach the sub-1E-17 level, it is essential to improve the modelling of all non-reciprocal error sources, such as signal propagation, atmospheric turbulence, and relativistic effects [72]. Figure 1: Modified Allan deviation of the comparison between IPPP and several other high accuracy techniques: The optical fiber links DTAG-PTB (blue), AOS-GUM (orange) and SMD-ESTEC (red) and the two-way carrier phase link NICT-KRIS (green) [68] Figure 2: Modified Allan deviation of UTC(NICT)-UTC(KRIS) from MJD 57851 to 57883 measured by different techniques [70] VLBI utilizes the reception of radio signals from extragalactic radio sources, with the time difference between the arrivals of the signals measured at two antennas equipped with local atomic clocks. Using VLBI, the frequencies of an Yb and a Sr optical standard have been compared [73], with a statistical uncertainty from the VLBI link of \(9\times 10^{-17}\) over 300 hours of measurements.
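Several of the comparisons above (and Figs. 1-3) are characterized by the modified Allan deviation. As a minimal illustration of how that statistic is obtained from time-error data, here is a short sketch (our example, not the processing actually used by the cited laboratories; the simulated noise level is arbitrary).

```python
import numpy as np

def mod_allan_deviation(x, tau0, n):
    """Modified Allan deviation at averaging time tau = n * tau0, computed from
    time-error (phase) samples x in seconds, spaced tau0 seconds apart."""
    x = np.asarray(x, dtype=float)
    N = x.size
    if N < 3 * n + 1:
        raise ValueError("need at least 3*n + 1 phase samples")
    # second differences x_{i+2n} - 2 x_{i+n} + x_i
    d = x[2 * n:] - 2.0 * x[n:N - n] + x[:N - 2 * n]
    # sums of n consecutive second differences via a cumulative sum
    c = np.concatenate(([0.0], np.cumsum(d)))
    s = c[n:] - c[:-n]
    mvar = np.sum(s ** 2) / (2.0 * n ** 4 * tau0 ** 2 * s.size)
    return np.sqrt(mvar)

# Example: white frequency noise at the 1e-15 level, sampled every second
rng = np.random.default_rng(0)
tau0 = 1.0
y = 1e-15 * rng.standard_normal(200_000)   # fractional frequency samples
x = np.cumsum(y) * tau0                    # integrate to time error (seconds)
for n in (1, 10, 100, 1000):
    print(f"tau = {n * tau0:6.0f} s  mod sigma_y = {mod_allan_deviation(x, tau0, n):.2e}")
```

For this white-frequency-noise test data the printed values fall off roughly as \(1/\sqrt{\tau}\); real link data also contain flicker, drift, and diurnal terms, so measured curves generally deviate from this simple slope.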
Using optical communication, satellite-based comparisons were demonstrated with the Time Transfer by Laser Link (T2L2), onboard the Jason-2 satellite [74]. Three T2L2 links were compared with IPPP links [75], with the standard deviation of the time difference well below 100 ps. Promising results have also been obtained using terrestrial free-space optical time and frequency transfer, with cw or coherent pulsed lasers. For both, uncertainties of parts in \(10^{16}\) in a few minutes have been achieved over distances up to tens of kilometers. Synchronization of two clocks 28 km apart to below 1 fs within 100 s has been shown, even at high Doppler velocities of up to \(\pm\)24 m/s and under stable weather conditions [76]. A comparison at 113 km with a modified Allan deviation of \(10^{-19}\) at \(10^{4}\) seconds was also reported [77], the first evidence of the method's compatibility with Low Earth Orbit satellites. Figure 3 shows both results. Figure 3: _Free space optical link fractional frequency instability. Left: Modified Allan deviation over 28 km [76]. M is the ratio \(f_{r}/\Delta f_{r}\), where \(f_{r}\) is the nominal repetition rate and \(\Delta f_{r}\) is the real difference between the repetition rates of the two involved combs. Right: Modified Allan deviation over 113 km [77] (Black circles, well-aligned free-space time–frequency link; blue squares, mis-aligned link; orange triangles, free-running link). The performances of the best optical clock, the I-SOC (Space Optical Clock on the International Space Station) laser link, the I-SOC microwave link and the TDEF of 1 fs are also shown._ Optical fibres offer several key advantages compared to free-space techniques: high isolation from external interference; high bandwidth; and low propagation losses, when compensated by optical amplifiers and regeneration devices, at distances of more than 1000 km. For time and frequency comparisons, three main methods are used: CW light from an ultra-stable laser, without modulation; modulated laser light (amplitude, frequency, or phase modulation); and protocol-based signals, based on digital data transfer. Propagation of optical signals in optical fibres for frequency comparisons offers two main choices: bi-directional fibre links, providing the best performance, and unidirectional fibre links, which are easier to implement on common telecommunication networks. Submarine links are less noisy than terrestrial links [78], as shown in Figure 4. Optical frequency transfer over fully bi-directional links [79] exhibits typical Allan deviations of \(\sim\)10\({}^{-15}\) at 1 s and \(<\)10\({}^{-18}\) for greater than 100 s, over 100 to 1000 km long links. There is no systematic frequency shift reported so far at the level of 10\({}^{-18}\). Conversely, optical frequency transfer over unidirectional links has demonstrated an Allan deviation of \(\sim\)10\({}^{-15}\) at 1 s integration time and of \(7\times 10^{-17}\) for averaging times between 30 s and 200 s [80]. There is no systematic frequency shift reported so far in the range of 10\({}^{-16}\) [81]. Modulation of the optical carrier frequency enables a frequency reference in the radio and microwave domain (10 MHz - 10 GHz) to be transmitted, with typical uncertainty less than 10\({}^{-17}\) at 10\({}^{4}\) s. The latest synchronization experiments report a 300 km free-space link and demonstrate sub-ps capability [82].
Figure 4: _Submarine testbeds, round-trip phase noise [78]. L indicates land links, S submarine links. Red line: submarine 2 \(\times\) 96.4 km link; grey line: measurement noise floor; green line: 2 \(\times\) 150 km fibre along highway; black line: 2 \(\times\) 92 km fibre along highway (other area)._ Time transfer over fibre can be in the radio/microwave domain (10 MHz - 10 GHz) or in the optical domain. In either case, the technique requires the modulation (amplitude, phase or frequency) to be tied to a time scale. The time uncertainty is less than 1 ns, approaching tens of picoseconds, in particular with the White Rabbit Precise Time Protocol [83] and the ELSTAB technique [84]. Transportable optical clocks offer the best immediate prospects to meet the criteria for the redefinition of the SI second with regard to the required accuracy level and geographical coverage. As described above, space microwave techniques need to significantly improve their uncertainty levels. Fibre techniques meet the required uncertainty, but obtaining global coverage requires a large effort and investment. Satellite-based optical comparisons have not yet been demonstrated on a full metrological and operational basis. On the other hand, several TOCs have already reported performance results that meet the redefinition requirements. An accuracy ranging from 10\({}^{-15}\) down to parts in 10\({}^{18}\) has been reported for several \({}^{87}\)Sr TOCs [85-87]. A bosonic \({}^{88}\)Sr TOC achieved \(2\times 10^{-17}\) uncertainty [88]. TOCs based on ions have also been reported: a Ca\({}^{+}\) TOC with a systematic uncertainty of \(1.3\times 10^{-17}\) [89]; an Al\({}^{+}\) standard, with four main biases evaluated at the 10\({}^{-18}\) level [90]; and a Yb\({}^{+}\) standard demonstrated with 10\({}^{-17}\) accuracy [64]. In addition to their role in the redefinition, the TOCs are essential tools for chronometric levelling and some have already been used for this purpose [86, 89, 91]. The time and frequency transfer techniques described above allow us to compare timescales and their scale intervals around the world. We can also compare the scale intervals by evaluating them with respect to locally available accurate frequency standards. However, this assumes that we have knowledge of the geopotential at the clocks' location, since the atomic clocks generate their proper time and the tick rate is affected by the relativistic frequency shift. We should also note that International Atomic Time (TAI) is defined in Resolution 2 of the 26th CGPM (2018) as a realization of Terrestrial Time, which has a reference potential of W\({}_{0}\). Thus, the local geopotential needs to be obtained with respect to W\({}_{0}\), particularly for the calibration of the TAI scale interval. For the modelling of the geopotential, satellite data only provides information valid at a spatial resolution of 200 km or worse. Combining regional information from gravity measurements with the global model as well as the results of the levelling from the nearest reference to the trapped atoms, the gravity shifts of optical clocks in some metrological laboratories are now evaluated with uncertainty at the mid \(10^{-18}\) level or better [92, 93, 94]. ### Time scales Resolution 2 of the 26th CGPM (2018) states that Coordinated Universal Time (UTC), based on TAI, is the only recommended international time reference and provides the basis of civil time in most countries.
Thus, the scale interval of TAI needs to be maintained with respect to optical frequency standards (OFS) for the redefinition of the second. The future TAI should have at least similar or better performance than the current realization of TAI, which is nowadays calibrated mainly by microwave-based primary frequency standards. To this end, more than ten days of regular operation of optical clocks or of a local timescale steered by an optical clock is required in each Circular T reporting period, since the frequency link of local clocks to TAI is made by GNSS or TWSTFT. For the determination of TAI, the BIPM employs an uncertainty of \(\sim 10^{-15}/(t/5)\), where \(t\) is the signal integration time in days [95]. The capabilities for TAI calibration of several specific optical frequency standards have been examined by the CCTF Working Group on Primary and Secondary Frequency Standards (CCTF-WGPSFS), and as a result, eight OFSs, recognized at present as Secondary Frequency Standards (SFS), have contributed to TAI. The first data for TAI calibration with an OFS was obtained in 2014 by an optical \({}^{87}\)Sr lattice clock from SYRTE [44], applied for TAI calibration in 2017, and since mid-2021, at least one SFS has calibrated TAI every month. The BIPM incorporates the data from SFS into the TAI steering with an additional uncertainty \(u_{\mathrm{Srep}}\), which is determined by the uncertainty in the CIPM recommended frequency of the SFS (Table 5). The recent update of CIPM recommended frequencies has reduced \(u_{\mathrm{Srep}}\), leading to an increased total weight of typically more than 10 % for all SRS for the determination of the TAI scale interval. The stated uncertainties from the laboratories, ignoring the recommended uncertainty (\(u_{\mathrm{Srep}}\)) of the SRS, range from \(1.9\times 10^{-16}\) to \(3.3\times 10^{-15}\), limited primarily by dead time and link uncertainties. Until now, the lowest uncertainty in the SFS data submitted to TAI was reached by the NICT-Sr1 in _Circular T_ 408, and IT-Yb1 in _Circular T_ 411. The calibrations provided by all OFSs [31, 41, 44, 96, 97, 98, 99] are so far consistent with those provided by primary frequency standards (see also [https://webtai.bipm.org/database/d_plot.html](https://webtai.bipm.org/database/d_plot.html)). The development of OFSs with high uptimes over the typical reporting intervals of 15 to 30 days, the development of better local oscillators, and advances in frequency transfer are crucial goals to obtain significant improvements in the stability of TAI. UTC is a post-processed timescale determined by the BIPM. For civil time, time and frequency metrology laboratories generate and provide real-time signals equivalent to UTC. These real-time signals are called UTC(\(k\)), denoting a real-time UTC generated at the laboratory "\(k\)". In general, such a UTC(\(k\)) is often employed as a national standard time with the addition of a time offset appropriate to the respective time zone. For the future redefinition of the second, UTC(\(k\)) generated or at least steered by an optical clock is one ancillary condition. UTC(\(k\)) time scales must be continuous, whereas it is unrealistic at this point to operate optical clocks completely without dead time. The operation of multiple optical clocks for redundancy is not yet realized since maintaining multiple optical clocks is a difficult task and the procedure to switch between optical clocks has not yet been studied.
On the other hand, intermittent operation of an optical clock enables generation of a real-time timescale steered by the optical clock [100-104]. Here, the source oscillator is still a microwave oscillator (hydrogen maser), but the scale interval is tuned with respect to an optical clock. In some metrology institutes, a similar generation of UTC(\(k\)) has already been successfully implemented for some time utilizing caesium fountain frequency standards [105-108]. In the future, an all-optical timescale is expected [109], particularly for improvement of the short-term stability. Here, a CW laser, stabilized to a stable optical cavity, would play the role of the source oscillator. Considerable progress in mode-hop-free operation of CW lasers has been made in the last decade.

## 8 Fulfilment levels of mandatory criteria - progress statuses for ancillary conditions

Details on mandatory criteria and ancillary conditions are presented in Sections 8.1 to 8.3 for OFS, TF transfer and the acceptability of a new definition. A synthesis of the fulfilment levels of mandatory criteria in 2022 is shown in Figure 5. Fulfilment regions have been defined, from very low fulfilment levels (\(<\)30 %, region in red) to satisfactory fulfilment levels (90-100 % and above, region in green). The vertical dashed blue line defines the threshold above which the criteria can be considered as fulfilled. While for certain criteria the fulfilment seems almost achieved, for others the fulfilment is more challenging. Due to the considerable number of optical frequency standards under development, good progress has been made on OFS performance (Criteria I.1 and I.3, with fulfilment levels close to 50 % and 100 %, respectively) and on their contributions to TAI (Criterion I.4, fulfilment level of 30-50 %). Regardless of which redefinition option is chosen, the realizations of the definition will be widely accessible and their accuracy will likely continue to improve in the future with further developments on OFS (Criteria III.1 and III.2). However, the challenges associated with limited resources for developing multiple standards in one institute (along with limitations in long-distance time transfer) have led to a low fulfilment level (\(<30\) %) for the optical frequency standards' comparison Criterion I.2. Criterion II.2, related to knowledge of the geopotential, is fulfilled in the majority of NMIs operating an OFS. For Criterion II.1, a sustainable technique for OFS comparison at the required uncertainty level is more challenging. Over intracontinental scales (baselines of about 1000 km), the requirement is fulfilled by optical fibre links, although regular comparison campaigns still require a significant effort.

### Criteria and conditions related to optical frequency standards and their contribution to time scales

This section contains a detailed description of the criteria and the estimation of their fulfilment level in 2022.

**Mandatory criterion I.1 - Accuracy budgets of optical frequency standards**

I.1.a - At least three optical frequency standards based on the same reference transition, in different institutes, have demonstrated evaluated relative frequency uncertainties \(\lesssim 2\times 10^{-18}\) based on comprehensive, comparable and published accuracy budgets.
Fulfilment level: 20-40 % [91, 31, 34]

I.1.b - At least three frequency evaluations of optical frequency standards based on different reference transitions, either in the same institute or different institutes, have demonstrated evaluated uncertainties \(\lesssim 2\times 10^{-18}\) based on comprehensive, comparable and published accuracy budgets.

Figure 5: _Fulfilment levels of mandatory criteria in 2022_

Fulfilment level: 80-100 % [30-32].

Overall fulfilment level of Criterion I.1: 30-50 %

**Mandatory criterion I.2 - Validation of optical frequency standard accuracy budgets - Frequency ratios**

I.2.a - Unit ratios (frequency comparison between standards with the same clock transition): at least three measurements between OFS in different institutes in agreement with an overall uncertainty of the comparison \(\Delta\)v/v \(\lesssim\) 5 \(\times\) 10\({}^{-18}\) (either by transportable clocks or advanced links). Applicable to at least one radiation of I.1.

Fulfilment level: 0-20 % [91, 31, 34]. Strictly speaking, the reported measurements of unit ratios are not between different institutes and should not count in this fulfilment level. Nevertheless, a fulfilment level of 0-20 % has been assigned based on these in-house comparisons with uncertainties significantly lower than 5 \(\times\) 10\({}^{-18}\), which can be considered as the first step in the right direction.

I.2.b - Non-unit ratios (frequency comparison between standards with different clock transitions): at least five measurements between standards among I.1 or other, each ratio measured at least twice by different institutes in agreement with an overall uncertainty of the comparison \(\Delta\)v/v \(<\) 5 \(\times\) 10\({}^{-18}\) (either by direct comparisons, transportable clocks or advanced links).

Fulfilment level: 0-20 % [43]. Again, this measurement alone is not valid in terms of the criterion, which demands ratios measured at least twice by independent institutes. However, it is the first measurement at about the required uncertainty level, and it is considered the first step towards the fulfilment of this index.

Overall fulfilment level for Criterion I.2: \(<\) 30 %

**Mandatory criterion I.3 - Continuity with the definition based on Cs**

There are at least three independent frequency evaluations of the optical frequency transitions utilized by the standards in I.1) with TAI or with three independent Cs primary frequency standards (in different or the same institutes), possibly via optical frequency ratio measurements, where the measurements are limited essentially by TAI or by the uncertainty of these Cs frequency standards (\(\Delta\)v/v \(<\) 3 \(\times\) 10\({}^{-16}\)).

Fulfilment level: 90-100 % [44, 45, 46, 97, 98, 104, 110, 111, 112]

**Mandatory criterion I.4 - Regular contributions of optical frequency standards to TAI (as secondary representations of the second)**

At least three state-of-the-art calibrations of TAI (uncertainty \(\lesssim\) 2 \(\times\) 10\({}^{-16}\) without counting the recommended uncertainty of the secondary representation of the second \(u_{\text{Srep}}\)) each month from a set of at least five Optical Frequency Standards for at least one year. Check that there is no degradation of TAI if its calibrations were done by OFS considered as primary standards and Cs frequency standards considered as secondary standards.
Fulfilment level: 30-50 % [see [https://www.bipm.org/en/time-ftp/circular-t](https://www.bipm.org/en/time-ftp/circular-t), and [https://webtai.bipm.org/database/show](https://webtai.bipm.org/database/show) psfs.htm, [https://webtai.bipm.org/database/d](https://webtai.bipm.org/database/d) plot.html] **Ancillary condition I.5 - High reliability of OFS** Reliable continuous operation capability of OFS, in a laboratory environment, with the appropriate level of uncertainty. Progress status: Typical uptimes of OFS over measurement durations \(>\) 10d currently cover a wide range from a few percent to 90 % [44, 112, 113], and [https://www.bipm.org/en/time-ftp/circular-t](https://www.bipm.org/en/time-ftp/circular-t) **Ancillary condition I.6 - Regular contributions of optical frequency standards to UTC(\(k\))** Progress status : Preliminary tests of UTC(\(k\)) steered by an OFS [100 - 103, 109] ### Criteria and conditions related to TF links for comparison or dissemination **Mandatory criterion II.1 - Availability of sustainable techniques for optical frequency standard comparisons** Availability and sustainability of transportable clocks or TF links with uncertainties \(<5\times 10^{-18}\) for frequency comparisons between at least NMIs operating optical frequency standards of I.1), on a national / intracontinental basis (baseline up to about 1000 km). Capability of repeated uncertainty estimations of these links. * Fulfilment level: 50-70 % [91, 114, 115] **Mandatory criterion II.2 - Knowledge of the local geopotential with an adequate uncertainty level** Knowledge of geopotential differences for NMIs operating OFS of I.2) to be consistent with the uncertainty budget of a frequency comparison between OFS using advanced links, i.e. including the uncertainty budget of the two OFS and of the link. Knowledge of local geopotential for NMIs operating OFS of I.4) with an uncertainty corresponding to a frequency uncertainty \(\lesssim 10^{-17}\), for the calibration of TAI. * Fulfilment level: 70-90 % [86, 92, 93, 94, 38] and [https://www.bipm.org/en/time-ftp/data](https://www.bipm.org/en/time-ftp/data) **Ancillary condition II.3 - High reliability of ultra high stability TF links** On-demand continuous operation capability of TF links over sufficient durations that do not limit OFS comparisons and their regular contributions to TAI. Progress Status: a few months continuous operation of fibre links for intracontinental comparisons [114, 116] but no existing link allowing OFS intercontinental comparisons without degradation. ### Criteria and conditions related to the acceptability of the new definition **Mandatory criterion III.1 - Definition allowing future more accurate realizations** The new definition must be long lasting. On the short term (just after the redefinition), it must ensure an improvement by 10/100 of its realization with OFS, i.e. reaching \(10^{-17}\)/\(10^{-18}\) relative frequency uncertainty. On the longer term, it must have the potential for further improvement of the realization of \(10^{-18}\) and beyond in order to avoid any early obsolescence of the definition. 
* Fulfilment level: 100 % (To be confirmed, based on the chosen option for the redefinition, but no identified fundamental effect limiting OFS accuracy at \(10^{-18}\) level for all species in I.1, and some newer systems have the potential to go beyond \(10^{-18}\)) **Mandatory criterion III.2 - Access to the realization of the new definition** * III.2.a Realization / "mise en pratique" of the new definition must be easily understandable with a clear uncertainty evaluation process; * Fulfilment level: 0 % (No existing document; pending the choice of the redefinition option) * Access for NMIs and high accuracy users to primary or secondary realizations of the new definition; * Fulfilment level: 100 % (To be confirmed, based on the chosen option for the redefinition, but primary or secondary representations of the SI second will continue to be accessible via metrology institutes or TAI) * Cs frequency standards ensure a secondary realization of the new definition. * Fulfilment level: 100 % (existing TAI architecture will be maintained at current level or better and Cs will be a secondary representation of the second) * Overall Fulfilment level for Criterion III.2: 70-90 % **Ancillary condition III.3 - Continuous improvement of the realization and of time scales after redefinition** Commitment of NMIs to make the best effort to: * improve and operate optical frequency standards that provide primary or secondary realizations of the new definition (reliable / continuous operation, regular contributions to TAI,...); Progress status: Several OFS are already in operation and used by the CCL-CCTF Working Group on Frequency Standards (CCL-CCTF-WGFS) to calculate the Recommended values of standard frequencies 2021 [10] * maintain the operation of Cs fountain standards over the appropriate duration; Progress status: 12 Cs fountains in operation [117- 125] * development of new OFS; Progress status: Several other atomic species are being investigated as potential candidates for the next generation, for example \({}^{229}\)Th\({}^{+}\), Lu\({}^{+}\), Cd, and several highly charged ions. The most recent references can be found for example in [Proceedings of the annual IEEE IFCS [https://ieee-uffc.org/symposia/ifcs](https://ieee-uffc.org/symposia/ifcs), and EFTF conferences [https://www.efft.org/](https://www.efft.org/)] **Ancillary condition III.4 - Availability of commercial optical frequency standards** Progress status: No available commercial OFS **Ancillary condition III.5 - Improved quality of the dissemination towards users** Progress status of TF links (GNSS, TWSTFT, Fibre / Internet) for the dissemination of the definition towards users: * 10\({}^{-16}\) for satellite microwave techniques (GNSS, TWSTFT); 10\({}^{-20}\) level for fibre links [126] * Time accuracy: 1 ns for satellite microwave techniques (GNSS, TWSTFT); 50 ps for fibre links [127] ## 9 Schedule, conclusions, and perspectives The possible redefinition scenarios depend on capabilities of optical frequency standards and their envisaged evolution, considering their performance, their readiness for sustainable contributions to the realization of time scales, especially TAI, and also their potential for commercial availability, and space qualification. A roadmap also needs to address TF transfer techniques required for the comparison of atomic clocks, for the construction of international time scales, and for the dissemination of reference signals to users, with an adequate uncertainty level. 
Depending on the achievements and the development progress, the CCTF envisaged three possible schedule options for the redefinition (Figure 6). It appeared clear that a redefinition at the 28th meeting of the CGPM (2026) was unrealistic, since today there is no consensus on the preferred option and there is still important work to do to fulfil all mandatory criteria. The 28th CGPM (2026) could validate a roadmap towards a redefinition in 2030 if, in 2026, there is a consensus on the redefinition option to be chosen and if the work to fulfil mandatory criteria is likely to be achievable by 2030. If a redefinition is not possible in 2030, it will have to be postponed until the meeting of the CGPM to be held in 2034 or the following one. With this third scenario, however, the continued operation of Cs fountain primary frequency standards will be required until the late 2030s.

Figure 6: _Scenarios for the roadmap._

The redefinition will be the occasion to further educate stakeholders on the concept of metrological traceability and the best practices for accuracy and stability measurements and their specification. The CCTF will set up a subgroup to address this particular matter and educate the public about the redefinition. In November 2022, the 27th CGPM approved Resolution 5 [128] corresponding to the CCTF roadmap towards the redefinition of the second as presented in this paper, with a preferred scenario leading to a redefinition at the 29th CGPM (2030). This scenario is realistic, even if there is still considerable work to converge on a preferred option and to fulfil all mandatory criteria by pushing the limits of optical frequency standards and T/F transfer. All these efforts will be determining factors in reaching the goal of a new definition of the SI second with an improved quality of the _mise en pratique_, in order to serve current and future needs in metrology and to foster scientific and technological applications at the highest accuracy.

## 10 Authors' contributions

This paper is based on the work of the CCTF Task Force on the "Roadmap to the redefinition of the second". The Task Force was chaired by N. Dimarcq and P. Tavella, and formed by three subgroups, whose members are listed below.

\begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Subgroup & Chair & Executive secretary & Members \\ \hline A & M. Gertsvolf, G. Mileti & F. Meynadier, I. Sesia, M. Wouters & J. Bartholomew, P. Defraigne, E. A. Donley, P. O. Hedekvist, I. Sesia, M. Wouters \\ \hline B & S. Bize, C. W. Oates, E. Peik, I. Sesia, T. Ido & G. Petit, P. Dube, F. Fang, T. Ido, F. Levi, J. Lodewyck, H. S. Margolis, D. Newell, S. Slyusarev, S. Weyers, J.-P. Uzan, M. Yasuda, D.-H. Yu \\ \hline C & D. Calonico, T. Ido & G. Panfilo, P. Defraigne, E. A. Donley, M. Fujieda, M. Gertsvolf, T. Ido & P. Defraigne, E. A. Donley, M. Fujieda, M. Gertsvolf, T. Hanado, J. Hanssen, H. S. Margolis, G. Petit, P.-E. Pottie, C. Rieck, H. Schnatz, A. Malimon, M. Wouters, N. Ashby \\ \hline \end{tabular}
2303.09381
Multi-modal Differentiable Unsupervised Feature Selection
Multi-modal high throughput biological data presents a great scientific opportunity and a significant computational challenge. In multi-modal measurements, every sample is observed simultaneously by two or more sets of sensors. In such settings, many observed variables in both modalities are often nuisance and do not carry information about the phenomenon of interest. Here, we propose a multi-modal unsupervised feature selection framework: identifying informative variables based on coupled high-dimensional measurements. Our method is designed to identify features associated with two types of latent low-dimensional structures: (i) shared structures that govern the observations in both modalities and (ii) differential structures that appear in only one modality. To that end, we propose two Laplacian-based scoring operators. We incorporate the scores with differentiable gates that mask nuisance features and enhance the accuracy of the structure captured by the graph Laplacian. The performance of the new scheme is illustrated using synthetic and real datasets, including an extended biological application to single-cell multi-omics.
Junchen Yang, Ofir Lindenbaum, Yuval Kluger, Ariel Jaffe
2023-03-16T15:11:17Z
http://arxiv.org/abs/2303.09381v1
# Multi-modal Differentiable Unsupervised Feature Selection ###### Abstract Multi-modal high throughput biological data presents a great scientific opportunity and a significant computational challenge. In multi-modal measurements, every sample is observed simultaneously by two or more sets of sensors. In such settings, many observed variables in both modalities are often nuisance and do not carry information about the phenomenon of interest. Here, we propose a multi-modal unsupervised feature selection framework: identifying informative variables based on coupled high-dimensional measurements. Our method is designed to identify features associated with two types of latent low-dimensional structures: (i) shared structures that govern the observations in both modalities and (ii) differential structures that appear in only one modality. To that end, we propose two Laplacian-based scoring operators. We incorporate the scores with differentiable gates that mask nuisance features and enhance the accuracy of the structure captured by the graph Laplacian. The performance of the new scheme is illustrated using synthetic and real datasets, including an extended biological application to single-cell multi-omics. Introduction In an effort to study biological systems, researchers are developing cutting-edge techniques that measure up to tens of thousands of variables at single-cell resolution. The complexity of such systems requires collecting multi-modal measurements to understand the interplay between different biological processes. Examples of such multi-modal measurements include SHARE-seq [1], DBiT-seq [2], CITE-seq [3], etc., which have provided biological insights and advancements in applications such as transcription factor characterization [4], cell type identification in human hippocampus [5], and immune cell profiling [6]. Multi-modal learning is a powerful tool widely used across multiple disciplines to extract latent information from high-dimensional measurements [7, 8]. Humans use complementary senses when attempting to "estimate" spoken words or sentences [9]. For example, lip movements can help us distinguish between two syllables that sound similar. The same intuition has inspired statisticians and machine learning researchers to develop learning techniques that exploit information captured simultaneously by complementary measurement devices. Due to their applicability in multiple domains, there has been a growing interest in multi-modal approaches. Algorithms such as Contrastive Language-Image Pre-training (CLIP) [10], and Audioclip [11] have pushed the performance boundaries of machine learning for image, text, audio, analysis, and synthesis. The multi-modal data fusion task dates back to [12], which proposed the celebrated Canonical Correlation Analysis (CCA). CCA has many extensions [13, 14], and applications in diverse scientific domains [15, 16]. Despite their tremendous success, classical or advanced multi-modal schemes are often unsuitable for analyzing biological data. The large number of nuisance variables, which often exceeds the number of measurements, often causes correlation-based methods to overfit. To attenuate the influence of nuisance or noisy features, several authors proposed unsupervised feature selection (UFS) schemes [17]. UFS seeks small subsets of informative variables in order to improve downstream analysis tasks, such as clustering or manifold learning. 
Empirical results demonstrate that informative features are often smooth with respect to some latent structure [18]. In practice, the smoothness of features can be evaluated based on how slowly they vary with respect to a graph [19]. Follow-up works exploited this idea to identify informative features [20, 21]. An alternative paradigm for UFS seeks subsets of features that can be used to reconstruct the entire data effectively [22]. While most fusion methods focus on extracting information shared between modalities, we propose a multi-modal UFS framework to identify features associated both with structures that appear in both modalities, and structures that are _modality-specific_, and appear in only one modality. To capture the shared structure, we construct a symmetric shared graph Laplacian operator that enhances the shared geometry across modalities. We further propose differential graph operators that capture smooth structures that are not shared with the other modality. To perform multi-modal feature selection, we incorporate differentiable gates [23, 24] with the _shared_ and _modality-specific_ graph Laplacian scoring functions. This leads to a differentiable UFS scheme that attenuates the influence of nuisance features during training and computes a more accurate Laplacian matrix [25]. Our contributions are four folds: (i) Develop a _shared_ and _modality-specific_ Laplacian scoring operators. (ii) Motivate our operators using a product of manifolds model. (iii) develop and implement a differentiable framework for multi-modal UFS. (iv) Evaluate the merits and limitations of our approach with synthetic and real data and compare it to existing schemes. ## 2 Problem setting and preliminaries We are given two data matrices \(\mathbf{X}\in\mathbb{R}^{n\times d},\mathbf{Y}\in\mathbb{R}^{n\times m}\) whose rows contain \(n\) observations captured simultaneously in two modalities. The two sets of observations can be, for example, two arrays of sensors, cameras with different angles, etc. We are interested in processing modalities with bijective correspondences, which implies that there is a registration between the observations in both modalities. Though the observations are high-dimensional, we assume that there are a small number of parameters governing the physical processes that underlies the data. These parameters can be continuous such as in a developmental process, or discrete - for example, when the observations can be characterized by clustering. However, the latent structure in both modalities may not be identical. For example, the two sets of observations may be generated by sets of sensors with different resolutions or sensitivity. For illustration, consider the observations shown in Fig. 1 (left). Both modalities follow a very similar tree structure. The bottom tree, however, has an additional bifurcating point that does not appear in the upper tree (green points). Thus, we assume the latent parameters can be partitioned into two subsets. The first component denoted \(\mathbf{\theta}_{s}\), captures the structures shared by both modalities. The second component, denoted \(\mathbf{\theta}_{x}\) for modality \(\mathbf{X}\), and \(\mathbf{\theta}_{y}\) for modality \(\mathbf{Y}\), captures the modality-specific structures that only appear in one set of observations. For example, the additional branch in the bottom tree (modality \(\mathbf{Y}\)) in Fig. 1 is governed by a parameter in \(\mathbf{\theta}_{y}\). 
Thus, the observations \(\mathbf{X}\) and \(\mathbf{Y}\) are nonlinear transformations of \(\mathbf{\theta}_{s},\mathbf{\theta}_{x}\) and \(\mathbf{\theta}_{s},\mathbf{\theta}_{y}\), respectively. Many biological data modalities are high dimensional and contain noisy features, which hinders the discovery of the underlying shared or modality-specific structures. Here, our goal is to identify groups of features associated with the shared structures (e.g., the groups of features that are smooth on the shared bifurcated tree in Fig. 1) and groups of features associated with the modality-specific structures \(\mathbf{\theta}_{x}\) and \(\mathbf{\theta}_{y}\) (e.g., the features that are smooth with respect to the additional branch \((\mathbf{\theta}_{y})\) of modality \(\mathbf{Y}\) in Fig. 1). To achieve this goal, we compute two graphs that correspond to the two modalities. We use a spectral method to uncover the shared and graph-specific structures and apply a feature selection method to detect variables relevant to these structures. To better understand our approach, we first introduce some preliminaries about graph representation in Sec. 2.1, and discuss related work on feature selection in Sec. 2.2.

### The graph Laplacian and Laplacian score

A common assumption when analyzing high-dimensional datasets is that their structure lies on a low dimensional manifold in the high dimensional space [26, 27]. Methods for manifold learning are often based on a graph that captures the affinities between data points. Let \(\mathbf{x}^{(i)},\mathbf{y}^{(i)}\) denote the \(i\)-th observation in the \(\mathbf{X}\) and \(\mathbf{Y}\) modalities and let \(\mathbf{K}_{x},\mathbf{K}_{y}\) be, respectively, their affinity matrices whose elements are computed by the following Gaussian kernel functions. \[\left(\mathbf{K}_{x}\right)_{i,j}=\exp\Big(-\frac{\|\mathbf{x}^{(i)}-\mathbf{x}^{(j)}\|^{2}}{2\sigma_{x}^{2}}\Big),\qquad\left(\mathbf{K}_{y}\right)_{i,j}=\exp\Big(-\frac{\|\mathbf{y}^{(i)}-\mathbf{y}^{(j)}\|^{2}}{2\sigma_{y}^{2}}\Big),\] where \(\sigma_{x},\sigma_{y}\) are user-defined bandwidths that control the decay of each Gaussian kernel. Intuitively, the affinities decay exponentially with the distances between samples, thus capturing the local neighborhood structure in the high-dimensional space.

Figure 1: Overview of the goal: discovering features associated with shared and modality specific latent structures

We compute the normalized Laplacian matrix by \(\mathbf{L}_{x}=\mathbf{D}_{x}^{-\frac{1}{2}}\mathbf{K}_{x}\mathbf{D}_{x}^{-\frac{1}{2}}\), where \(\mathbf{D}_{x}\) is a diagonal matrix of row sums of \(\mathbf{K}_{x}\). Similarly, \(\mathbf{L}_{y}\) is computed for modality \(\mathbf{Y}\). An important property of the Laplacian matrix is that its eigenvectors corresponding to large eigenvalues reflect the underlying geometry of the data. The Laplacian eigenvectors are used for many applications including data embeddings [28], clustering [29], and feature selection [19]. For the latter, a popular metric for unsupervised identification of informative features is the Laplacian Score (LS) [19], \[\mathbf{f}^{T}\mathbf{L}_{x}\mathbf{f}=\sum_{i=1}^{n}\lambda_{i}(\mathbf{f}^{T}\mathbf{u}_{i})^{2}, \tag{1}\] where \(\mathbf{L}_{x}=\sum_{i=1}^{n}\lambda_{i}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\) is the eigendecomposition of \(\mathbf{L}_{x}\) and \(\mathbf{f}\) is the normalized feature vector.
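As a concrete illustration of these constructions, the kernel, the normalized Laplacian, and the score in Eq. (1) can be computed in a few lines. The sketch below is not the authors' implementation; the bandwidth value and the exact feature normalization are placeholder choices.

```python
# Minimal NumPy sketch of the affinity matrix, normalized Laplacian, and Laplacian score.
import numpy as np

def normalized_laplacian(Z, sigma):
    """Gaussian-kernel affinity and its symmetric normalization L = D^{-1/2} K D^{-1/2}."""
    sq_dists = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return K * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def laplacian_score(L, f):
    """Score of Eq. (1) for a single feature; larger values indicate smoother features."""
    f = f - f.mean()
    f = f / (np.linalg.norm(f) + 1e-12)   # one possible normalization of the feature vector
    return float(f @ L @ f)

# toy usage with a random data matrix standing in for one modality
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
L_x = normalized_laplacian(X, sigma=1.0)
scores = np.array([laplacian_score(L_x, X[:, j]) for j in range(X.shape[1])])
```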
Intuitively, when \(\mathbf{f}\) varies slowly with respect to the underlying structure of \(\mathbf{L}_{x}\), it will have a significant component projected onto the subspace of its top eigenvectors, and a higher score. ### Differentiable Unsupervised Feature Selection A key limitation of the Laplacian score stems from its underlying assumption that the Laplacian matrix \(\mathbf{L}_{x}\) accurately reflects the latent structure of the data. This assumption, however, may not be valid in the presence of many noisy features. In such cases, the top eigenvectors of \(\mathbf{L}_{x}\) may be heavily influenced by noise and would not capture the underlying structure accurately. A recent work [25] addresses this problem by developing Differentiable Unsupervised Feature Selection (DUFS), a framework that estimates the Laplacian matrix while simultaneously selecting informative features using Laplacian scores. Specifically, DUFS computes a binary vector \(\mathbf{s}\in\{0,1\}^{d}\) that indicates which features are kept (\(s_{j}=1\)) and which features are not (\(s_{j}=0\)). Let \(\Delta(\mathbf{s})\) denote a diagonal matrix with \(\mathbf{s}\) on the diagonal. At each iteration of DUFS, the Laplacian is computed based on \(\mathbf{\tilde{X}}=\mathbf{X}\Delta(\mathbf{s})\), while simultaneously updating \(\mathbf{s}\) by optimizing over the following loss function. \[\mathcal{L}=-\frac{1}{n}\text{Tr}[\mathbf{\tilde{X}}^{T}\mathbf{L}_{\tilde{x}}\mathbf{ \tilde{X}}]+\lambda\|\mathbf{s}\|_{0}, \tag{2}\] where \(\text{Tr}[]\) denotes the matrix trace. The first term equals the sum of Laplacian Scores across all features normalized by the total number of samples \(n\) in a training batch. The second term is a \(\ell_{0}\) regularizer that imposes sparsity to the number of selected features, with \(\lambda\) being a tunable parameter that controls the sparsity level. The output of DUFS is a list of a small number of selected features, and the Laplacian matrix \(\mathbf{L}_{\tilde{x}}\) learned from them. However, due to the discrete nature of the \(\ell_{0}\) regularizer, the standard discrete indicator vector \(\mathbf{s}\in\{0,1\}^{D}\) will make objective in Eq. (2) not differentiable and finding the optimal solution intractable. Following, [23], one can relax the \(\ell_{0}\) norm to a probabilistic differentiable counterpart, by replacing the binary indicator vector \(\mathbf{s}\) with a relaxed Bernoulli vector \(\mathbf{z}\). Specifically, \(\mathbf{z}\) is a continuous Gaussian reparametrization of the discrete random variables, termed Stochastic Gates. It is defined for each feature \(i\): \[z_{i}=\max(0,\min(1,0.5+\mu_{i}+\epsilon_{i})),\quad\epsilon_{i}\sim\mathcal{N} (0,\sigma^{2}) \tag{3}\] where \(\mu_{i}\) is a learnable parameter, and \(\sigma\) is fixed throughout training. The loss function in Eq. (2) can now be reformulated as follows, which is the final objective of the DUFS: \[\mathcal{L}=-\frac{1}{n}\text{Tr}[\mathbf{\tilde{X}}^{T}\mathbf{L}_{\bar{x}}\mathbf{\tilde {X}}]+\lambda\|\mathbf{z}\|_{0}. \tag{4}\] ## 3 Method We now derive our approach for unsupervised feature selection in multi-modal settings. Our method is designed to capture two types of features: (i) Features associated with latent structures that are _shared_ between two modalities. (ii) Features associated with _differential latent structures_, that appear in only one modality. In Sec. 3.1 and 3.2, we derive two operators designed to capture shared and differential structures, respectively. 
To motivate our approach and illustrate the difference between shared and differential structures, we specifically address two examples: (i) shared and differential clusters and (ii) product of manifolds. We use the proposed operators in Sec. 3.3 to derive mmDUFS. ### The shared structure operator To motivate our approach, let us consider the artificial example illustrated in Fig. 2. The lower figure in the left panel shows the observations in modality \(\mathbf{Y}\), which contains samples from a mixture of three distinct Gaussians. The upper figure shows modality \(\mathbf{X}\), where one of the three clusters is partitioned again into three (less distinct) clusters. It is instructive to study the _ideal setting_ where we make the following assumptions: (i) The largest distance between two nodes within a cluster, denoted \(d_{\text{within}}\) is much smaller than the smallest distance between pairs of nodes of two clusters, denoted \(d_{\text{between}}\). (ii) The bandwidth \(\sigma_{x},\sigma_{y}\) is chosen such that \(d_{\text{within}}\ll\sigma_{x},\sigma_{y}\ll d_{\text{between}}\). In this setting, the three Gaussians constitute three main clusters, with no connections between pairs of nodes of different clusters and similar weights between pairs of nodes within clusters. Thus, the leading eigenvectors of \(\mathbf{L}_{y}\) span the subspace of the three _indicator vectors_. That is vectors that contain the square root of the degree of a node in a cluster and a zero value outside the cluster. See [29] and illustration in Fig. 2. The matrix \(\mathbf{L}_{x}\) has two extra significant eigenvectors that span the separation of the third cluster, which appears only in \(\mathbf{X}\). We denote by \(\mathbf{V}_{s}\) a matrix that contains the indicator vectors of the three partitions that appear in \(\mathbf{X}\) and \(\mathbf{Y}\) and by \(\mathbf{V}_{x}\) a matrix that contains the partitions that appear only in \(\mathbf{X}\). In our ideal setting, the two Laplacian matrices \(\mathbf{L}_{x},\mathbf{L}_{y}\) are equal to \[\mathbf{L}_{x}\approx\mathbf{V}_{s}\mathbf{V}_{s}^{T}+\mathbf{V}_{x}\mathbf{V}_{x}^{T},\qquad\mathbf{L }_{y}\approx\mathbf{V}_{s}\mathbf{V}_{s}^{T}. \tag{5}\] To capture _shared_ latent structures we compute the following shared operator \(\mathbf{P}_{\text{shared}}\), \[\mathbf{P}_{\text{shared}}=\mathbf{L}_{x}\mathbf{L}_{y}+\mathbf{L}_{y}\mathbf{L}_{x}. \tag{6}\] For the cluster setting, the orthogonality between the matrices \(\mathbf{V}_{s},\mathbf{V}_{x}\) implies \(\mathbf{P}_{\text{shared}}\approx 2\mathbf{V}_{s}\mathbf{V}_{s}^{T}\). The symmetric product of the two Laplacians captures clusters that appear in both modalities while removing modality-specific clusters, see right panel of Fig. 2. We note that a similar operator to Eq. (6) is proposed in [30] for computing low-dimensional representations. Here, we combine our operator with DUFS to develop a multi-modal feature selection pipeline. We illustrate the usefulness of the shared operator for the product of manifold setting. Product of manifolds.Let \(\mathcal{M}_{a},\mathcal{M}_{b}\) and \(\mathcal{M}_{s}\) be three low-dimensional manifolds embedded in \(\mathbb{R}^{n}\), which are smooth transformations of three sets of latent variables \(\mathbf{\theta}_{a},\mathbf{\theta}_{b}\) and \(\mathbf{\theta}_{s}\). 
To further motivate our approach, consider the case where modalities \(\mathbf{X}\) and \(\mathbf{Y}\) each contains observations from the products \(\mathcal{M}_{y},\mathcal{M}_{x}\) given by, \[\mathcal{M}_{y}=\mathcal{M}_{s}\times\mathcal{M}_{a},\qquad\mathcal{M}_{x}= \mathcal{M}_{s}\times\mathcal{M}_{b}.\] Figure 2: Visualization of the eigenvectors and the affinity matrix of the proposed operators on an artificial cluster example. Left: Visualization of the clusters. Middle: Leading eigenvectors of \(\mathbf{L}_{x}\) and \(\mathbf{L}_{y}\). Right: Affinity matrices of the proposed shared graph operator (top) and the differential graph operator (bottom) with/without the presence of noisy features. Note that the dependence on \(\mathbf{\theta}_{s}\) is shared between \(\mathcal{M}_{x},\mathcal{M}_{y}\), while the dependence on \(\mathbf{\theta}_{a},\mathbf{\theta}_{b}\) is modality-specific. In a product of manifolds \(\mathcal{M}_{x}=\mathcal{M}_{s}\times\mathcal{M}_{b}\), every point \(\mathbf{x}\in\mathcal{M}_{x}\) is associated with two points \(\mathbf{x}_{s}\in\mathcal{M}_{s}\) and \(\mathbf{x}_{b}\in\mathcal{M}_{b}\). Thus, we can define projection operators \(\pi^{x}_{b}(\mathbf{x}),\pi^{x}_{s}(\mathbf{x})\) that map a point \(\mathbf{x}\) in \(\mathcal{M}_{x}\) to points in \(\mathcal{M}_{b},\mathcal{M}_{s}\), respectively. In addition, for every function \(f^{b}:\mathcal{M}_{b}\to\mathbb{R}\) we define its extension to the product manifold \(\mathcal{M}_{x}\) by \[(f^{b}\circ\pi^{x}_{b})(\mathbf{x})=f^{b}(\pi^{x}_{b}(\mathbf{x})).\] An important property of a product \(\mathcal{M}_{x}\) is that the eigenfunctions \(f^{x}_{l,m}\) of the Laplace Beltrami operator are equal to the pointwise product of the eigenfunctions of \(\mathcal{M}_{b},\mathcal{M}_{s}\), extended to \(\mathcal{M}_{x}\). \[f^{x}_{l,m}=(f^{s}_{l}\circ\pi^{x}_{s})(f^{b}_{m}\circ\pi^{x}_{b}). \tag{7}\] We refer to [31] for a detailed description of the properties of the product of manifolds. A simple example of a product of manifolds is a 2D rectangle area \((\mathbf{\theta}_{s},\mathbf{\theta}_{b})\in[0,l_{s}]\times[0,l_{b}]\). the projection \(\pi^{x}_{s}\) yields the first coordinate, while \(\pi^{x}_{b}\) yields the second. The eigenfunctions of the product with Neumann boundary conditions are equal to, \[f_{l,m}=\cos(\pi l\mathbf{\theta}_{s}/l_{s})\cos(\pi m\mathbf{\theta}_{b}/l_{b}). \tag{8}\] **Observations generated uniformly at random over the product of manifolds.** Here, we assume that the observations in the two modalities are generated by random and independent uniformly distributed samples over \(\mathcal{M}_{x},\mathcal{M}_{y}\). Let \(\mathbf{\phi}^{x}_{l,m}(\mathbf{x}_{i}),\mathbf{\phi}^{y}_{l,k}(\mathbf{y}_{i})\) denote the eigenvectors of \(\mathbf{L}_{x},\mathbf{L}_{y}\) evaluated at \(\mathbf{x}_{i},\mathbf{y}_{i}\) respectively. In the asymptotic regime where the number of points \(n\to\infty\), the eigenvectors converge to the eigenfunctions as characterized in Eq. (7). \[\mathbf{\phi}^{x}_{l,m}(\mathbf{x}_{i}) =\mathbf{\phi}^{s}_{l}(\pi^{x}_{s}(\mathbf{x}_{i}))\mathbf{\phi}^{b}_{m}(\pi^ {x}_{b}(\mathbf{x}_{i}))\] \[\mathbf{\phi}^{y}_{l,k}(\mathbf{y}_{i}) =\mathbf{\phi}^{s}_{l}(\pi^{y}_{s}(\mathbf{y}_{i}))\mathbf{\phi}^{a}_{k}(\pi^ {y}_{a}(\mathbf{y}_{i})). \tag{9}\] Details about the definition and rate of convergence can be found, for example, in [32, 33], and reference therein. 
It is instructive to consider the ideal case, where due to their dependence on the independent projections \(\pi^{x}_{b}\) and \(\pi^{x}_{a}\), the eigenvectors \(\mathbf{\phi}^{x}_{l,m},\mathbf{\phi}^{y}_{l,k}\) satisfy the following orthogonality property, \[(\mathbf{\phi}^{x}_{l,m})^{T}\mathbf{\phi}^{y}_{l^{\prime},k}=\begin{cases}1&l=l^{ \prime},m=k=0\\ 0&o.w.\end{cases} \tag{10}\] It follows that the operator \(\mathbf{P}_{\text{shared}}\) is equal to, \[\mathbf{P}_{\text{shared}}=\mathbf{L}_{x}\mathbf{L}_{y}+\mathbf{L}_{y}\mathbf{L}_{x}=\sum_{l}(\bm {\phi}^{s}_{l}\otimes\mathbf{\phi}^{a}_{0})(\mathbf{\phi}^{s}_{l}\otimes\mathbf{\phi}^{b} _{0})^{T}, \tag{11}\] where \(\otimes\) denotes the Hadamard product. The vectors \(\mathbf{\phi}_{0}^{a},\mathbf{\phi}_{0}^{b}\) constitute the degree of the different observations and have little effect on the outcome. Thus, the leading eigenvectors of \(\mathbf{P}_{\rm shared}\) are associated with the shared component and not the differential components in the product of manifolds. Below, we illustrate this phenomenon with two examples. Example 1: points in a 3D cube.Consider points generated uniformly at random over a 3D cube of dimensions \([0,l_{s}]\times[0,l_{a}]\times[0,l_{b}]\). Let \(\mathbf{Y}\in\mathbb{R}^{n\times 2}\) constitute the first two coordinates of \(n\) independent observations, and let \(\mathbf{X}\) constitute the first and third coordinates. This is a simple case of a product of manifolds, where the shared variable \(\theta_{s}\) is the first coordinate, while the modality-specific variables \(\theta_{a},\theta_{b}\) are the second and third coordinates. Following Eq. (8), the eigenvectors of the graph Laplacian matrices \(\mathbf{L}_{x},\mathbf{L}_{y}\), evaluated at \((\theta_{s},\theta_{b})\) and \((\theta_{s},\theta_{a})\) converge to, \[\phi_{lm}^{x}(\theta_{s},\theta_{b})=\cos(\pi l\theta_{s}/l_{s}) \cos(\pi m\theta_{b}/l_{b})\] \[\phi_{lk}^{y}(\theta_{s},\theta_{a})=\cos(\pi l\theta_{s}/l_{s}) \cos(\pi k\theta_{a}/l_{a}). \tag{12}\] The first row of Fig. 1 (Appendix A) shows a scatter plot of the points in \(\mathbf{X}\) (located according to the first two coordinates), colored by the values of the leading eigenvectors of \(\mathbf{L}_{x}\). The second row shows the points in \(\mathbf{X}\), but colored by the eigenvectors of \(\mathbf{P}_{\rm shared}\). As expected, all the eigenvectors of \(\mathbf{P}_{\rm shared}\) are functions of the shared coordinate \(\theta_{s}\). Example 2: videos taken from different angles.Our second example is based on an experiment done in [34], where the two modalities constitute two videos of three dolls rotating at different angular speeds. The first camera (modality \(\mathbf{X}\)) captures the middle and left doll, while the second camera (modality \(\mathbf{Y}\)) captures the middle and right dolls (see Fig. 4a). Here, the shared variable \(\mathbf{\theta}_{s}\) is the angle of the middle doll captured by both modalities. The modality-specific variables \(\mathbf{\theta}_{a},\mathbf{\theta}_{b}\) are the angles of the left and right dolls captured by each modality separately. To illustrate Eq. (11) in this example, we first compute an approximation of the eigenvectors \(\mathbf{\phi}_{l}^{s}\). To that end, we cropped each image in one of the videos such that only the middle doll (which appears in both modalities) is shown. One may think of this operation as a projection to the shared manifold. 
Next, we computed from the cropped images the leading eigenvectors \(\mathbf{\phi}_{l}^{s}\) of the Laplacian matrix. Fig. 2 (Appendix A) shows the leading three eigenvectors of \(\mathbf{P}_{\rm shared}\) as a function of \(\mathbf{\phi}_{1}^{s},\mathbf{\phi}_{2}^{s},\mathbf{\phi}_{3}^{s}\) as computed by the cropped images. The figure shows a linear dependency between the vectors, which implies that the shared operator retained only the shared component of the two modalities. ### The Differential Graph Operators We design two operators \(\mathbf{Q}_{x}\) and \(\mathbf{Q}_{y}\) to infer latent structures that are _modality specific_ to \(\mathbf{X},\mathbf{Y}\) respectively. \[\mathbf{Q}_{x}=\tilde{\mathbf{L}}_{y}^{-1}\mathbf{L}_{x}\tilde{\mathbf{L}}_{y}^{-1},\qquad\mathbf{Q} _{y}=\tilde{\mathbf{L}}_{x}^{-1}\mathbf{L}_{y}\tilde{\mathbf{L}}_{x}^{-1}, \tag{13}\] where \(\tilde{\mathbf{L}}_{x}=\mathbf{L}_{x}+c\mathbf{I}\), \(\tilde{\mathbf{L}}_{y}=\mathbf{L}_{y}+c\mathbf{I}\), and \(c\) is a regularization constant. We address the cluster example used for the shared operator to motivate the use of these operators. Differential clusters.In the synthetic cluster example in Fig. 2, modality \(\mathbf{X}\) has three smaller clusters not observed in modality \(\mathbf{Y}\). We show that one can detect the _differential clusters_ of modality \(\mathbf{X}\) via the leading eigenvectors of \(\mathbf{Q}_{x}\). By Eq. (5), we can approximate \(\tilde{\mathbf{L}}_{y}\) via, \[\tilde{\mathbf{L}}_{y}=(1+c)\mathbf{V}_{s}\mathbf{V}_{s}^{T}+c\mathbf{V}_{\text{comp}}\mathbf{V}_{ \text{comp}}^{T}, \tag{14}\] where \(\mathbf{V}_{\text{comp}}\in\mathbb{R}^{n\times(n-3)}\) contains, as columns, vectors that span the complementary subspace to \(\mathbf{V}_{s}\). We write \(\mathbf{Q}_{x}\) as: \[\mathbf{Q}_{x}=\tilde{\mathbf{L}}_{y}^{-1}\mathbf{L}_{x}\tilde{\mathbf{L}}_{y}^{-1}=c^{-2}\bm {V}_{x}\mathbf{V}_{x}^{T}+(1+c)^{-2}\mathbf{V}_{s}\mathbf{V}_{s}^{T}. \tag{15}\] The differential operator in Eq. (15) has two terms. The first spans the subspace corresponding to the differential structure \(\mathbf{V}_{x}\), while the second spans the subspace of the shared structure \(\mathbf{V}_{s}\). Since \(c^{-2}>(1+c)^{-2}\), it follows that the leading eigenvectors of \(\mathbf{Q}_{x}\) span the subspace of \(\mathbf{V}_{x}\). In theory, we can directly apply these operators to learn the structures. However, in many real-world applications, e.g., single-cell multi-omic technologies, both \(\mathbf{X}\) and \(\mathbf{Y}\) can be very noisy. In particular, abundant noisy features (e.g., genes) might dominate the data, and the top eigenvectors of \(\mathbf{L}_{x}\) and \(\mathbf{L}_{y}\) might not capture the underlying structure, which would be detrimental to the learning of \(\mathbf{P}_{\text{shared}}\), \(\mathbf{Q}_{x}\), and \(\mathbf{Q}_{y}\). As shown in the affinity matrices on the right of Fig. 2, the structures are less clear when many noisy features are present. Therefore, it is necessary to have a feature selection framework that can effectively remove these noisy features in our multi-modal setting. With the aforementioned DUFS feature selection framework as the foundation, we will show in the next section how we can incorporate it into our proposed operators in the multi-modal setting. ### mmDUFS In this section, we describe our framework, termed multi-modal Differential Unsupervised Feature Selection (mmDUFS)1. 
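Before detailing the framework, the behaviour of the two operators can be checked numerically on the idealized cluster model of Eq. (5). The toy sketch below is our illustration rather than code from the paper: it builds the rank-structured Laplacians directly from cluster indicator vectors and confirms that the leading eigenvectors of \(\mathbf{P}_{\text{shared}}\) lie in the shared subspace spanned by \(\mathbf{V}_{s}\) while those of \(\mathbf{Q}_{x}\) lie in the \(\mathbf{X}\)-specific complement; the cluster sizes and the constant \(c\) are arbitrary choices.

```python
# Toy check of Eqs. (5), (6) and (13)-(15) in the idealized cluster model.
import numpy as np

def indicator_basis(labels):
    """Orthonormal cluster-indicator vectors, one column per cluster."""
    V = np.stack([(labels == c).astype(float) for c in np.unique(labels)], axis=1)
    return V / np.linalg.norm(V, axis=0, keepdims=True)

n = 90
shared = np.repeat([0, 1, 2], n // 3)              # three clusters visible in both modalities
fine = shared.copy()
fine[shared == 2] += np.arange(n // 3) % 3         # one cluster splits into three sub-clusters in X only

V_s, V_fine = indicator_basis(shared), indicator_basis(fine)
L_y = V_s @ V_s.T                                  # idealized Laplacian of Y, Eq. (5)
L_x = V_fine @ V_fine.T                            # spans V_s plus a two-dimensional X-specific complement

# shared operator, Eq. (6): leading eigenvectors should lie in span(V_s)
P_shared = L_x @ L_y + L_y @ L_x
_, U_p = np.linalg.eigh(P_shared)
lead_p = U_p[:, -3:]                               # three leading eigenvectors
print("P_shared energy in span(V_s):", np.linalg.norm(V_s.T @ lead_p) ** 2)   # ~ 3

# differential operator, Eqs. (13)-(15): leading eigenvectors should be X-specific
c = 0.5
R = np.linalg.inv(L_y + c * np.eye(n))             # (L_y + cI)^{-1}
Q_x = R @ L_x @ R
_, U_q = np.linalg.eigh(Q_x)
lead_q = U_q[:, -2:]                               # two leading eigenvectors, eigenvalue c^{-2}
print("Q_x energy in span(V_s):", np.linalg.norm(V_s.T @ lead_q) ** 2)        # ~ 0
print("Q_x energy in span of L_x:", np.linalg.norm(V_fine.T @ lead_q) ** 2)   # ~ 2
```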
We incorporate differentiable gates [25] with loss functions based on the shared and differential operators detailed in Sec. 3.1 and 3.2. Our goal is to compute an accurate shared graph operator (\(\mathbf{P}_{\text{shared}}\) in Eq. (6)) and differential graph operators (\(\mathbf{Q}_{x}\) and \(\mathbf{Q}_{y}\) in Eq. (13)) while simultaneously selecting the informative features. Let \(\mathbf{f}_{x},\mathbf{f}_{y}\) denote a feature vector in \(\mathbf{X},\mathbf{Y}\), respectively. To quantify how noisy or informative the features are with respect to the shared structure, we replace the Laplacian \(\mathbf{L}\) in Eq. (1) with \(\mathbf{P}_{\text{shared}}\), which yields the shared scores \(\mathbf{f}_{x}^{T}\mathbf{P}_{\text{shared}}\mathbf{f}_{x}\) and \(\mathbf{f}_{y}^{T}\mathbf{P}_{\text{shared}}\mathbf{f}_{y}\). Similarly, \(\mathbf{f}_{x}^{T}\mathbf{Q}_{x}\mathbf{f}_{x}\) and \(\mathbf{f}_{y}^{T}\mathbf{Q}_{y}\mathbf{f}_{y}\) quantify the smoothness of these features with respect to the differential graph operators \(\mathbf{Q}_{x}\) and \(\mathbf{Q}_{y}\). The rationale behind these generalized Laplacian Scores is similar to the original score. For instance, let \(\mathbf{P}_{\text{shared}}=\sum_{i=1}^{n}\lambda_{i}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\) be the eigendecomposition of \(\mathbf{P}_{\text{shared}}\). If \(\mathbf{f}_{x}\) varies slowly with respect to the underlying shared structure, it will have a larger component projected onto the subspace of \(\mathbf{P}_{\text{shared}}\), thus leading to a higher score. To learn features with high generalized Laplacian Scores and accurate graph operators, mmDUFS learns two sets of Stochastic Gates \(\mathbf{z}_{x}\) and \(\mathbf{z}_{y}\) that filter irrelevant features in each modality. Similar to DUFS [25], these stochastic gates multiply the data matrices \(\mathbf{X}\) and \(\mathbf{Y}\) to remove nuisance features, i.e., \(\mathbf{\tilde{X}}=\mathbf{X}\Delta(\mathbf{z}_{x})\) and \(\mathbf{\tilde{Y}}=\mathbf{Y}\Delta(\mathbf{z}_{y})\). At each iteration, the updated graph operators (\(\mathbf{\tilde{P}}_{\text{shared}}\), \(\mathbf{\tilde{Q}}_{x}\), \(\mathbf{\tilde{Q}}_{y}\)) are recomputed based on the gated inputs. mmDUFS has two modes: (i) detecting shared structures using the shared graph operator \(\mathbf{\tilde{P}}_{\text{shared}}\), and (ii) detecting modality-specific structures using the differential graph operators \(\mathbf{\tilde{Q}}_{x}\) and \(\mathbf{\tilde{Q}}_{y}\). To learn the shared structure and the corresponding features, we propose to optimize \(\mathbf{z}_{x}\) and \(\mathbf{z}_{y}\) by minimizing the following loss function: \[\mathcal{L}_{\text{shared}}=-\frac{1}{n}\text{Tr}[\mathbf{\tilde{X}}^{T}\mathbf{\tilde{P}}_{\text{shared}}\mathbf{\tilde{X}}]-\frac{1}{n}\text{Tr}[\mathbf{\tilde{Y}}^{T}\mathbf{\tilde{P}}_{\text{shared}}\mathbf{\tilde{Y}}]+\lambda_{x}\|\mathbf{z}_{x}\|_{0}+\lambda_{y}\|\mathbf{z}_{y}\|_{0},\] where the first two terms are the Shared Laplacian Scores for each modality, and the regularizers \(\lambda_{x}\|\mathbf{z}_{x}\|_{0}\) and \(\lambda_{y}\|\mathbf{z}_{y}\|_{0}\) control the number of selected features for each modality, with tunable parameters \(\lambda_{x},\lambda_{y}\) that set the level of sparsity. In Appendix B.1, we suggest a procedure to tune these regularization parameters.
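To make the optimization concrete, the sketch below assembles the shared objective \(\mathcal{L}_{\text{shared}}\) in PyTorch, recomputing the gated inputs and the shared operator at every call. It is a minimal illustration rather than the authors' implementation: the Gaussian-CDF surrogate used for the \(\ell_{0}\) terms (a standard choice in the stochastic-gates literature), the zero initialization of the gate means, and all hyper-parameters are assumptions made for illustration.

```python
# Minimal PyTorch sketch of the shared-operator objective (illustrative assumptions marked below).
import torch

def normalized_laplacian(Z, sigma):
    sq = torch.cdist(Z, Z) ** 2
    K = torch.exp(-sq / (2 * sigma ** 2))
    d_inv_sqrt = K.sum(dim=1).rsqrt()
    return K * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

class SharedMMDUFS(torch.nn.Module):
    def __init__(self, d_x, d_y, sigma_gate=0.5):
        super().__init__()
        self.mu_x = torch.nn.Parameter(torch.zeros(d_x))   # gate means (initialization is an assumption)
        self.mu_y = torch.nn.Parameter(torch.zeros(d_y))
        self.sigma_gate = sigma_gate

    def gates(self, mu):
        eps = torch.randn_like(mu) * self.sigma_gate
        z = torch.clamp(0.5 + mu + eps, 0.0, 1.0)           # relaxed Bernoulli gates, Eq. (3)
        # differentiable surrogate for E[||z||_0]: probability that each gate is open (assumed choice)
        reg = torch.distributions.Normal(0.0, 1.0).cdf((0.5 + mu) / self.sigma_gate).sum()
        return z, reg

    def loss(self, X, Y, lam_x, lam_y, sigma_x=1.0, sigma_y=1.0):
        n = X.shape[0]
        z_x, reg_x = self.gates(self.mu_x)
        z_y, reg_y = self.gates(self.mu_y)
        Xg, Yg = X * z_x, Y * z_y                            # gated inputs X·Δ(z_x), Y·Δ(z_y)
        P = normalized_laplacian(Xg, sigma_x) @ normalized_laplacian(Yg, sigma_y)
        P = P + P.T                                          # shared operator recomputed from gated data, Eq. (6)
        score = (torch.trace(Xg.T @ P @ Xg) + torch.trace(Yg.T @ P @ Yg)) / n
        return -score + lam_x * reg_x + lam_y * reg_y
```

In practice one would typically minimize this loss with a standard optimizer over mini-batches and keep the features whose gates remain open after training; the analogous differential objectives below replace \(\mathbf{\tilde{P}}_{\text{shared}}\) by \(\mathbf{\tilde{Q}}_{x}\) or \(\mathbf{\tilde{Q}}_{y}\).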
Similarly, the loss functions \(\mathcal{L}_{x},\mathcal{L}_{y}\) are designed to detect features associated with structures that appear only in modality \(\mathbf{X},\mathbf{Y}\), respectively. \[\mathcal{L}_{x}=-\frac{1}{n}\text{Tr}[\mathbf{\tilde{X}}^{T}\mathbf{Q}_{\tilde{x}}\mathbf{\tilde{X}}]+\lambda_{x}\|\mathbf{z}_{x}\|_{0},\qquad\mathcal{L}_{y}=-\frac{1}{n}\text{Tr}[\mathbf{\tilde{Y}}^{T}\mathbf{Q}_{\tilde{y}}\mathbf{\tilde{Y}}]+\lambda_{y}\|\mathbf{z}_{y}\|_{0}, \tag{16}\] where the first term in each loss is termed the Differential Laplacian Score. In the following section we show the usefulness of these score functions for detecting relevant features.

## 4 Results

We benchmark mmDUFS using synthetic and real multi-modal datasets. For discovering the shared structures and associated features, we compare mmDUFS with the shared operator to the following variants of kernel fusion-based methods previously proposed for dimensionality reduction: (1) Matrix Concatenation (MC), where the Laplacian is computed based on a concatenated matrix of the two modalities. (2) Multi-modal Kernel Sum (mmKS) [35], where the Laplacian is equal to \(\mathbf{L}_{x}+\mathbf{L}_{y}\). (3) Multi-modal Kernel Product (mmKP) [36, 37], where the Laplacian is equal to \(\mathbf{L}_{x}\mathbf{L}_{y}\). For each baseline, the \(k\) features with the highest Laplacian Scores are selected. For the synthetic datasets, we set \(k\) to be the correct number of informative features. We evaluate the performance of different methods by the F1-score \(\text{F1}=\text{TP}/(\text{TP}+\frac{1}{2}(\text{FP}+\text{FN}))\), where TP is the number of informative features selected by each method, FP is the number of uninformative selected features, and FN is the number of missed informative features (a short code sketch of this protocol is given below). For the rescaled MNIST and rotating doll examples, the informative features are set to the 25% of pixels with the highest standard deviation.

### Synthetic Examples

Rescaled MNIST. We designed a rescaled MNIST example with shared and modality-specific digits. We first randomly sample one image (\(28\times 28\) pixels) of each of the digits 0, 3, and 8. Then, we rescale each digit randomly and independently 500 times, resulting in 500 images of 0, 3, and 8. We concatenate pairs of 0 and 3 to create modality \(\mathbf{X}\), and pairs of the same 3 and random 8 to create \(\mathbf{Y}\), see example in Fig. 3a. Thus, this dataset consists of 500 samples and \(28\times 56\) pixels in each modality, with digit 3 shared between the modalities and digits 0 and 8 modality-specific. We apply mmDUFS with the shared operator to this example to select pixels corresponding to 3. The left column of Fig. 3b shows the pixel gate values from mmDUFS for modality \(\mathbf{X}\) (top) and \(\mathbf{Y}\) (bottom). We can see that the selected pixels outline the shape of the digit 3 well. Table 1 compares the F1-score achieved by mmDUFS to three baselines. We can see that mmDUFS achieves a higher F1-score than all the baselines on both modalities, demonstrating its ability to identify informative features accurately. Lastly, we apply mmDUFS with the differential operator to select modality-specific pixels. The right column of Fig. 3b shows the pixel gate values for both modality \(\mathbf{X}\) (top) and \(\mathbf{Y}\) (bottom). We can see that mmDUFS selects pixels that outline digits \(0,8\) for modalities \(\mathbf{X},\mathbf{Y}\), respectively.
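For reference, the baseline scoring and the F1 computation used for the comparisons in this section can be sketched as follows. This is our own illustration rather than released evaluation code; the kernel bandwidths are placeholders and the per-feature normalization applied before scoring is omitted.

```python
# Sketch of the evaluation protocol: baseline Laplacians, top-k Laplacian-score selection, F1.
import numpy as np

def normalized_laplacian(Z, sigma):
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
    return K * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def baseline_laplacians(X, Y, sigma=1.0):
    L_x, L_y = normalized_laplacian(X, sigma), normalized_laplacian(Y, sigma)
    return {
        "MC": normalized_laplacian(np.hstack([X, Y]), sigma),   # matrix concatenation
        "mmKS": L_x + L_y,                                      # multi-modal kernel sum
        "mmKP": L_x @ L_y,                                      # multi-modal kernel product
    }

def top_k_features(L, X, k):
    scores = np.einsum('ij,jk,ki->i', X.T, L, X)                # f_j^T L f_j for every feature j
    return np.argsort(scores)[::-1][:k]

def f1_score(selected, informative, n_features):
    sel = np.zeros(n_features, bool); sel[selected] = True
    inf = np.zeros(n_features, bool); inf[informative] = True
    tp = np.sum(sel & inf)
    fp = np.sum(sel & ~inf)
    fn = np.sum(~sel & inf)
    return tp / (tp + 0.5 * (fp + fn))
```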
Additionally, mmDUFS achieves F1-score 0.8059 and 0.8832 for \(\mathbf{X}\) and \(\mathbf{Y}\), showcasing its effectiveness in identifying features contributing to the differential structures. Synthetic Developmental Tree.Tree structures are ubiquitous throughout different biological processes and data modalities in single-cell biology [38, 39]. To understand the interplay of different mechanisms underlying the complex developmental process, it is vital to discover the genetic features that contribute to the tree structure shared across modalities and those that contribute to modality-specific structures. \begin{table} \begin{tabular}{|c|c||c|c|c|c|} \hline Dataset & Modality & MC & mmKS & mmKP & mmDUFS \\ \hline \multirow{2}{*}{Rescaled MNIST} & X & 0.3547 & 0.5291 & 0.5291 & **0.7093** \\ & Y & 0.4826 & 0.6219 & 0.6219 & **0.8159** \\ \hline \multirow{2}{*}{Synthetic Developmental Tree} & X & 0.6000 & 0.7800 & 0.8400 & **0.8800** \\ & Y & 0.7800 & 0.8000 & 0.8200 & **0.9000** \\ \hline \multirow{2}{*}{Original Gaussian + 10 Noisy Feats} & X & 0.5000 & 0.7333 & **1** & **1** \\ & Y & 0.5500 & 0.6500 & 0.9500 & **1** \\ & X & 0.5000 & 0.7333 & **1** & **1** \\ & Y & 0.5000 & 0.6500 & 0.9000 & **1** \\ & X & 0.4667 & 0.7000 & 0.9667 & **1** \\ & Y & 0.4500 & 0.5500 & 0.8500 & **1** \\ & X & 0.4000 & 0.6333 & 0.9333 & **0.9667** \\ & Y & 0.4000 & 0.5500 & 0.8000 & **0.8500** \\ \hline \end{tabular} \end{table} Table 1: Comparison of F1-score between different methods on the rescaled MNIST example, the synthetic tree example, and the Gaussian mixture example with different numbers of additive noisy features. Figure 3: Left (a-b): Evaluation of the proposed approach on the rescaled MNIST dataset. (a): Random images from modality \(X\) (upper row) and modality \(Y\) (bottom row) in gray-scale. (b): Selected pixels (dark blue) for the shared operator (left column) and the differential operator (right column). Right (c-e): Synthetic developmental tree example. (c): UMAP embeddings of the tree using data from modality \(\mathbf{X}\) (top) and modality \(\mathbf{Y}\) (bottom). (d-e): Change of the Shared/Differential Laplacian Scores, regularization loss, and the F1-score of the selected features concerning the number of epochs (x-axis) for mmDUFS with the shared operator (panel (c)) and the differential operator (panel (e)). We evaluate mmDUFS using a simulated developmental tree example generated via a tree simulator 2. The original data has 1000 samples and 100 features. We divide the data into half, such that each modality has 50 informative features that contribute to the shared tree structure, as shown in the UMAP embeddings in Fig. 3c, where the samples in the tree are grouped into different branch groups (labeled \(G_{1}\) to \(G_{6}\)). We then add 50 features drawn from negative binomial distributions to each modality to create differential branches, that are only observed in one modality. Specifically, branches \(G_{1}\) and \(G_{2}\) are bifurcated in modality \(\mathbf{X}\) (top UMAP embeddings) but are mixed in modality \(\mathbf{Y}\) (bottom UMAP embeddings), and \(G_{3}\) and \(G_{4}\) are bifurcated in modality \(\mathbf{Y}\) but are mixed in modality \(\mathbf{X}\) (see Supplementary section B.3 for further details). After log transformation and z-scoring the data, we concatenate 200 features drawn from \(N(0,1)\) to each modality as noisy features. 
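The augmentation steps just described are straightforward to reproduce. The sketch below illustrates only those stated steps (negative-binomial differential features, log transformation, per-feature z-scoring, and concatenation of Gaussian nuisance features); the tree counts themselves come from the dyntoy simulator and the exact branch construction follows Supplementary B.3, so the count matrices and all distribution parameters here are arbitrary placeholders.

```python
# Sketch of the noise-augmentation pipeline for one modality (placeholder parameters).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
counts_x = rng.poisson(5.0, size=(n, 50)).astype(float)   # stand-in for the 50 informative tree features
counts_y = rng.poisson(5.0, size=(n, 50)).astype(float)

def augment(counts, n_diff=50, n_noise=200, rng=rng):
    diff = rng.negative_binomial(n=2, p=0.3, size=(counts.shape[0], n_diff)).astype(float)
    data = np.log1p(np.hstack([counts, diff]))                         # log transformation
    data = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-12)     # z-scoring per feature
    noise = rng.standard_normal(size=(counts.shape[0], n_noise))       # N(0,1) nuisance features
    return np.hstack([data, noise])

X = augment(counts_x)    # informative + modality-specific + noisy features for modality X
Y = augment(counts_y)
```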
Footnote 2: [https://github.com/dynverse/dyntoy](https://github.com/dynverse/dyntoy)

We apply our model with the shared and differential operators to recover the features that contribute to the overall tree structure and the set of features that contribute to the split branches, respectively. Fig. 3d shows the change, during training with the shared loss, in the Shared/Differential Laplacian Scores, the regularization loss, and the F1-score. Fig. 3e shows the same properties for the differential loss. Table 1 compares the F1-score of the selected features between different methods. Here as well, mmDUFS clearly outperforms the other methods.

Figure 4: Left (a-c): Rotating dolls example. (a): Random images of the dolls from each video. (b-c): Selected pixels are marked in blue for mmDUFS with shared operator (b) and the differential operator (c). Right (d-e): CITE-seq data example. (d): UMAP embeddings using the RNA (top) and protein data (bottom), colored by cell type labels. (e): Similar UMAP embeddings colored by the expression level of several genes selected by mmDUFS with the differential operator.

Synthetic Gaussian Mixtures. We generated a multi-modal Gaussian mixture dataset, where \(\mathbf{X}\) and \(\mathbf{Y}\) each have 3 clusters. Two clusters are shared between modalities, and clusters 3 and 4 are specific to \(\mathbf{X}\) and \(\mathbf{Y}\), respectively. Each cluster has a set of informative features drawn from a multivariate Gaussian, along with noisy features (see Appendix B.2 for details). We first apply mmDUFS to uncover the informative features of the shared clusters and the modality-specific clusters. In the figure of Supplementary section B.2, we plot the average shared/differential Laplacian Scores across features, the regularization loss, and the F1-score of the selected features from mmDUFS as a function of the number of epochs. We can see that mmDUFS gradually selects the correct features, which correspond to high scores, while sparsifying the number of selected features. To evaluate mmDUFS's feature selection capability in challenging regimes, we further inject 10, 30, and 50 noisy features into each modality and compare the F1-score of the selected features from different methods in each regime. As shown in Table 1, mmDUFS consistently outperforms the baseline methods while maintaining accurate feature identification capability, demonstrating its robustness against noise.

### Real Data

Rotating Dolls. We evaluate mmDUFS's performance on the rotating doll video dataset described in Sec. 3.1, in which two cameras capture the dolls from different angles (Fig. 4a). By treating each video frame as one sample (4050 in total) and the gray-scale pixels as features, we aim to uncover pixels that correspond to the shared doll (the dog) and the modality-specific dolls (Yoda and rabbit). For mmDUFS with the shared operator, Fig. 4b shows selected pixels in both videos, as indicated by the blue dots. The shape of the dog is clearly delineated in both modalities. We further compute the F1-score of the selected pixels with respect to the underlying pixels that correspond to the dog. mmDUFS achieves F1-scores of 0.7158 and 0.8033 for the two modalities, whereas MC achieves 0.2390 and 0.3822, and mmKS and mmKP achieve 0.5452 and 0.6868. Fig. 4c shows the selected pixels of mmDUFS with the differential operator in the two videos. In video 1, mmDUFS selects mostly pixels corresponding to Yoda (F1-score: 0.8861).
For video 2, mmDUFS selects mostly pixels corresponding to the rabbit doll (F1-score: 0.7446). CITE-seq Dataset. In single-cell biology, cell states are characterized by different features at different molecular levels. Identifying the contributing features is an open question crucial to understanding the underlying cell systems. We apply mmDUFS to a CITE-seq dataset from [3], in which cells are profiled at both the transcriptomic and proteomic levels, measuring expressions of genes and protein markers, to identify the genes and proteins that characterize the cell states in the multi-modal setting. In this data, a group of murine cells is spiked in as a control into human cord blood mononuclear cells (CBMCs), and CITE-seq sequences the resulting cell system. Fig. 4d shows UMAP embeddings of the cells based on their RNA expression (top) and protein expression (bottom). From the full dataset, we analyzed 3 cell populations: murine cells (blue) and 2 CBMC cell populations (Erythroids (orange) and CD34+ cells (green)). This dataset has 832 cells, with 500 top variable genes from modality 1 and 10 protein markers from modality 2. We can see that the murine cells are separable from the Erythroids in the RNA space but not in the proteomic space. To identify which gene markers contribute to the separation between cell groups, we apply mmDUFS with the differential operator to this data. We found that all the selected genes are murine genes that are expressed only in the murine cells, as shown in Fig. 4e. This example demonstrates that mmDUFS can identify genetic markers contributing to the differential structures observed in single-cell multi-omic data. ## 5 Discussion We present mmDUFS, a feature selection method that learns two novel graph operators that capture the _shared_ and the _modality-specific_ structures in multi-modal data, while simultaneously selecting the features that are informative for these structures. mmDUFS can operate on small batches, which makes it scalable to large datasets. On the other hand, finding the optimal regularization parameters for mmDUFS on real data may be challenging, for which we suggest an automatic procedure in Appendix B.1. A second potential limitation is the \(\mathcal{O}(n^{3})\) computational complexity required to compute \(\tilde{\mathbf{L}}\) (Eq. (13)). A possible solution is to reduce the complexity by computing a sparse Laplacian matrix (see the generic sketch below). ## Acknowledgements The authors thank Amit Moscovich for the helpful discussions and feedback.
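Regarding the sparsification suggested in the Discussion: the snippet below is only a generic illustration of how a sparse graph Laplacian can be built from a k-nearest-neighbour affinity. It is not the operator \(\tilde{\mathbf{L}}\) of Eq. (13), and the data, neighbourhood size, and normalization are arbitrary choices made for the sketch.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))  # placeholder data: n samples, d features

# Symmetrized k-nearest-neighbour connectivity graph: O(n*k) stored entries
# instead of a dense n x n kernel.
W = kneighbors_graph(X, n_neighbors=15, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)

# Sparse, symmetrically normalized graph Laplacian.
L = laplacian(W, normed=True)
print(L.shape, L.nnz)
```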
2302.05692
Ultracold Feshbach molecules in an orbital optical lattice
Quantum gas systems provide a unique experimental platform to study a fundamental paradigm of quantum many-body physics: the crossover between Bose-Einstein condensed (BEC) molecular pairs and Bardeen Cooper Schrieffer (BCS) superfluidity. Some studies have considered quantum gas samples confined in optical lattices, however, focusing on the case, when only the lowest Bloch band is populated, such that orbital degrees of freedom are excluded. In this work, for the first time, ultracold Feshbach molecules of fermionic $^{40}K$ atoms are selectively prepared in the second Bloch band of an optical square lattice, covering a wide range of interaction strengths including the regime of unitarity. Binding energies and band relaxation dynamics are measured by means of a method resembling mass spectrometry. The longest lifetimes arise for strongly interacting Feshbach molecules at the onset of unitarity with values around 300 ms for the lowest band and 100 ms for the second band. In the case of strong confinement in a deep lattice potential, we observe bound dimers also for negative values of the s-wave scattering length, extending previous findings for molecules in the lowest band. Our work prepares the stage for orbital BEC-BCS crossover physics.
Yann Kiefer, Max Hachmann, Andreas Hemmerich
2023-02-11T13:23:15Z
http://arxiv.org/abs/2302.05692v1
# Ultracold Feshbach molecules in an orbital optical lattice ###### Abstract **Quantum gas systems provide a unique experimental platform to study a fundamental paradigm of quantum many-body physics: the crossover between Bose-Einstein condensed (BEC) molecular pairs and Bardeen Cooper Schrieffer (BCS) superfluidity. Some studies have considered quantum gas samples confined in optical lattices, however, focusing on the case, when only the lowest Bloch band is populated, such that orbital degrees of freedom are excluded. In this work, for the first time, ultracold Feshbach molecules of fermionic \({}^{40}K\) atoms are selectively prepared in the second Bloch band of an optical square lattice, covering a wide range of interaction strengths including the regime of unitarity. Binding energies and band relaxation dynamics are measured by means of a method resembling mass spectrometry. The longest lifetimes arise for strongly interacting Feshbach molecules at the onset of unitarity with values around \(300\,\mathrm{ms}\) for the lowest band and \(100\,\mathrm{ms}\) for the second band. In the case of strong confinement in a deep lattice potential, we observe bound dimers also for negative values of the \(s\)-wave scattering length, extending previous findings for molecules in the lowest band. Our work prepares the stage for orbital BEC-BCS crossover physics.** The crossover between the regimes of BEC and BCS superfluidity is a hallmark of quantum gas physics [1; 2; 3; 4; 5; 6; 7; 8]. In most studies, the quantum gas sample is held in a nearly harmonic optical trapping potential. BEC-BCS crossover physics in optical lattices [9; 10] has been much less explored, and this research is limited to the lowest Bloch band, which exclusively provides local \(s\)-orbitals. For example, in the ground state of a three-dimensional (3D) optical lattice, binding energies of strongly interacting fermionic potassium pairs have been studied [11]. Signatures of coherence and superfluidity have been reported [12] for strongly interacting fermionic lithium pairs. In an earlier work with fermionic potassium, prepared in the transverse ground state of an array of effectively one-dimensional wave guides [13], Feshbach dimers have been shown to exist even at negative scattering lengths owing to a confinement induced scattering resonance [14; 15]. Very recently, \(p\)-wave interacting atomic pairs tightly confined in excited motional states of isolated microscopic traps have been investigated [16]. Similarly, as orbital degrees of freedom in electronic condensed matter, e.g. in transition metal oxides [17; 18], can give rise to unconventional order, the combination of the conventional BEC-BCS scenario with orbital physics in higher Bloch bands holds the intriguing perspective to discover unexplored fundamental many-body phases such as exotic forms of superfluidity [19]. Examples of chiral, nematic, or topological superfluids have been experimentally demonstrated for bosonic atoms in the second Bloch bands of square [20; 21], triangular [22], or hexagonal [23] optical lattices, respectively. Extending these scenarios to composite bosons composed of pairs of fermionic atoms would open up a new regime of unconventional BEC-BCS physics. In a recent work [24], we have demonstrated spin-polarized non-interacting fermionic potassium atoms (\({}^{40}\)K) and weakly interacting spin-mixtures in higher Bloch bands of an optical square lattice. 
The present work investigates the vastly different regime of strong interactions between spin-up and spin-down fermions accessed by tuning a Feshbach resonance. This allows us to demonstrate and investigate bosonic Feshbach dimers [25], composed of fermionic \({}^{40}\)K atoms, in higher Bloch bands of an optical square lattice. We study shallow and deep lattices, where the dimers can tunnel or are confined in one-dimensional channels. A method inspired by mass spectrometry is applied to discriminate atoms and dimers by separating them in a ballistic time-of-flight protocol (see Methods). This leads to images of the Brillouin zone (BZ) structures of atoms and molecules in velocity space, which differ in size by their mass ratio. This is seen in Fig. 1(a) for a mixture of molecules and atoms populating the second Bloch band. The molecules appear in the small and the atoms in the large second BZ, forming a nested structure according to the sketch in Fig. 1(b). Hence, atoms and molecules can be separately counted. Using this method, we present measurements of binding energies, dissociation dynamics, and exceptionally long molecule lifetimes for a wide range of interaction strengths including the strongly correlated regime. Such long lifetimes, exceeding all other relevant time scales, are crucial to study long-lived equilibrium states of the system. Figure 1: _Production and detection of Feshbach molecules._ (a) A single mass spectrometry image of a mixture of \({}^{40}\)K atoms and \({}^{40}\)K Feshbach dimers excited to the second Bloch band. (b) Composition of the image in (a) by the small second BZ of the molecules and the double-sized second BZ of the atoms. The velocity scale is \(v_{M}=\sqrt{2}\,2\hbar k/M\) with \(k=2\pi/\lambda\) and the mass \(M\) of the Feshbach dimers. (c) Feshbach resonance and protocol for molecule preparation. See text. (d) Sketch of the lattice geometry in the xy-plane for \(\Delta V=0\) (left) and \(\Delta V<0\) (right). The grey shaded squares denote the corresponding Wigner-Seitz unit cells. At the lower edge, sections along the dashed lines in the upper panels of (d) are shown. _Production of ultracold Feshbach molecules._ In short, the production of Feshbach dimers in the second Bloch band of a square lattice proceeds as follows: A balanced fermionic spin mixture is produced in the lowest Bloch band of the lattice. Subsequently, spin-up and spin-down atoms are associated to form Feshbach dimers by rapid adiabatic tuning of the magnetic field across an \(s\)-wave Feshbach resonance. Finally, a quench of the lattice potential selectively excites the dimers to the second Bloch band. The following detailed protocol is applied. A non-interacting spin-polarized degenerate Fermi gas of \(6\times 10^{5}\)\({}^{40}\)K atoms in the state \(|F=9/2,m_{F}=9/2\rangle\) at a temperature of \(T=0.17\,T_{F}\) is prepared in an optical dipole trap (ODT), formed by two orthogonally intersecting laser beams with a wavelength of \(\lambda=1064\,\)nm, where \(T_{F}\) denotes the Fermi temperature. Note that for spin mixtures prepared at low magnetic fields, the lowest temperature reached in the ODT is \(T/T_{F}=0.09\). Next, a radio-frequency with a constant value of \(46\,\)MHz is applied, while a homogeneous magnetic field \(B\) (pointing along the \(z\)-axis) is ramped up from zero to approximately \(B_{0}\approx 209.9\,\)G, such that a rapid adiabatic passage is obtained that inverts the sample to the \(|\downarrow\rangle\equiv|F=9/2,m_{F}=-9/2\rangle\) state.
Due to a Feshbach resonance located at \(B_{\text{res}}=202.1\,\)G [1], the \(s\)-wave scattering length for contact interaction between the states \(|\downarrow\rangle\) and \(|\uparrow\rangle\equiv|F=9/2,m_{F}=-7/2\rangle\), approximated as \[a_{S}(B)=a_{bg}\left[1-\Delta B/(B-B_{\text{res}})\right], \tag{1}\] takes the value \(a_{S}(B_{0})\approx 0\), which corresponds to position (1) in Fig. 1 (c). Here, \(a_{bg}=174\,a_{0}\) is the background scattering length and \(\Delta B=7.8\,\)G is the width of the Feshbach resonance. Subsequently, the sample is adiabatically loaded into the lowest Bloch band of a bipartite optical square lattice, formed by two mutually orthogonal standing waves with wavelengths \(\lambda=1064\,\)nm, oriented perpendicularly to the \(z\)-axis. The optical lattice is shaped in a Michelson-Sagnac interferometer that allows for precise control of the associated band structure (for details see Ref. [24]). The resulting optical potential is composed of two classes of independently tunable potential wells \(\mathcal{A}\) and \(\mathcal{B}\) arranged as the black and white squares of a chequerboard (see Fig. 1 (d)). In the \(xy\)-plane, the lattice potential can be approximated by \[\begin{split} V(x,y)=&-V_{0}\left[\cos^{2}(kx)+\cos^ {2}(ky)\right]\\ &-\frac{1}{2}\Delta V\cos(kx)\cos(ky)\end{split} \tag{2}\] with the wave number \(k=2\pi/\lambda\), the lattice depth parameter \(V_{0}\), and \(\Delta V\equiv-4V_{0}\cos(\theta)\) denoting the potential difference between the \(\mathcal{A}\) and \(\mathcal{B}\) wells. The experimental parameter \(\theta\) can be tuned within the interval \([0,\pi]\), i.e., \(\Delta V\in 4V_{0}\times[-1,1]\). Note that tightly bound dimers with mass \(M\equiv 2m\) possess twice the polarizability and hence twice the value of \(V_{0}\) as compared to atoms with mass \(m\). Henceforth, we indicate lattice depth parameter values for atoms and molecules as \(V_{0}^{(m)}\) and \(V_{0}^{(M)}=2\,V_{0}^{(m)}\), respectively. Along the \(z\) direction, the atoms are held by the weak approximately harmonic confinement of the optical dipole trap, such that the lattice wells acquire a tubular shape. After lattice loading, we apply a radio frequency pulse during \(13\,\mu\)s to create a Fermi gas with equal populations in the states \(|\uparrow\rangle\) and \(|\downarrow\rangle\) at position (1) in Fig. 1(c). Next, by tuning \(B\) to \(202.3\,\)G (cf. position (2) in Fig. 1 (c)), the \(s\)-wave scattering length \(a_{S}\) for collisions between \(|\uparrow\rangle\) and \(|\downarrow\rangle\) is adjusted to a large negative value, in order to obtain efficient evaporative cooling of the atomic sample in presence of the lattice. Finally, the previously unpaired mixture of Fermions is converted into bosonic molecules by adiabatically sweeping the magnetic field across the Feshbach resonance from \(202.3\,\)G to \(200.46\,\)G (indicated as step (3) in Fig. 1(c)). For the lowest temperatures, upon arrival at position (4) in Fig. 1(c), we observe close to hundred percent conversion efficiencies with no discernible atomic fraction. Next, a quench from an initial value \(\theta\approx 0.4\,\pi\), used for loading the lowest band, to \(\theta\approx 0.53\,\pi\) efficiently prepares a large fraction of the atoms or molecules in the second band (for details see Ref. [24]). By means of a final adiabatic change of \(B\) (position (5) in Fig. 1 (c)), we may subsequently tune to a desired target value of \(a_{S}\), which lets us adjust the molecular binding energy. 
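For orientation, Eq. (1) can be evaluated directly at the magnetic field values quoted in the preparation protocol. The short sketch below does this with the resonance parameters given above; it merely reproduces the numbers implied by Eq. (1) and does not represent any measured data.

```python
a_bg, delta_B, B_res = 174.0, 7.8, 202.1  # a_bg in Bohr radii, fields in Gauss

def a_s(B):
    """Free-space s-wave scattering length a_S(B) of Eq. (1), in units of a0."""
    return a_bg * (1.0 - delta_B / (B - B_res))

for B in (209.9, 202.3, 200.46):
    print(f"B = {B:6.2f} G  ->  a_S = {a_s(B):9.1f} a0")
# 209.9 G: a_S close to zero (position (1)); 202.3 G: large and negative (position (2));
# 200.46 G: large and positive, i.e. on the molecular side of the resonance (position (4)).
```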
_Binding energies._ Binding energies of Feshbach molecules are measured by dissociating them with a \(5\,\)ms long radio frequency pulse, converting \(|\uparrow\rangle\) atoms into atoms in the auxiliary state \(|\text{Aux}\rangle\equiv|F=9/2,m_{F}=-5/2\rangle\). The unbound \(|\downarrow\rangle\) atoms, which remain trapped in the optical lattice, are readily discriminated from the molecules as explained in the context of Fig. 1 (a,b) and in Methods. Let us first discuss the case of a deep optical lattice, such that tunnelling is practically suppressed for molecules and their motion is restricted to quasi one-dimensional (1D) tubes along the \(z\)-direction. An example for this regime is realized by adjusting \(V_{0}^{(M)}=40\,E_{\rm rec}^{(M)}\), \(\theta=0.4\,\pi\) and hence \(\Delta V^{(M)}=-49.4\,E_{\rm rec}^{(M)}\), with \(E_{\rm rec}^{(M)}\equiv\frac{\hbar^{2}k^{2}}{2M}\) denoting the single photon recoil energy for molecules. A typical example of a dissociation spectrum for molecules \(|{\rm Mol}:1\rangle\) prepared in the lowest band is shown in Fig. 2(a) for \(B=201.95\,\)G, corresponding to the data point in (d) indicated by an arrow. We plot the numbers of molecules (diamonds) and atoms (disks) in the lowest band, the atoms in all excited bands (squares), and the total number of particles, i.e., atoms plus molecules (triangles), against \(\Delta f=f-f_{0}\), i.e., the applied radio frequency \(f\) minus the \(|\uparrow\rangle\rightarrow|{\rm Aux}\rangle\) transition frequency \(f_{0}\) (see Fig. 2(b)). The latter depends only on \(B\) and is readily measured after preparing a pure \(|\uparrow\rangle\) sample in the dipole trap. Hence, \(\Delta f=0\) corresponds to zero molecular binding energy. To understand the spectral features in Fig. 2(a), it is helpful to first look at the atomic and molecular levels sketched in Fig. 2(b). The black horizontal bars show the energies of the bare two-atom states \(|\uparrow,\downarrow\rangle\) and \(|{\rm Aux},\downarrow\rangle\) separated by the frequency \(f_{0}\). The blue horizontal bar shows the energy of the Feshbach molecules \(|{\rm Mol}\rangle\) made of atom pairs \(|\uparrow,\downarrow\rangle\), shifted by the binding energy \(E_{B}\). The red horizontal bars show the additional energy shifts for the Bloch bands due to the presence of the optical lattice. The molecules \(|{\rm Mol}\rangle\) are prepared in the lowest molecular band, denoted \(|{\rm Mol}:1\rangle\) in Fig. 2(b). As the radio frequency \(f\) is increased, we expect the first drop of the molecular population in \(|{\rm Mol}:1\rangle\) when \(f\) reaches the resonance frequency \(f_{1}\) for the transition \(|{\rm Mol}:1\rangle\rightarrow|{\rm Aux},\downarrow\,:1\rangle\), such that unbound atoms in the first Bloch band are produced. According to Fig. 2(a), this occurs at \(\Delta f=\Delta f_{1}\equiv 22.7\,\)kHz, which is identified with the value of the binding energy \(E_{B}\). To further illustrate the conversion of molecules into atoms, mass spectrometry images are shown in Fig. 2(c) for \(\Delta f=-5.3\,\)kHz, well below \(\Delta f_{1}\) (c1), and at \(\Delta f=\Delta f_{1}\) (c2). In fact, in (c1), predominantly molecules in the first BZ are seen, while in (c2) most molecules are dissociated into atoms, giving rise to a first BZ expanded by a factor two. At larger values of \(\Delta f>\Delta f_{1}\), further resonances occur, leading to reduced molecule numbers due to dissociation into higher bands \(|{\rm Aux},\downarrow\,:\nu\rangle\) with \(\nu\in\{2,3,5,6,7\}\).
The respective resonance frequencies are readily calculated by a band structure calculation and are plotted in Fig. 2(a) as vertical dashed red lines. Note that, for the fourth band \(\nu=4\), no resonance arises, which can be explained by the small Franck-Condon overlap between the wave function of \(|{\rm Mol}:1\rangle\) and \(|{\rm Aux},\downarrow\,:4\rangle\). This has been checked by calculating the respective Bloch functions, with the result that \(|{\rm Mol}:1\rangle\) predominantly resides in the deep wells and \(|{\rm Aux},\downarrow\,:4\rangle\) in the shallow wells. Dissociation spectra as in Fig. 2(a) let us determine the binding energies for molecules prepared in the first (dark purple disks) and second (orange diamonds) bands, shown in Fig. 2(d). The gray solid line shows a calculation using the implicit equation \[\frac{a_{S}-r_{0}}{(1+\delta)a_{r}}=-\frac{1}{\zeta\,(1/2,-E_{B}/2\hbar\omega_{r})}, \tag{3}\] adapted from Ref. [15] for a single 1D tubular potential, where \(\zeta\) denotes the Hurwitz zeta function. We insert the radial fundamental frequency \(\omega_{r}\), measured in the radially symmetric quasi 1D tubes of our lattice, and the associated harmonic oscillator length \(a_{r}\equiv\sqrt{\hbar/\mu\omega_{r}}\) for the relative atomic motion, with \(\mu\) denoting the reduced atomic mass. We multiply \(a_{r}\) by \(1+\delta\) with a small positive \(0<\delta\ll 1\), to account for the anharmonicity of the tube potential. The free space \(s\)-wave scattering length \(a_{S}\) is expressed as a function of the magnetic field \(B\) according to Eq. 1. Figure 2: _Binding energies in a deep lattice._ (a) A typical dissociation spectrum for a magnetic field \(B=201.95\,\)G, i.e., corresponding to the data point in (d) highlighted by an arrow, with molecules initially prepared in the lowest Bloch band (\(V_{0}^{(M)}=40\,E_{\rm rec}^{(M)}\), \(\theta=0.4\,\pi\)). The orange diamonds and dark purple disks show the molecule and atom populations in the first band, respectively, plotted against \(\Delta f=f-f_{0}\), with the irradiated radio frequency \(f\), and \(f_{0}\) according to (b). The leftmost vertical red dashed line indicates the frequency where the first significant drop of the molecule population and a corresponding peak of the atom population in the first band is observed. This frequency is identified with the molecular binding energy \(E_{B}\). Further dashed vertical lines denote the positions of higher Bloch bands \(|{\rm Aux},\downarrow\,:\nu\rangle\) with \(\nu\in\{2,3,4,5,6,7\}\), leading to further resonant molecule dissociation (cf. (b)). The error bars show the standard deviation of the mean for 15 experimental runs. (c) Mass spectrometry images for \(\Delta f=-5.3\,\)kHz (c1) and \(\Delta f=22.7\,\)kHz (c2). The white dashed rectangles show the first BZ for molecules (c1) and atoms (c2). (d) Measured binding energies for molecules prepared in the first (dark purple disks) and second (orange diamonds) Bloch band with (\(V_{0}^{(M)}=40\,E_{\rm rec}^{(M)}\), \(\theta=0.4\,\pi\)) and (\(V_{0}^{(M)}=60\,E_{\rm rec}^{(M)}\), \(\theta=0.54\,\pi\)), respectively. The error bars are estimated from the spectral width of the underlying dissociation resonance. The dashed vertical line indicates the position of the Feshbach resonance \(B_{\rm res}\). The grey line shows a calculation using Eq. 3. Finally, following Ref.
[26], to account for a realistic van der Waals scattering potential \(-C_{6}\,r^{-6}\), we introduce the finite range parameter \(r_{0}=(mC_{6}/32\hbar^{2})^{1/4}\,\Gamma(3/4)/\Gamma(5/4)\), and replace \(a_{S}\) by \(a_{S}-r_{0}\). From Refs. [27; 28] we take \(C_{6}=3926\,a_{0}\) and \(r_{0}=65\,a_{0}\). Note that Eq. 3 is configured such that for zero interaction, zero binding energy is obtained. This theoretical model, based on Refs. [14; 15; 26], reproduces the binding energies in Fig. 2(d) remarkably well if one sets \(\delta=0.142\), which reasonably well agrees with the expected anharmonicity. The model accounts for effects of reduced dimensionality arising if the degree of radial confinement becomes comparable with \(a_{S}\), giving rise to a confinement induced resonance of the effective 1D scattering cross section. As a consequence, bound states become possible for negative values of \(a_{S}\) as is seen in Fig. 2(d). This has been previously reported for potassium Feshbach molecules in the lowest transverse state of an array of isolated 1D optical traps in Ref. [13]. Note that Fig. 2(d) shows equal binding energies for molecules prepared in the first and second bands. This results from engineering the lattice potentials via adjustment of \(V_{0}\) and \(\Delta V\), such that the lattice wells with predominant population, i.e., the deep wells, if the molecules are prepared in the first and the shallow wells if prepared in the second band, respectively, exhibit equal values of \(\omega_{r}\) and \(a_{r}\). For example, with \(V_{0}^{(M)}=40\,E_{\rm rec}^{(M)}\) and \(\theta=0.4\,\pi\) for molecules in the first band, and \(V_{0}^{(M)}=60\,E_{\rm rec}^{(M)}\) and \(\theta=0.54\,\pi\) for molecules in the second band, as used in Fig. 2(d), \(\omega_{r}=2\pi\,\times 30\,\)kHz is obtained. Next, we discuss molecules prepared in the second band (\(|{\rm Mol:2}\rangle\)) for a shallow lattice with \(V_{0}^{(M)}=10\,E_{\rm rec}^{(M)}\), \(\theta=0.54\,\pi\) and hence \(\Delta V^{(M)}=5\,E_{\rm rec}^{(M)}\) for an extended range of the magnetic field below the Feshbach resonance, \(B<B_{\rm res}\). In Fig. 3(a), we show a dissociation spectrum for molecules at \(B=200.6\,\)G corresponding to the dark purple diamond-shaped data point in (c). The plot in (a) shows the second band molecules (grey triangles), whose number is initially maximized, second band atoms (magenta squares), and molecules (orange diamonds) and atoms (dark purple disks) in the first band. The red dashed lines emphasize the frequencies \(\Delta f_{1}\) and \(\Delta f_{2}\), where local minima in the number of molecules in the second band are found, due to maximal efficiency of the dissociation process. At frequencies \(\Delta f\ll\Delta f_{1}\) dissociation is not resonant such that in mass spectrometry images one observes the second BZ filled with molecules, as exemplified in the panel (I) of Fig. 3(b), recorded at position (I) in (a). At the left red dashed line (\(\Delta f_{1}\)), dissociation arises due to a transition, mainly exciting second band molecules \(|{\rm Mol:2}\rangle\) to atomic pairs \(|{\rm Aux},\downarrow\!:1\rangle\) in the first band. This is confirmed by the mass spectrometry image in panel (II) in (b), recorded at position (II) in (a), close to the left red dashed line. Here, a partial filling of the first BZ with atoms is observed. 
The dissociation process shows limited efficiency due to the small Franck-Condon overlap between the wave functions involved, similar to the case discussed in the context of Fig. 2(a) for the transition \(|{\rm Mol:1}\rangle\rightarrow|{\rm Aux},\downarrow\!:4\rangle\). Around the second red dashed line (\(\Delta f_{2}\)), the dissociation transition couples \(|{\rm Mol:2}\rangle\) to \(|{\rm Aux},\downarrow\!:2\rangle\). Both states belong to second bands with a sizable Franck-Condon overlap, such that the dissociation around \(\Delta f_{2}\) is notably more efficient than for \(\Delta f_{1}\), as seen by the nearly complete depletion of the molecule population in (a) around \(127\,\)kHz. Panel (III) in Fig. 3(b) confirms that the dissociated atoms in fact arise in the second band, giving rise to a filled second BZ for atoms, twofold increased as compared to the second BZ for molecules in panel (I). Note that \(\Delta f_{2}-\Delta f_{1}\) is approximately given by the separation of the second and first bands for atoms. A determination of \(\Delta f_{1},\Delta f_{2}\) below the scale of a few kHz is not supported by the width of the spectral features observed near the red dashed lines in Fig. 3(a). In Fig. 3(c), binding energies for molecules in the second band are shown, measured by analyzing dissociation spectra as in (a) for different magnetic fields. The grey solid line represents the theoretical prediction according to Eq. 3, showing remarkable agreement. Figure 3: _Binding energies in a shallow lattice._ (a) Dissociation spectrum for molecules initially prepared in the second band with a lattice depth parameter \(V_{0}^{(M)}=10\,E_{\rm rec}^{(M)}\) and \(\theta=0.54\,\pi\). The magnetic field is \(B=200.6\,\)G corresponding to the dark purple diamond in (c). Orange diamonds (dark purple disks) show populations of molecules (atoms) in the first band. Grey triangles (magenta squares) show populations of molecules (atoms) in the second band. The red dashed lines indicate the two frequencies \(\Delta f_{1}\) and \(\Delta f_{2}\), where dissociation maximally depletes the population of molecules in the second band. The error bars show the standard deviation of the mean for 15 experimental runs. (b) Mass spectrometry images for the values of \(\Delta f\) indicated by I, II, III in (a). The orange diamonds in (c) show binding energies obtained from spectra as plotted in (a) for varying magnetic fields. The errors are estimated from the spectral width of the underlying dissociation resonance to be less than \(5\,\)kHz, i.e. an order of magnitude smaller than the data symbols. The grey solid line presents the theoretical prediction of Eq. 3. _Molecular decay dynamics._ In this section we discuss the observation of two relaxation channels for Feshbach dimers [29]. The first process dominates for \(\xi\equiv(k_{F}a_{S})^{-1}\gg 1\) (\(k_{F}\equiv\) Fermi momentum), i.e., for relatively weak scattering lengths, where Feshbach dimers become deeply bound. In this regime, the primary relaxation process is based on inelastic dimer-dimer collisions, where one molecule gains binding energy, while the other is dissociated, such that both molecules are lost. Dimer-atom collisions are less relevant since our experiments start with a nearly pure molecule sample.
Larger binding energies provide larger Franck-Condon overlap between the involved molecular wave functions, so that the molecular lifetime \(\tau\) should decrease with \(\xi\), in accordance with the scaling \(\tau\propto(a_{S})^{2.55}\) predicted in absence of a lattice [30]. In the other extreme for \(\xi\ll 1\), according to Ref. [30], the elastic dimer-dimer scattering length \(a_{dd}\) scales as \(a_{dd}\approx 0.6\,a_{S}\) and hence due to the large size of \(a_{S}\), molecular three-body collisions are expected to introduce molecule loss and to give rise to a lifetime that decreases, when \(\xi\) approaches zero. In Fig. 4(a), we benchmark the lifetimes of the Feshbach molecules trapped in the approximately harmonic optical dipole trap. After the dimers are formed (Fig. 1(c)), the scattering length \(a_{S}\) is adjusted to a desired value via the magnetic field according to Eq. 1 and the dimers are held for a variable time in the trap. In order to count the remaining number of molecules \(n_{M}\), the magnetic field is rapidly tuned to \(B_{\rm img}=200.46\,\)G, associated with a moderate value of \(a_{S}\) and the molecules are allowed to ballistically expand (nearly unimpaired by interaction) during \(22\,\)ms, before an absorption image of the \(\ket{\uparrow}\) atoms is recorded. The dimer population plotted against the hold time is fitted with the two-body decay model \(\dot{n}_{M}=-\beta\,n_{M}^{2}\), with the solution \(n(t)=n_{0}\cdot(1+t/\tau)^{-1}\), the initial number of molecules \(n_{0}\equiv n(0)\), and the half-time of the molecule sample \(\tau\equiv(n_{0}\beta)^{-1}\). An exponential fit, assuming density independent loss, or a three-body decay model fail to describe the data. The obtained half-time \(\tau\) is plotted in Fig. 4(a) versus \(\xi\) and \(a_{S}\). The inset shows an exemplary measurement of \(\beta\) and \(n_{0}\) leading to the light green data point indicated by a black arrow. The main panel shows strikingly different behaviour in the unitarity regime \(\xi<1\), where a nearly constant half-time \(\tau\) around \(100\,\)ms is observed, and in the regime \(\xi>1\), where the data are well fitted with a straight line, indicating a power law \(\tau\propto\xi^{-\kappa}\) with an exponent \(\kappa=0.84\pm 0.05\). The largest values of \(\tau\) are found at \(\xi\approx 1\). Note that, \(\kappa\) does not agree with the prediction 2.55 for dimer-dimer collisions in Ref. [30] or the experimental value \(\approx 2.3\) for mixed dimer-dimer and dimer-atom collisions, reported in Ref. [2]. In Fig. 4(b), an analogous analysis is carried out in presence of a shallow optical lattice with \(V_{0}^{(M)}=8\,E_{\rm rec}^{(M)}\), which permits nearest-neighbour tunneling on a sub-millisecond time scale. The orange disks show the observed half-lives for molecules prepared in the first Bloch band with \(\theta=0.4\,\pi\) and hence \(\Delta V_{0}^{(M)}=-9.88\,E_{\rm rec}^{(M)}\). The magenta squares correspond to \(\theta=0.535\,\pi\) (i.e., \(\Delta V_{0}^{(M)}=3.51\,E_{\rm rec}^{(M)}\)) for molecules prepared in the second Bloch band. In the regime \(\xi>2\), both data sets show the same dependence on \(\xi\) and are fitted with the same straight line (black solid line in (b)), indicating a power law model \(\propto\xi^{-\kappa}\) with \(\kappa=1.28\pm 0.06\). The dashed black line is a continuation into the \(\xi<2\) domain. Note that also in the presence of a lattice, \(\kappa\) does not agree with the prediction 2.55 in Ref. 
[30] or the experimental value \(\approx 2.3\) from Ref. [2], both obtained for a scenario without a lattice. For \(\xi<1\), we find that \(\tau\) grows with increasing \(\xi\). While the data for the second band agree very well with the results for the first band in the entire range \(\xi>2\), for \(\xi<2\) a dramatic decrease of \(\tau\) is observed for the second band, which rises only up to a threefold shorter lifetime than observed for the first band. This indicates that an additional relaxation channel opens for the second band if \(\xi<2\). In fact, as illustrated in the two insets at the upper edge of Fig. 4(b), for \(\xi\) close to zero, e.g. for \(\xi\approx 0.1\), we see an initial pronounced decay of molecules into the first band, followed by a subsequent decay of the first band population, while in the domain \(\xi>2\), e.g., close to \(\xi\approx 6\), the molecules do not initially transit to the first band during relaxation. This may be explained as follows: For small \(\xi\), i.e., large \(a_{S}\), in the second band, the increasing elastic binary collision cross section can lead to a redistribution of energy from the lattice plane into the tube direction, which gives rise to an additional escape channel from the lattice. A similar relaxation channel for weakly interacting bosonic atoms has been recently reported in Ref. [31]. Also effects of reduced dimensionality in the tubular lattice sites are expected to be larger in the first than in the second band, which may also contribute to explain the longer lifetimes of first band dimers [16]. Figure 4: _Relaxation dynamics_. (a) Half-lives \(\tau\) of Feshbach dimers in the dipole trap plotted versus \(\xi\) (upper axis) and \(a_{S}\) (lower axis). The inset shows an exemplary measurement of \(\tau\) for the light green data point indicated by the black arrow. The black solid line is a fit for the domain \(\xi>2\) with a straight line, extrapolated to the domain \(\xi<2\), indicating a power law \(\propto\xi^{-\kappa}\) with \(\kappa=0.84\pm 0.05\). The orange disks and magenta squares in (b) show half-lives of Feshbach dimers prepared in the first and second Bloch bands of an optical lattice, respectively. The lattice depth is \(V_{0}^{(M)}=8\,E_{\rm rec}^{(M)}\) and \(\theta=0.4\,\pi\) for the first Bloch band and \(0.535\,\pi\) for the second Bloch band. The black solid line is a common fit for data of both bands within the domain \(\xi>2\) with a straight line, extrapolated to the domain \(\xi<2\) (dashed continuation), indicating a power law \(\propto\xi^{-\kappa}\) with \(\kappa=1.28\pm 0.06\). The error bars for \(\tau=(n_{0}\,\beta)^{-1}\) in (a) and (b) are determined by propagating the errors for \(n_{0}\) and \(\beta\) obtained in the fit procedure. The insets at the top boundary of (b) illustrate the different relaxation paths found in the second Bloch band for small and large \(\xi\) (cf. text). In summary, our work demonstrates strongly correlated ultracold Feshbach dimers prepared in higher Bloch bands of an optical lattice, which give rise to orbital physics. Using a method reminiscent of mass spectrometry, we find surprisingly long molecular lifetimes in the unitarity regime on the order of one hundred milliseconds and study binding energies as well as dissociation and relaxation dynamics. Our work prepares the stage for future studies of BEC-BCS crossover physics with orbital degrees of freedom.
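The half-times in Fig. 4 follow from fits of the two-body loss model quoted above, \(\dot{n}_{M}=-\beta\,n_{M}^{2}\) with solution \(n(t)=n_{0}(1+t/\tau)^{-1}\) and \(\tau=(n_{0}\beta)^{-1}\). A minimal fitting sketch is given below; the hold times and molecule numbers are invented placeholders, not the measured data of the Fig. 4(a) inset.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: molecule number versus hold time in ms.
t = np.array([0.0, 20.0, 50.0, 100.0, 200.0, 400.0])
n = np.array([5.0e4, 4.2e4, 3.3e4, 2.5e4, 1.7e4, 1.0e4])

def two_body(t, n0, tau):
    """n(t) = n0 / (1 + t/tau), solution of dn/dt = -beta n^2 with tau = 1/(n0*beta)."""
    return n0 / (1.0 + t / tau)

(n0_fit, tau_fit), _ = curve_fit(two_body, t, n, p0=(5e4, 100.0))
beta_fit = 1.0 / (n0_fit * tau_fit)
print(f"n0 = {n0_fit:.3g}, half-time tau = {tau_fit:.3g} ms, beta = {beta_fit:.3g} 1/ms")
```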
## Methods _Mass spectrometry protocol._ In order to uniquely distinguish and selectively count the populations of atoms and Feshbach dimers in the presence of an optical lattice, they are separated in a ballistic time-of-flight protocol, which maps the population of the \(n\)-th band to the \(n\)-th Brillouin zone (BZ). This protocol consists of a rapid adiabatic termination of the lattice potential (in 5 ms) followed by a ballistic expansion (in 22 ms). During the second half of the expansion the magnetic field is tuned to \(B_{\text{img}}=200.6\,\text{G}\) in order to enable absorption imaging at the same resonant frequency in each experimental run. Since the dimer mass \(M\) is twice the atomic mass \(m\), atoms travel twice as fast as homonuclear dimers with the same initial momentum. Hence, this protocol arranges the atoms and molecules in velocity space with a BZ structure scaled up by a factor of two for atoms as compared to dimers. This is sketched in Fig. 5. The figure shows the first and second BZs for atoms in the background (black and orange areas) and for molecules in the foreground (grey and red areas), giving rise to a nested structure of four white squares denoted 1-4. The optical densities integrated across the areas enclosed by these squares are denoted as \(\square_{\nu}\), \(\nu\in\{1,\ldots,4\}\). We denote the total populations as \(A_{\nu}\) and \(D_{\nu}\) for atoms and molecules in the band with band index \(\nu\in\{1,2\}\), respectively. If sectors in Fig. 5 comprise atoms and dimers, the counting protocol requires the assumption that the first band for atoms is uniformly filled. This assumption is reasonable since, in the experiments described in the main text, we typically start with the lowest band filled with fermionic atoms, form Feshbach dimers, and subsequently excite a fraction of them to the second band without changing their quasi-momenta [24]. Note that due to the bosonic nature of the Feshbach dimers, the respective Brillouin zones (red and grey areas in Fig. 5) are not necessarily filled homogeneously, since Bose-enhancement and hence multiple occupancy of available energy states are possible. Under the condition of a homogeneous filling of the first atomic band, the populations \(A_{1},A_{2},D_{1},D_{2}\) in different sectors of Fig. 5 are indicated in the figure, accounting for the four-fold rotation symmetry of the BZs and the four times larger BZ areas for atoms. This leads to the following relations: \(A_{2}=\square_{1}-\square_{2}\), \(A_{1}=2(\square_{2}-\square_{3})\), \(D_{2}=\frac{3}{2}\square_{3}-\square_{4}-\frac{1}{2}\square_{2}\), and \(D_{1}=\frac{1}{2}\square_{3}-\frac{1}{2}\square_{2}+\square_{4}\). If all atoms and molecules reside in the second band, i.e. \(A_{1}=D_{1}=0\), the simple case \(A_{2}=\square_{1}-\square_{2}\) and \(D_{2}=\square_{3}-\square_{4}\) arises, in which case the condition of uniform filling is not required. ## Acknowledgments We thank Raphael Eichberger for help in the early stage of the experiment. We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) through the collaborative research center SFB 925 (Project No. 170620586, C1). M.H. was partially supported by the Cluster of Excellence CUI: Advanced Imaging of Matter of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - project ID 390715994.
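The band-counting relations stated in the Methods section translate directly into a few lines of arithmetic. The sketch below simply restates them; the numerical optical densities are invented for illustration and do not correspond to any measured image.

```python
def band_populations(box1, box2, box3, box4):
    """Atomic (A) and molecular (D) band populations from the optical densities
    integrated over the four nested squares of the mass spectrometry images,
    assuming a uniformly filled first atomic band."""
    A2 = box1 - box2
    A1 = 2.0 * (box2 - box3)
    D2 = 1.5 * box3 - box4 - 0.5 * box2
    D1 = 0.5 * box3 - 0.5 * box2 + box4
    return {"A1": A1, "A2": A2, "D1": D1, "D2": D2}

# Example with all particles in the second band (A1 = D1 = 0): the relations
# reduce to A2 = box1 - box2 and D2 = box3 - box4.
print(band_populations(box1=1.0, box2=0.4, box3=0.4, box4=0.0))
```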
2305.01453
Nonlinear Isocapacitary Concepts of Mass in 3-Manifolds with Nonnegative Scalar Curvature
We deal with suitable nonlinear versions of Jauregui's isocapacitary mass in 3-manifolds with nonnegative scalar curvature and compact outermost minimal boundary. These masses, which depend on a parameter $1<p\leq 2$, interpolate between Jauregui's mass $p=2$ and Huisken's isoperimetric mass, as $p \to 1^+$. We derive positive mass theorems for these masses under mild conditions at infinity, and we show that these masses do coincide with the ADM mass when the latter is defined. We finally work out a nonlinear potential theoretic proof of the Penrose inequality in the optimal asymptotic regime.
Luca Benatti, Mattia Fogagnolo, Lorenzo Mazzieri
2023-05-02T14:33:01Z
http://arxiv.org/abs/2305.01453v2
# Nonlinear isocapacitary concepts of mass in 3-manifolds with nonnegative scalar curvature ###### Abstract. We deal with suitable nonlinear versions of Jauregui's Isocapacitary mass in \(3\)-manifolds with nonnegative scalar curvature and compact outermost minimal boundary. These masses, which depend on a parameter \(1<p\leq 2\), interpolate between Jauregui's mass \(p=2\) and Huisken's Isoperimetric mass, as \(p\to 1^{+}\). We derive Positive Mass Theorems for these masses under mild conditions at infinity, and we show that these masses do coincide with the ADM mass when the latter is defined. We finally work out a nonlinear potential theoretic proof of the Penrose Inequality in the optimal asymptotic regime. MSC (2020): 83C99, 35B40, 35A16, 31C15, 53C21. Keywords: Penrose inequality, positive mass theorem, isoperimetric mass, isocapacitary mass, nonlinear potential theory, geometric inequalities. _Dedicated to Jean-Pierre Bourguignon on the occasion of his 75th birthday_ ## 1. Introduction The Isoperimetric concept of mass was introduced by Huisken [14] to study manifolds with nonnegative scalar curvature where asymptotic assumptions on the metric are not strong enough to define the classic ADM mass [1]. Given a manifold \((M,g)\), the Isoperimetric mass is indeed defined as \[\mathfrak{m}_{\text{iso}}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+\infty}\mathfrak{m}_{\text{iso}}(\Omega_{j}), \tag{1.1}\] where the supremum is taken among all exhaustions \((\Omega_{j})_{j\in\mathbb{N}}\) consisting of domains with \(\mathscr{C}^{1}\)-boundary and \[\mathfrak{m}_{\text{iso}}(\Omega)=\frac{2}{|\partial\Omega|}\left(|\Omega|-\frac{|\partial\Omega|^{\frac{3}{2}}}{6\sqrt{\pi}}\right).\] Unlike the ADM mass, which is defined on an asymptotically flat chart at infinity as \[\mathfrak{m}_{\text{ADM}}=\lim_{j\to+\infty}\frac{1}{16\pi}\int_{\partial\Omega_{j}}g^{ij}(\partial_{k}g_{ij}-\partial_{i}g_{kj})\nu^{k}\,\mathrm{d}\sigma\,,\] the Isoperimetric mass does not require passing to a chart to be defined. Rather, it is based on the geometric concepts of volume and perimeter, making it well-defined even when there is limited information on the asymptotic behaviour of the metric. Inspired by this observation, in [1], we proved a Riemannian Penrose Inequality [14, 15] for the Isoperimetric mass in the class of _strongly \(1\)-nonparabolic_ Riemannian manifolds with nonnegative scalar curvature. With the locution _strongly \(1\)-nonparabolic_ manifolds, we denote manifolds on which any bounded \(\Omega\subset M\), whose boundary is homologous to \(\partial M\), admits a proper locally Lipschitz weak Inverse Mean Curvature Flow (IMCF for short), that is a solution \(w_{1}\) to the problem \[\left\{\begin{aligned} \operatorname{div}\left(\frac{\operatorname{D}w_{1}}{|\operatorname{D}w_{1}|}\right)&=|\operatorname{D}w_{1}|&\text{on }M\smallsetminus\Omega,\\ w_{1}&=0&\text{on }\partial\Omega,\\ w_{1}&\to+\infty&\text{as }\operatorname{d}(x,\partial\Omega)\to+\infty,\end{aligned}\right. \tag{1.2}\] according to the definition introduced in [14]. The analysis leading to the isoperimetric Riemannian Penrose Inequality in [1] was carried out using a new asymptotic comparison between the Hawking mass (see (2.9) below) and the Isoperimetric mass along the level sets of the weak IMCF.
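As a quick illustration of the normalisation entering (1.1) (a standard computation, not part of the argument above), consider a Euclidean ball \(B_{R}\subset\mathbb{R}^{3}\), for which \(|B_{R}|=\frac{4}{3}\pi R^{3}\) and \(|\partial B_{R}|=4\pi R^{2}\); then \[\mathfrak{m}_{\text{iso}}(B_{R})=\frac{2}{4\pi R^{2}}\left(\frac{4}{3}\pi R^{3}-\frac{(4\pi R^{2})^{\frac{3}{2}}}{6\sqrt{\pi}}\right)=\frac{1}{2\pi R^{2}}\left(\frac{4}{3}\pi R^{3}-\frac{4}{3}\pi R^{3}\right)=0,\] so that exhausting flat \(\mathbb{R}^{3}\) by concentric balls yields vanishing Isoperimetric mass, consistently with the flat model carrying no mass.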
In the present paper, we are going to develop a similar theory, in the case where the weak IMCF (1.2) is replaced by the level set flow of weak solutions \(w_{p}\in\mathscr{C}^{1,\beta}_{\operatorname{loc}}(M\smallsetminus\Omega)\) to the boundary value problem \[\left\{\begin{aligned} \Delta_{p}w_{p}&=\,|\operatorname{D}w_{p}|^{p}&\text{on }M\smallsetminus\operatorname{Int}\Omega,\\ w_{p}&=0&\text{on }\partial\Omega,\\ w_{p}&\to+\infty&\text{as }\operatorname{d}(x,\partial\Omega)\to+\infty.\end{aligned}\right. \tag{1.3}\] The link between the above problem and the weak IMCF (1.2) relies on the fact that \(w_{p}\to w_{1}\) as \(p\to 1^{+}\) locally uniformly on \(M\) [16, 17, 18, 19], provided some natural global requirements are met by the manifold \((M,g)\). On the other hand, the solutions to problem (1.3) are deeply connected to the notion of \(p\)-capacitary potential of a compact body \(\Omega\). In fact, setting \(w_{p}=-(p-1)\log u_{p}\) implies that \(u_{p}\) is \(p\)-harmonic, that is \(\Delta_{p}u_{p}=0\). These relationships have been instrumental in demonstrating a series of geometric inequalities by means of Monotonicity Formulas, holding along the level sets of solutions to equation (1.3). As \(p\to 1^{+}\), these inequalities become increasingly close to the desired result. This machinery, first introduced in the case \(p=2\) for harmonic functions in [1, 1], has proven to be powerful enough to produce an enhanced version of the Minkowski Inequality [10, 1], later extended to Riemannian manifolds with nonnegative Ricci curvature [1] as well as to the anisotropic setting [11]. In [1] and subsequently in [1], the authors used this approach on \(3\)-manifolds with nonnegative scalar curvature to prove the Riemannian Penrose Inequality for a single black hole, based on the monotonic behaviour of a suitable \(p\)-harmonic version of the Hawking mass (see (2.10) below). _Throughout the manuscript, Riemannian manifolds are assumed to be connected, with one single end._ The main object of interest in the present paper is the following nonlinear potential theoretic version of Huisken's Isoperimetric mass (1.1), that we call the _\(p\)-Isocapacitary mass_. **Definition 1.1** (\(p\)-Isocapacitary mass).: _Let \((M,g)\) be a noncompact \(3\)-dimensional Riemannian manifold, and let \(1<p<3\). Given a closed bounded subset \(\Omega\subset M\) containing \(\partial M\) with \(\mathscr{C}^{1}\)-boundary, the quasi-local \(p\)-Isocapacitary mass of \(\Omega\) is defined as_ \[\mathfrak{m}^{(p)}_{\operatorname{iso}}(\Omega)=\frac{1}{2p\pi\mathfrak{c}_{p}(\partial\Omega)^{\frac{2}{3-p}}}\left(|\Omega|-\frac{4\pi}{3}\mathfrak{c}_{p}(\partial\Omega)^{\frac{3}{3-p}}\right).\] _The \(p\)-Isocapacitary mass \(\mathfrak{m}^{(p)}_{\operatorname{iso}}\) of \((M,g)\) is defined as_ \[\mathfrak{m}^{(p)}_{\operatorname{iso}}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+\infty}\mathfrak{m}^{(p)}_{\operatorname{iso}}(\partial\Omega_{j}), \tag{1.4}\] _where the supremum is taken among all exhaustions \(\left\{\Omega_{j}\right\}_{j\in\mathbb{N}}\)._ The special and particularly relevant case of the \(2\)-isocapacitary mass has been recently introduced and studied by Jauregui [14]. A first natural and fundamental question about the newly introduced quantities, i.e., the \(p\)-Isocapacitary masses, is whether they are nonnegative on the class of \(3\)-manifolds with nonnegative scalar curvature, where a solution to (1.3) exists for any bounded \(\Omega\) with regular boundary.
The latter mentioned property will be referred to as the strong \(p\)-nonparabolicity of the manifold \((M,g)\), a terminology that interpolates between the notion of strong nonparabolicity introduced by Ni [10] and the notion of strong \(1\)-nonparabolicity, which was employed in [1]. Our first main result is a nonlinear potential-theoretic version of the Riemannian Penrose Inequality, that, although not sharp, implies the Positive Mass Theorem for the \(p\)-Isocapacitary mass, with its associated rigidity statement. We prove its validity under the following asymptotic integral gradient estimate: 1. Given any \(\Omega\subset M\) closed bounded with smooth and connected boundary homologous to \(\partial M\), the function \(w_{p}\in\mathscr{C}^{1}_{\mathrm{loc}}(M\smallsetminus\overline{\Omega})\) solution to (1.3) satisfies \[\int_{\partial\Omega_{t}}\left|\mathrm{D}w_{p}\right|^{2}\mathrm{d}\sigma=o( \mathrm{e}^{t/(p-1)})\] as \(t\to+\infty\) where \(\Omega_{t}=\{w_{p}\leq t\}\). **Theorem 1.2** (\(p\)-Isocapacitary Riemannian Penrose Inequality).: _Let \((M,g)\) be a complete, strongly \(p\)-nonparabolic noncompact Riemannian \(3\)-manifold fulfilling \((\dagger)\) for some \(1<p<3\) with nonnegative scalar curvature and with smooth, compact, connected, minimal, possibly empty boundary. Assume also that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Then,_ \[\mathfrak{c}_{p}(\partial M)^{\frac{1}{3-p}}\leq 2\mathfrak{m}^{(p)}_{ \mathrm{iso}}.\] _In particular \(\mathfrak{m}^{(p)}_{\mathrm{iso}}\geq 0\) and it vanishes if and only if \((M,g)\) is isometric to the flat \(3\)-dimensional Euclidean space._ The condition \((\dagger)\) is actually very mild. As we are going to detail in Remark 3.3, it is always fulfilled on manifolds that are merely \(\mathscr{C}^{0}\)-asymptotically flat, provided a suitable Ricci lower bound is also satisfied. \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian manifolds are also fulfilling condition \((\dagger)\) for \(1<p\leq 2\), as proved in Lemma 2.6. This latter class of manifolds is particularly natural in the framework of Mathematical General Relativity, as the works of Bartnik [14] and Chrusciel [15] showed that the ADM mass is well defined on \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian manifolds, with \(\tau>1/2\). In fact, our second main result shows that on \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian \(3\)-manifolds with nonnegative scalar curvature the \(p\)-Isocapacitary masses do coincide with the ADM mass for any \(1\leq p\leq 2\). This fact was previously known only for \(p=2\) and only for harmonically flat manifolds, as proven by Jauregui in the insightful paper [14] see Corollary 8 there. **Theorem 1.3**.: _Let \((M,g)\) be a complete, \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature, \(\tau>1/2\), with (possibly empty) smooth, compact, minimal boundary. Then,_ \[\mathfrak{m}^{(p)}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{ \mathrm{ADM}}\] _for all \(1<p\leq 2\)._ In the proof of Theorem 1.3, the inequality \(\mathfrak{m}^{(p)}_{\mathrm{iso}}\geq\mathfrak{m}_{\mathrm{iso}}\) is deduced substantially arguing as in [14, Theorem 5], combining some of the computations in [11] together with an extension of the main estimate in [1] to the case \(p\neq 2\) (see Proposition 2.14). 
The reverse inequality, for which the harmonically flat condition was invoked in [11], is instead proven by integrating the sharp Isoperimetric Inequality in [12, Corollary C.4] to obtain a sharp \(p\)-Isocapacitary Inequality in terms of the Isoperimetric mass, Theorem 5.5. This last step is inspired by the classical derivation of the sharp \(p\)-Isocapacitary Inequality from the sharp Isoperimetric inequality, as in [1, Theorem 4.1]. The identification with the ADM mass finally follows from [1, Theorem 4.13], where it was showed to coincide with the Isoperimetric mass in the above optimal regime, sharpening [1, Theorem 3]. It is natural to conjecture that the equivalence among \(p\)-Isocapacitary masses also holds under weaker asymptotic assumptions, where the ADM mass may not even be well defined. In this direction, in the generality of \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifold satisfying \((\dagger)\) for \(1<p\leq 2\), we will prove the following two-sided estimate (see Lemma 5.1 and Theorem 5.6). \[\mathfrak{m}_{\mathrm{iso}}^{(p)}\leq\mathfrak{m}_{\mathrm{iso}}\leq\left( \frac{2^{2p-1}\pi^{\frac{p-1}{2}}p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}{(3-p) (p-1)^{p-1}}\right)^{\frac{1}{3-p}}\mathfrak{m}_{\mathrm{iso}}^{(p)},\] where \(\mathrm{C}_{S}\) is the global Sobolev constant on \((M,g)\). The lower bound is optimal, while the upper bound sharpens as \(p\to 1^{+}\). As a consequence, in this generality \(\mathfrak{m}_{\mathrm{iso}}\) can at least be recovered as the limit of its \(p\)-capacitary versions as \(p\to 1^{+}\). In conclusion, we propose an alternative proof of the Riemannian Penrose Inequality in the sharp asymptotic regime given in [1], that is for \(\mathscr{C}_{\tau}^{1}\)-Asymptotically Flat Riemannian \(3\)-manifolds, with \(\tau>1/2\). In this previous work, we exploited the better asymptotic behaviour of harmonic functions and the monotonicity of the \(2\)-Hawking mass discovered in [1] to improve the original argument by Huisken and Ilmanen [13] based on the IMCF, as far as the asymptotic analysis at infinity is concerned. Replacing the IMCF with the level sets flow of the solutions \(w_{p}\) to (1.3), we obtain a nonsharp family of \(p\)-Penrose Inequalities in terms of the \(p\)-capacity of the horizon. These then provide the optimal and classical Riemannian Penrose Inequality in the limit as \(p\to 1^{+}\). Recalling that a minimal boundary \(\partial M\) is outermost if no other closed minimal surface homologous to \(\partial M\) is contained in \(M\), we can now state the last main result of the paper. **Theorem 1.4**.: _Let \((M,g)\) be a complete \(\mathscr{C}_{\tau}^{1}\)-Asymptotically Flat Riemannian \(3\)-manifold, \(\tau>1/2\), with nonnegative scalar curvature and smooth, compact, minimal, connected and outermost boundary. Then,_ \[\mathfrak{c}_{p}(\partial M)^{\frac{1}{3-p}}\leq 2\mathfrak{m}_{\mathrm{ADM}} \tag{1.5}\] _for any \(1<p\leq 2\). Letting \(p\to 1^{+}\), we get_ \[\sqrt{\frac{|\partial M|}{16\pi}}\leq\mathfrak{m}_{\mathrm{ADM}}. \tag{1.6}\] ### Outline of the paper In Section 2 we gather some basic facts about \(p\)-harmonic potentials. The content of this section is substantially well known. In Section 3 we work out the main asymptotic comparison at infinity between the \(p\)-Hawking mass and the quasi-local \(p\)-Isocapacitary mass, see Lemma 3.2. We deduce the nonsharp Riemannian Penrose inequality Theorem 1.2 for the \(p\)-Isocapacitary mass. 
In Section 5 we show relations among the \(p\)-Isocapacitary masses for the various values of \(p\), in turn obtaining Theorem 1.3. Finally, in Appendix A we include a proof of the full monotonicity of the \(p\)-Hawking mass, since the original [1, Theorem 1.1] actually yields such result only along regular values. We will also relate such quantity with a similar one considered in [1] that will naturally appear in the asymptotic comparison argument ruling our main results. ### Acknowledgements Part of this work has been carried out during the authors' attendance to the _Thematic Program on Nonsmooth Riemannian and Lorentzian Geometry_ that took place at the Fields Institute in Toronto. The authors warmly thank the staff, the organizers and the colleagues for the wonderful atmosphere and the excellent working conditions set up there. L.B. is supported by the European Research Council's (ERC) project n.853404 ERC VaReg - _Variational approach to the regularity of the free boundaries_, financed by the program Horizon 2020, by PRA_2022_11 and by PRA_2022_14. M.F. has been supported by the European Union - NextGenerationEU and by the University of Padova under the 2021 STARS Grants@Unipd programme "QuASAR". The authors are members of Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA), which is part of the Istituto Nazionale di Alta Matematica (INdAM), and are partially funded by the GNAMPA project "Problemi al bordo e applicazioni geometriche". The authors are grateful to S. Hirsch and F. Oronzio for their interest in the work and for pleasureful and useful conversations on the subject. ## 2. Preliminaries in Nonlinear Potential Theory As far as basic principles and regularity for \(p\)-harmonic functions are concerned, we just refer the reader to [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], and [11, 12] for the theory of \(p\)-capacitary potentials in exterior domains (see also [1, Chapter 1] and reference therein), including existence issues. Since \(w_{p}=-(p-1)\log u_{p}\) where \(u_{p}\in\mathscr{C}^{1,\beta}_{\mathrm{loc}}(M\smallsetminus\operatorname{Int} \Omega)\) is the solution to \[\begin{cases}\begin{aligned} \Delta^{(p)}_{g}u_{p}& =\,0\qquad\text{ on }M\smallsetminus\operatorname{Int}\Omega,\\ u_{p}&=\,1\qquad\text{ on }\partial\Omega,\\ u_{p}&\to 0\qquad\text{ as }\mathrm{d}(x,\partial\Omega) \to+\infty,\end{aligned}\end{cases} \tag{2.1}\] the following definition of strongly \(p\)-nonparabolic Riemannian manifold is consistent with that of strong nonparabolicity [15] and with limit case of strong \(1\)-nonparabolicity [10]. **Definition 2.1** (strongly \(p\)-nonparabolic).: _We say that \((M,g)\) with (possibly empty) compact boundary is strongly \(p\)-nonparabolic, \(1<p<3\), if there exists a solution to (1.3) for some \(\Omega\subseteq M\) closed bounded with smooth boundary homologous to \(\partial M\)._ **Remark 2.2**.: _By the Maximum Principle, in a strongly \(p\)-nonparabolic manifold every \(\Omega\) with \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) admits a solution to (1.3)._ This definition naturally comes with the notion of the \(p\)-capacity of a compact subset \(K\subset M\), which is \[\mathfrak{c}_{p}(K)=\inf\Biggl{\{}\frac{1}{4\pi}\left(\frac{p-1}{3-p}\right)^ {p-1}\int_{M\smallsetminus K}\left|\mathrm{D}v\right|^{p}\mathrm{d}\mu\,\Bigg{|} \,v\in\mathscr{C}^{\infty}_{c}(M),\,v\geq 1\text{ on }K\Biggr{\}}.\] The \(p\)-capacity of the level sets of solutions to (1.3) exponentially grows. 
This is completely analogous to the exponential growth of the area along the IMCF. We recall this useful property in the following lemma. **Lemma 2.3**.: _Let \((M,g)\) be a \(3\)-dimensional Riemannian manifold with (possibly empty) boundary \(\partial M\). Let \(\Omega\subseteq M\) be a closed bounded subset with \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) and let \(w_{p}\) be the solution to (1.3). Then, denoting \(\Omega_{t}=\{w_{p}\leq t\}\) we have_ \[\mathfrak{c}_{p}(\partial\Omega_{t})=\mathrm{e}^{t}\,\mathfrak{c}_{p}(\partial\Omega)=\frac{1}{4\pi}\int_{\partial\Omega_{t}}\left(\frac{\left|\mathrm{D}w_{p}\right|}{3-p}\right)^{p-1}\,\mathrm{d}\sigma.\] ### Estimates on Asymptotically Flat Riemannian manifolds We now give the precise definition of Asymptotically Flat \(3\)-manifolds. **Definition 2.4** (Asymptotically Flat Riemannian manifolds).: _A \(3\)-dimensional Riemannian manifold \((M,g)\) with (possibly empty) boundary is said to be \(\mathscr{C}^{k}_{\tau}\)-Asymptotically Flat, \(k\in\mathbb{N}\) and \(\tau>0\) (\(\tau=0\) resp.), if the following conditions are satisfied._ 1. _There exists a compact set_ \(K\subseteq M\) _such that_ \(M\smallsetminus K\) _is diffeomorphic to_ \(\mathbb{R}^{3}\smallsetminus\{|x|\leq R\}\)_, through a map_ \((x^{1},x^{2},x^{3})\) _whose components are called_ asymptotically flat coordinates_._ 2. _In the chart_ \((M\smallsetminus K,(x^{1},x^{2},x^{3}))\) _the metric tensor is expressed as_ \[g=g_{ij}\mathrm{d}x^{i}\otimes\mathrm{d}x^{j}=(\delta_{ij}+\eta_{ij})\mathrm{d}x^{i}\otimes\mathrm{d}x^{j}\] _with_ \[\sum_{i,j=1}^{3}\sum_{|\beta|=0}^{k}|x|^{|\beta|+\tau}|\partial_{\beta}\eta_{ij}|=O(1)\text{ ($=o(1)$ resp.)}\qquad\qquad\text{as $|x|\to+\infty$}.\] \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifolds are in particular strongly \(p\)-nonparabolic. This is a consequence of [14, Theorem 3.6], which implies it under the mere existence of a global (even weighted) Sobolev inequality. **Remark 2.5**.: _The content of this Section can be extended to any dimension \(n\geq 3\) with obvious modifications._ We first point out a decay estimate for the gradient of \(u_{p}\) holding on \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian manifolds. It is immediately obtained as a consequence of the Cheng-Yau inequality for \(p\)-harmonic functions [13] when the Ricci curvature is quadratically asymptotically nonnegative (see also [1, Proposition 2.27]). However, as the proof presented in [13] is purely integral, integrating by parts the term containing the Ricci curvature and exploiting the \(\mathscr{C}^{1}\) decay of the metric leads to the following. **Lemma 2.6**.: _Let \((M,g)\) be a \(\mathscr{C}^{1}\)-Asymptotically Flat \(3\)-manifold. Let \(\Omega\subset M\) be a closed bounded set with smooth boundary, and let \(u_{p}\) be the solution to (2.1), for \(1<p<3\). Then, there exist \(\mathrm{C}>0\) and \(R>0\) such that_ \[|\mathrm{D}u_{p}|(x)\leq\mathrm{C}\frac{u_{p}(x)}{|x|} \tag{2.2}\] _on \(\{|x|\geq R\}\)._ Proof.: We drop the subscript \(p\). Let \(|x|\geq R\) for some \(R\) large enough. We explain how to modify the proof of the Cheng-Yau inequality in the ball \(B=B_{|x|/2}(x)\) presented in [13] in order to exploit the \(\mathscr{C}^{1}\)-knowledge of the coefficients of the metric in place of the Ricci lower bound.
The proof begins with integrating a version of the Bochner formula and obtaining \[\begin{split}\int_{B}\mathscr{L}(f)\psi\,\mathrm{d}\mu=2\int_{B}f^{\frac{p}{2}-1}|\mathrm{D}\mathrm{D}u|^{2}\psi\,\mathrm{d}\mu+\Big{(}\frac{p}{2}-1\Big{)}\int_{B}|\mathrm{D}f|^{2}f^{\frac{p}{2}-2}\psi\,\mathrm{d}\mu\\ +\int_{B}f^{\frac{p}{2}-1}\operatorname{Ric}(\mathrm{D}u,\mathrm{D}u)\psi\,\mathrm{d}\mu\end{split} \tag{2.3}\] for any \(\psi\in\mathscr{C}^{\infty}_{c}(B)\), where \(f=|\mathrm{D}u|^{2}\) and \[\mathscr{L}(f)=\operatorname{div}\left[f^{p/2-1}\left(\mathrm{D}f+(p-2)\langle\mathrm{D}u\,|\,\mathrm{D}f\rangle\frac{\mathrm{D}u}{|\mathrm{D}u|^{2}}\right)\right]-pf^{p/2-1}\langle\mathrm{D}u\,|\,\mathrm{D}f\rangle.\] The only term containing the second derivatives of the metric is the one involving the Ricci curvature tensor. It can be written as \[\int_{B}f^{\frac{p}{2}-1}\operatorname{Ric}(\operatorname{D}u,\operatorname{D}u)\psi\operatorname{d}\!\mu=\int_{B}(\partial_{k}\Gamma_{ij}^{k}-\partial_{i}\Gamma_{kj}^{k}+\Gamma_{ij}^{k}\Gamma_{km}^{m}-\Gamma_{ik}^{m}\Gamma_{jm}^{k})\operatorname{D}^{i}u\operatorname{D}^{j}u\,f^{\frac{p}{2}-1}\psi\operatorname{d}\!\mu. \tag{2.4}\] The terms containing products of Christoffel symbols can be estimated using the \(\mathscr{C}^{1}\)-asymptotic behaviour of the metric. Indeed, \[\int_{B}(\Gamma_{ij}^{k}\Gamma_{km}^{m}-\Gamma_{ik}^{m}\Gamma_{jm}^{k})\operatorname{D}^{i}u\operatorname{D}^{j}u\,f^{\frac{p}{2}-1}\psi\operatorname{d}\!\mu\geq-\frac{\operatorname{C}}{\left|x\right|^{2}}\int_{B}f^{\frac{p}{2}}\psi\operatorname{d}\!\mu. \tag{2.5}\] On the other hand, using integration by parts we have \[\begin{split}\int_{B}\partial_{k}\Gamma_{ij}^{k}\operatorname{D}^{i}u\operatorname{D}^{j}u\,f^{\frac{p}{2}-1}\psi\operatorname{d}\!\mu&=-\int_{B}\Gamma_{ij}^{k}\partial_{k}(\operatorname{D}^{i}u\operatorname{D}^{j}u\,f^{\frac{p}{2}-1}\psi\sqrt{\det g})\operatorname{d}\!\mu_{\delta}\\ &\geq-\operatorname{C}\int_{B}\frac{1}{\left|x\right|}|\operatorname{DD}u||\operatorname{D}u|\,f^{\frac{p}{2}-1}\psi+f^{\frac{p}{2}}\left(\frac{\psi}{\left|x\right|^{2}}+\frac{\left|\operatorname{D}\psi\right|}{\left|x\right|}\right)\,\operatorname{d}\!\mu\\ &\geq-\operatorname{C}\int_{B}\varepsilon^{2}|\operatorname{DD}u|^{2}f^{\frac{p}{2}-1}\psi+f^{\frac{p}{2}}\left(\frac{\psi}{\varepsilon^{2}\left|x\right|^{2}}+\frac{\left|\operatorname{D}\psi\right|}{\left|x\right|}\right)\,\operatorname{d}\!\mu\end{split} \tag{2.6}\] for any \(\varepsilon>0\), where we employed Young's inequality in the last step. Observe that we used the \(\mathscr{C}^{1}\)-asymptotic behaviour of the metric not only to control \(\left|\partial g\right|\), but also to estimate \(\left|\partial\partial u\right|^{2}\) in terms of \(\left|\operatorname{DD}u\right|^{2}\) and \(\left|x\right|^{-2}\left|\operatorname{D}u\right|^{2}\). We can deal with the remaining term in the same way. Combining (2.4), (2.5) and (2.6), we finally get \[\int_{B}f^{\frac{p}{2}-1}\operatorname{Ric}(\operatorname{D}u,\operatorname{D}u)\psi\operatorname{d}\!\mu\geq-\operatorname{C}\left[\int_{B}\varepsilon^{2}|\operatorname{DD}u|^{2}f^{\frac{p}{2}-1}\psi+f^{\frac{p}{2}}\left(\frac{\psi}{\varepsilon^{2}|x|^{2}}+\frac{\left|\operatorname{D}\psi\right|}{\left|x\right|}\right)\,\operatorname{d}\!\mu\right]. \tag{2.7}\] Following the argument in [23, Theorem 1.1], one can choose \(\psi=f^{b}\eta^{2}\) for some \(b>1\) and with \(\left|\operatorname{D}\eta\right|\leq\operatorname{C}\eta/\left|x\right|\). 
With this specification, the integrand of the last term in (2.7) is pointwise estimated by \[\frac{\left|\operatorname{D}\!\psi\right|}{\left|x\right|}f^{\frac{p}{2}}\leq \operatorname{C}\!\psi\left(\varepsilon^{2}|\operatorname{DD}u|^{2}f^{\frac{p} {2}-1}+\frac{1}{\varepsilon^{2}|x|^{2}}f^{\frac{p}{2}}\psi\right).\] The last term in (2.7) can be absorbed in the others. Plugging this into (2.3), we deduce \[\int_{B}\mathscr{L}(f)\psi\operatorname{d}\!\mu\geq(2-\varepsilon ^{2})\int_{B}f^{\frac{p}{2}-1}|\operatorname{DD}u|^{2}\psi\operatorname{d}\!\mu +\Big{(}\frac{p}{2}-1\Big{)} \int_{B}\left|\operatorname{D}\!f\right|^{2}f^{\frac{p}{2}-2} \psi\operatorname{d}\!\mu \tag{2.8}\] \[-\frac{\operatorname{C}}{\varepsilon^{2}|x|^{2}}\int_{B}f^{\frac{p }{2}}\psi\operatorname{d}\!\mu\] for any \(\varepsilon>0\). Plugging now the last displayed identity at the bottom of [23, p. 763] into (2.8), we get, choosing \(\varepsilon>0\) small enough (depending only on \(p\)), the inequality [23, (2.3)], with \(\kappa\) given by a suitable uniform constant multiplying \(\left|x\right|^{2}\). From this point on, the proof can be followed line by line, and yields the Cheng-Yau inequality of [23, Theorem 1.1] in terms of \(\kappa\) above in the ball \(B_{\left|x\right|/4}(x)\). This is exactly the claimed (2.2). **Remark 2.7**.: _We can rewrite the above estimate in terms of \(w_{p}=-(p-1)\log u_{p}\). It reads_ \[\left|\operatorname{D}\!w_{p}(x)\right|\leq\frac{\operatorname{C}}{\left|x \right|}\] _on \(\left\{\left|x\right|\geq R\right\}\), for \(R\) large enough and some positive constant \(\operatorname{C}>0\) depending on \(p\)._ The following is a double-sided control on the solution \(u_{p}\) to (2.1) of a bounded \(\Omega\subset M\) with smooth boundary with respect to the Euclidean distance. **Lemma 2.8**.: _Let \((M,g)\) be a \(\mathscr{C}^{1}\)-Asymptotically Flat \(3\)-manifold. Let \(\Omega\subset M\) be a closed bounded set with smooth boundary, and let \(u_{p}\) be the solution to (2.1), for \(1<p<3\). Then, there exists \(\mathrm{C}>0\) and \(R>0\) such that_ \[\mathrm{C}^{-1}|x|^{-\frac{3-p}{p-1}}\leq u_{p}\leq\mathrm{C}|x|^{-\frac{3-p}{ p-1}}\] _on \(\{|x|\geq R\}\)._ Proof.: The rightmost inequality follows by [14, Theorem 3.6], since having a positive Isoperimetric constant is equivalent to having a global Sobolev inequality. Integrating Lemma 2.6 as in [13] we have a Harnack Inequality holding on large coordinate spheres \[\max_{\{|x|=r\}}u_{p}\leq\mathrm{C}_{p}\min_{\{|x|=r\}}u_{p},\] where \(\mathrm{C}_{p}\) does not depend on \(r\). We are now committed to proving that \[\max_{\{|x|=r\}}u_{p}\geq\mathrm{C}r^{-\frac{3-p}{p-1}},\] which concludes the proof. Let \(m=\max\{u(x)\,|\,|x|=r\}\). 
Then \[\mathfrak{c}_{p}(\{|x|\leq r\})\geq\mathfrak{c}_{p}(\{u\geq m\})=m^{-(p-1)} \mathfrak{c}_{p}(\partial\Omega).\] Hence, using [11, Theorem 2.6] (see also [10, Proposition 5.9]) we have \[m\,\mathfrak{c}_{p}(\partial\Omega)^{-\frac{1}{p-1}} \geq\mathfrak{c}_{p}(\{|x|\leq r\})^{-\frac{1}{p-1}}\geq\sum_{j=0 }^{+\infty}(\mathfrak{c}_{p}(\{|x|\leq 2^{j}r\},\{|x|\leq 2^{j+1}r\}))^{\frac{1}{p-1}}\] \[\geq\frac{p-1}{3-p}(4\pi)^{-\frac{1}{p-1}}\sum_{j=0}^{+\infty} \left(\frac{(2^{j}r)^{p}}{|\{|x|\leq 2^{j+1}r\}|}\right)^{\frac{1}{p-1}}\] \[\geq\frac{p-1}{3-p}(4\pi)^{-\frac{1}{p-1}}\int_{2r}^{+\infty} \left(\frac{t}{|\{|x|\leq t\}|}\right)^{\frac{1}{p-1}}\,\mathrm{d}t.\] Since \(|\{|x|\leq t\}|=t^{3}(4\pi/3+o(1))\) as \(t\to+\infty\), we can choose \(R\) such that \(|\{|x|\leq t\}|\leq\mathrm{C}t^{3}\) for every \(r\geq R\), then \[m\geq\mathrm{C}\int_{2r}^{+\infty}\left(\frac{t}{|\{|x|\leq t\}|}\right)^{ \frac{1}{p-1}}\,\mathrm{d}t\geq\mathrm{C}r^{-\frac{3-p}{p-1}},\] which concludes the proof. **Corollary 2.9**.: _We can rewrite the above estimates in terms of \(w_{p}=-(p-1)\log u_{p}\). They read_ \[(3-p)\log|x|-\mathrm{C}^{-1}\leq w_{p}\leq(3-p)\log|x|+\mathrm{C}\] _on \(\{|x|\geq R\}\), for \(R\) large enough and some positive constant \(\mathrm{C}>0\) depending on \(p\)._ We conclude by resuming some basic asymptotic expansions for \(w_{p}\), substantially worked out in [10]. We specialise in the case of \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian \(3\)-manifolds, and take advantage of the above observations in order to get rid of any Ricci curvature assumption. **Remark 2.10**.: _We point out a minor flaw in [10], consisting in Lemma 2.19 that is wrong. The assumption on the Ricci curvature, namely \(\mathrm{Ric}\geq-\kappa^{2}/(1+\mathrm{d}(\cdot\,,o))^{2}\), \(\kappa\in\mathbb{R}\), is not enough to ensure that the limit cone has nonnegative Ricci curvature. Theorems 1.1 and 1.2 hold with the additional assumption of having an asymptotic cone with nonnegative Ricci curvature, which in particular holds in the asymptotically flat case._ **Lemma 2.11**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian \(3\)-manifold with (possibly empty) boundary. Let \(\Omega\subset M\) be closed, bounded with smooth boundary homologous to \(\partial M\). Fix \(1<p<3\) and let \(\Omega_{t}=\{w_{p}\leq t\}\), where \(w_{p}\) is the solution to (1.3) starting at \(\Omega\). Then, for every \(1<q<3\)_ 1. \(w_{p}=(3-p)\log|x|+o(1)\quad\text{as }|x|\to+\infty,\)__ 2. \(\mathrm{D}^{i}w_{p}=(3-p)\frac{x^{i}}{|x|^{2}}(1+o(1))\quad\text{as }|x|\to+\infty,\)__ 3. \(\lim_{t\to+\infty}\mathrm{e}^{-\frac{3-q}{3-p}t}\mathfrak{c}_{q}(\partial \Omega_{t})=1,\)__ 4. \(\lim_{t\to+\infty}\mathrm{e}^{-\frac{2}{3-p}t}\left|\partial\Omega_{t}\right| =4\pi.\)__ Proof.: (1) and (2) follow with the same strategy of [1, Theorem 3.1], replacing [1, Corollary 2.25 and Proposition 2.27] with Lemmas 2.6 and 2.8 respectively. Having (1), (3) follows at once. Indeed, for every \(\varepsilon>0\) \[\{(3-p)\log|x|\leq t-\varepsilon\}\subset\Omega_{t}\subset\{(3-p)\log|x|\leq t +\varepsilon\},\] for sufficiently large \(t\). 
Hence, by monotonicity of the \(q\)-capacity we obtain \[\mathfrak{c}_{q}(\{(3-p)\log|x|=t-\varepsilon\})\leq\mathfrak{c}_{q}(\partial\Omega_{t})\leq\mathfrak{c}_{q}(\{(3-p)\log|x|=t+\varepsilon\}).\] Multiplying both sides by \(\mathrm{e}^{-(3-q)t/(3-p)}\) and passing to the limit as \(t\to+\infty\), by virtue of [1, Lemma 2.21] we get \[\mathrm{e}^{-\frac{3-q}{3-p}\varepsilon}\leq\lim_{t\to+\infty}\mathrm{e}^{-\frac{3-q}{3-p}t}\,\mathfrak{c}_{q}(\partial\Omega_{t})\leq\mathrm{e}^{\frac{3-q}{3-p}\varepsilon},\] from which we infer (3) sending \(\varepsilon\to 0^{+}\). (4) follows as [1, Proposition 3.4] replacing [1, Theorem 1.1 and Theorem 3.1] with (1) and (2) respectively. ### Concepts of mass in Nonlinear Potential Theory The classical Hawking mass \(\mathfrak{m}_{H}\), that is \[\mathfrak{m}_{H}(\partial\Omega)=\frac{|\partial\Omega|^{\frac{1}{2}}}{16\pi^{\frac{3}{2}}}\left(4\pi-\int_{\partial\Omega}\frac{\mathrm{H}^{2}}{4}\,\mathrm{d}\sigma\right), \tag{2.9}\] for \(\Omega\subset M\), monotonically increases along the level sets of the weak IMCF [11]. Such a property is clearly not preserved in general when one replaces the weak IMCF with solutions \(w_{p}\) to (1.3). For this reason, we need to introduce a new family of quasi-local masses. We will call \(p\)_-Hawking mass_ the quantity \[\mathfrak{m}_{H}^{(p)}(\partial\Omega)=\frac{\mathfrak{c}_{p}(\partial\Omega)^{\frac{1}{3-p}}}{8\pi}\left[4\pi+\int_{\partial\Omega}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma-\int_{\partial\Omega}\frac{|\mathrm{D}w_{p}|}{(3-p)}\,\mathrm{H}\,\,\mathrm{d}\sigma\right] \tag{2.10}\] for \(\partial\Omega\in\mathscr{C}^{1}\) with weak second fundamental form in \(L^{2}(\partial\Omega)\). This should be thought of as a \(p\)-version of the classical Hawking mass. In fact, it is immediately seen that the \(p\)-Hawking mass formally converges to the Hawking mass as \(p\to 1^{+}\), having in mind that along the weak IMCF \(w_{1}\) we have \(|\mathrm{D}w_{1}|=\mathrm{H}\) and that the \(p\)-capacity of an outward minimizing set recovers the perimeter in such limit [11, Theorem 1.2]. Crucially, just as the Hawking mass is monotone along the weak IMCF [11, Geroch Monotonicity Formula 5.8], so the function \(t\mapsto\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\), for \(\Omega_{t}=\{w_{p}\leq t\}\), is monotone nondecreasing, as proven in [1] (see actually Appendix A for the _full_ monotonicity result). **Theorem 2.12**.: _Let \((M,g)\) be a complete \(3\)-dimensional, strongly \(p\)-nonparabolic Riemannian manifold with nonnegative scalar curvature and with (possibly empty) smooth, compact and connected boundary \(\partial M\). Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Let \(\Omega\subseteq M\) be closed and bounded, with connected \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) and \(\mathrm{h}\in L^{2}(\partial\Omega)\), and let \(w_{p}\) be the solution to (1.3) starting at \(\Omega\). 
Then, denoting \(\Omega_{t}=\{w_{p}\leq t\}\), the function \(t\mapsto\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\) defined in (2.10) admits a monotone nondecreasing \(\mathrm{BV}_{\mathrm{loc}}(0,+\infty)\) representative and_ \[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})=\frac{\mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{1}{3-p}}}{(3-p)8\pi}\Bigg{(}4\pi-\int_{\partial\Omega_{t}}\frac{\mathrm{R}^{\top}}{2}\,\mathrm{d}\sigma+\int_{\partial\Omega_{t}}\frac{|\mathring{\mathrm{h}}|^{2}}{2}+\frac{\mathrm{R}}{2}+\frac{\big{|}\mathrm{D}^{\top}|\mathrm{D}w_{p}|\big{|}^{2}}{|\mathrm{D}w_{p}|^{2}}\,\mathrm{d}\sigma\\ +\int_{\partial\Omega_{t}}\frac{5-p}{p-1}\left(\frac{|\mathrm{D}w_{p}|}{3-p}-\frac{\mathrm{H}}{2}\right)^{2}\,\mathrm{d}\sigma\Bigg{)}\end{split} \tag{2.11}\] _holds at every \(t\) regular for \(w_{p}\)._ The \(p\)-Hawking mass has the very useful feature of dominating the Hawking mass times a constant involving the global Sobolev constant of the underlying Riemannian manifold. **Lemma 2.13**.: _Let \((M,g)\) be a complete Riemannian manifold with (possibly empty) smooth and compact boundary. Assume that the Sobolev constant \(\mathrm{C}_{S}\) of \((M,g)\) is positive. Then for every outward minimising \(\Omega\subset M\) with \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) with \(\mathrm{h}\in L^{2}(\partial\Omega)\) we have_ \[\mathfrak{m}_{H}^{(p)}(\partial\Omega)\geq\left(\frac{(3-p)(p-1)^{p-1}}{2^{2p-1}\pi^{\frac{p-1}{2}}p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}\right)^{\frac{1}{3-p}}\mathfrak{m}_{H}(\partial\Omega).\] Proof.: Observe that \[\int_{\partial\Omega}\frac{\left|\mathrm{D}w_{p}\right|}{(3-p)}\,\mathrm{H}\,\,\mathrm{d}\sigma-\int_{\partial\Omega}\frac{\left|\mathrm{D}w_{p}\right|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma=\int_{\partial\Omega}\frac{\mathrm{H}^{2}}{4}\,\mathrm{d}\sigma-\int_{\partial\Omega}\left(\frac{\mathrm{H}}{2}-\frac{\left|\mathrm{D}w_{p}\right|}{3-p}\right)^{2}\,\mathrm{d}\sigma\leq\int_{\partial\Omega}\frac{\mathrm{H}^{2}}{4}\,\mathrm{d}\sigma.\] It is then enough to proceed as in the proof of [13, Theorem 1.3] to prove that \[\mathfrak{c}_{p}(\partial\Omega)^{\frac{1}{3-p}}\geq\left(\frac{(3-p)(p-1)^{p-1}}{4\pi p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}\right)^{\frac{1}{3-p}}\,|\partial\Omega|^{\frac{1}{2}}.\qed\] In [1, Theorem 2] (see also the new proof proposed in [1]), the authors prove an upper bound for the capacity in terms of the area and the Willmore deficit. Observe that in their proof the Asymptotically Flat condition is assumed only to guarantee the existence of an IMCF starting at some \(\Omega\). Here we assume the existence of the IMCF by requiring that \((M,g)\) is strongly \(1\)-nonparabolic (see [1]) and extend their result to all \(p\neq 2\). We only sketch the proof and refer to [1] for the details. **Proposition 2.14** (Nonlinear version of Bray-Miao's estimate).: _Let \((M,g)\) be a complete, strongly \(1\)-nonparabolic Riemannian \(3\)-manifold with nonnegative scalar curvature and with (possibly empty) smooth and compact boundary. Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Let \(\Omega\subset M\) be closed, bounded with connected \(\mathscr{C}^{1}\)-boundary with \(\mathrm{h}\in L^{2}(\partial\Omega)\). 
Then,_ \[\mathfrak{c}_{p}(\partial\Omega)\leq\left(\frac{|\partial\Omega|}{4\pi}\right)^{\frac{3-p}{2}}{}_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};1-\frac{1}{16\pi}\int_{\partial\Omega}\mathrm{H}^{2}\,\,\mathrm{d}\sigma\right)^{-(p-1)},\] _where \({}_{2}F_{1}\) is the hypergeometric function._ Proof.: Following the same lines as [1, Theorem 2] we can prove that \[4\pi\left(\frac{3-p}{p-1}\right)^{p-1}\mathfrak{c}_{p}(\partial\Omega)\leq\int_{M_{S}\smallsetminus B_{r_{0}}}\left|\mathrm{D}u_{p}^{S}\right|^{p}\mathrm{d}\mu, \tag{2.12}\] where \(M_{S}\) is the Schwarzschild manifold of mass \(\mathfrak{m}_{0}=\mathfrak{m}_{H}(\partial\Omega^{*})\), \(r_{0}=\sqrt{|\partial\Omega^{*}|/4\pi}\) and \(u_{p}^{S}\) is the \(p\)-capacitary potential of \(\{|x|=r_{0}\}\subset M_{S}\). Straightforward computations give that \(u_{p}^{S}\) satisfies \[\left\{\begin{array}{rcl}\frac{\mathrm{d}}{\mathrm{d}r}u_{p}^{S}(r)&=&-r^{-\frac{2}{p-1}}\left(1-\frac{2\mathfrak{m}_{0}}{r}\right)^{-\frac{1}{2}}&\text{ for }r\in[r_{0},+\infty)\\ u_{p}^{S}&=&1&\text{ at }r=r_{0}\\ u_{p}^{S}(r)&\to&0&\text{ as }r\to+\infty\end{array}\right.\] Plugging the solution into (2.12) we get \[\mathfrak{c}_{p}(\partial\Omega) \leq r_{0}^{3-p}{}_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};\frac{2\mathfrak{m}_{0}}{r_{0}}\right)^{-(p-1)}\] \[\leq\left(\frac{|\partial\Omega^{*}|}{4\pi}\right)^{\frac{3-p}{2}}{}_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};1-\frac{1}{16\pi}\int_{\partial\Omega^{*}}\mathrm{H}^{2}\ \mathrm{d}\sigma\right)^{-(p-1)}.\] We conclude, since \(|\partial\Omega^{*}|\leq|\partial\Omega|\) and \(\int_{\partial\Omega^{*}}\mathrm{H}^{2}\ \mathrm{d}\sigma\leq\int_{\partial\Omega}\mathrm{H}^{2}\ \mathrm{d}\sigma\). Combining the previous proposition, the minimality of \(\partial M\) and the Isoperimetric Riemannian Penrose Inequality [1, Theorem 1.3], we obtain a sharp Penrose-type Inequality for the \(p\)-capacity of the boundary, for every \(1<p<3\), extending [1, Theorem 4]. **Theorem 2.15**.: _Let \((M,g)\) be a complete, strongly \(1\)-nonparabolic Riemannian manifold with nonnegative scalar curvature and with (possibly empty) smooth, compact, connected, minimal boundary. Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Then, for every \(1<p<3\) it holds_ \[\mathfrak{c}_{p}(\partial M)^{\frac{1}{3-p}}\leq 2\left(\sqrt{\pi}\frac{\Gamma(\frac{2}{p-1})}{\Gamma(\frac{2}{p-1}-\frac{1}{2})}\right)^{-\frac{p-1}{3-p}}\mathfrak{m}_{\mathrm{iso}}, \tag{2.13}\] _where \(\Gamma\) is the gamma function. Moreover, the equality holds in (2.13) if and only if \((M,g)\) is isometric to_ \[\left(\mathbb{R}^{3}\smallsetminus\{|x|<\mathfrak{m}_{\mathrm{iso}}/2\},\left(1+\frac{\mathfrak{m}_{\mathrm{iso}}}{2|x|}\right)^{4}(\delta_{ij}\mathrm{d}x^{i}\otimes\mathrm{d}x^{j})\right).\] Proof.: Since the boundary \(\partial M\) is minimal, by Proposition 2.14 we have \[\mathfrak{c}_{p}(\partial M)^{\frac{1}{3-p}}\leq 2\left(\sqrt{\pi}\frac{\Gamma(\frac{2}{p-1})}{\Gamma(\frac{2}{p-1}-\frac{1}{2})}\right)^{-\frac{p-1}{3-p}}\sqrt{\frac{|\partial M|}{16\pi}}.\] Then, (2.13) follows from [1, Theorem 1.3]. The equality in (2.13) implies the equality in [1, Theorem 1.3] yielding the rigidity statement. ## 3. 
\(p\)-isocapacitary Riemannian Penrose Inequality In establishing the asymptotic comparison between the \(p\)-Hawking mass (2.10) and the \(p\)-Isocapacitary mass, the quantity \[\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega)=\frac{\mathfrak{c}_{p}(\partial \Omega)^{\frac{1}{3-p}}}{4\pi(3-p)}\left(4\pi-\int_{\partial\Omega}\frac{| \mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma\right) \tag{3.1}\] will naturally appear. Its monotonicity has been studied in [14] for the \(p\)-Green's function in the Asymptotically Flat regime. We will revisit this property in connection with the \(p\)-Hawking mass in Appendix A (see Theorem A.1). **Lemma 3.1**.: _Let \((M,g)\) be a complete, noncompact Riemannian manifold with nonnegative scalar curvature and with (possibly empty) smooth and compact boundary. Assume also that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\) and \((M,g)\) satisfies \((\dagger)\) for some \(1<p<3\). Let \(\Omega\subseteq M\) be homologous to \(\partial M\) with connected \(\mathscr{C}^{1}\)-boundary and \(\mathrm{h}\in L^{2}(\partial\Omega)\), \(w_{p}\) the solution to (1.3) starting at \(\Omega\) and \(\Omega_{t}=\left\{w_{p}\leq t\right\}\). Then the function \(t\mapsto\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\) belongs to \(W_{\mathrm{loc}}^{1,1}(0,+\infty)\) is monotone nondecreasing. Moreover, we have_ \[\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\tilde{\mathfrak{m}}_{H}^{(p)}( \partial\Omega_{t}) \tag{3.2}\] _for every \(t\in[0,+\infty)\), and_ \[\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})=\lim_{t\to+ \infty}\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t}). \tag{3.3}\] Proof.: By Theorem A.1 the function \(t\mapsto\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\) belongs to \(W_{\mathrm{loc}}^{1,1}(0,+\infty)\). Moreover, since \(\int_{\partial\Omega_{t}}\left|\mathrm{D}w_{p}\right|^{2}\mathrm{d}\sigma=o( \mathrm{e}^{t/(p-1)})\), we clearly have that \[\lim_{t\to+\infty}\mathfrak{c}_{p}(\partial\Omega_{t})^{-\frac{1}{p-1}}\left( 4\pi-\int_{\partial\Omega_{t}}\frac{\left|\mathrm{D}w_{p}\right|^{2}}{(3-p)^{ 2}}\,\mathrm{d}\sigma\right)=0.\] If we denote \[N(t)=\mathfrak{c}_{p}(\partial\Omega_{t})^{-\frac{1}{p-1}}\left(4\pi-\int_{ \partial\Omega_{t}}\frac{\left|\mathrm{D}w_{p}\right|^{2}}{(3-p)^{2}}\, \mathrm{d}\sigma\right),\hskip 28.452756ptD(t)=\mathfrak{c}_{p}(\partial \Omega_{t})^{-\frac{2}{(3-p)(p-1)}},\] we have that \[\mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{1}{3-p}}\left(4\pi-\int_{\partial \Omega_{t}}\frac{\left|\mathrm{D}w_{p}\right|^{2}}{(3-p)^{2}}\,\mathrm{d} \sigma\right)=\frac{N(t)}{D(t)}.\] By Theorem A.1 and Theorem 2.12, \(N^{\prime}(t)/D^{\prime}(t)=4\pi(3-p)\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\) is nondecreasing. Since \(N(t),D(t)\to 0\) as \(t\to+\infty\), we have that the function in \(t\mapsto\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\) is nondecreasing as well. To prove (3.2) is enough to observe that \[0\leq\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{N(t)}{D(t)}\right)=\frac{N^{ \prime}(t)D(t)-N(t)D^{\prime}(t)}{D(t)^{2}}=\frac{8\pi}{(p-1)}\left(-\mathfrak{ m}_{H}^{(p)}(\partial\Omega_{t})+\tilde{\mathfrak{m}}_{H}^{(p)}(\partial \Omega_{t})\right).\] It then remains to prove (3.3). 
On the other hand, by de L'Hopital rule (see [1, Theorem A.1]) \[\lim_{t\to+\infty}4\pi(3-p)\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t}) =\lim_{t\to+\infty}\frac{N(t)}{D(t)}\leq\lim_{t\to+\infty}\frac{N^{\prime}(t) }{D^{\prime}(t)}=\lim_{t\to+\infty}4\pi(3-p)\mathfrak{m}_{H}^{(p)}(\partial \Omega_{t}).\] The reverse inequality easily follows by (3.2). The following result gives the \(p\)-capacitary counterpart of [1, Lemma 2.7], that asymptotically controls the Hawking mass with the quasi-local Isoperimetric mass of the evolving sets. **Lemma 3.2** (Asymptotic comparison lemma).: _Let \((M,g)\) be a complete, noncompact Riemannian manifold with nonnegative scalar curvature and with (possibly empty) smooth and compact boundary. Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\) and \((M,g)\) satisfies \((\dagger)\) for some \(1<p<3\). Let \(\Omega\subseteq M\) be closed bounded with connected \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) with \(\mathrm{h}\in L^{2}(\partial\Omega)\) and \(w_{p}\) the solution to (1.3) starting at \(\Omega\). Then_ \[\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})=\lim_{t\to+\infty }\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\leq\liminf_{t\to+\infty} \mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{t}), \tag{3.4}\] _where \(\Omega_{t}=\{w_{p}\leq t\}\)._ Proof.: Assume that the right-hand side of (3.4) is finite, otherwise there is nothing to prove. As before, the function \(t\mapsto|\Omega_{t}\smallsetminus\mathrm{Crit}\,w_{p}|\) is monotone continuous in \([0,+\infty)\), hence it is absolutely continuous. The generalised de L'Hopital rule gives \[\begin{split}\liminf_{t\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{ (p)}(\Omega_{t})&\geq\liminf_{t\to+\infty}\frac{1}{2p\pi \mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{2}{3-p}}}\left(|\Omega_{t} \smallsetminus\mathrm{Crit}\,w_{p}|-\frac{4\pi}{3}\mathfrak{c}_{p}(\partial \Omega_{t})^{\frac{3}{3-p}}\right)\\ &\geq\liminf_{t\to+\infty}\frac{(3-p)}{4p\pi\mathfrak{c}_{p}( \partial\Omega_{t})^{\frac{2}{3-p}}}\left(\,\int_{\partial\Omega_{t}}\frac{1} {|\mathrm{D}w_{p}|}\,\mathrm{d}\sigma-\frac{4\pi}{3-p}\mathfrak{c}_{p}( \partial\Omega_{t})^{\frac{3}{3-p}}\right).\end{split} \tag{3.5}\] By Holder inequality we have that \[\int_{\partial\Omega_{t}}\frac{1}{|\mathrm{D}w_{p}|}\,\mathrm{d}\sigma\geq \left(\,\int_{\partial\Omega_{t}}|\mathrm{D}w_{p}|^{2}\,\mathrm{d}\sigma \right)^{-\frac{p}{3-p}}\left(4\pi(3-p)^{p-1}\mathfrak{c}_{p}(\partial\Omega_ {t})\right)^{\frac{3}{3-p}}.\] Plugging it in (3.5) we obtain \[\liminf_{t\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{t})\geq\liminf _{t\to+\infty}\frac{\mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{1}{3-p}}}{p \left(\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\, \mathrm{d}\sigma\right)^{\frac{p}{3-p}}}\left[(4\pi)^{\frac{p}{3-p}}-\left( \,\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d }\sigma\right)^{\frac{p}{3-p}}\right].\] To simplify the notation, denote \(f(z)=z^{p/(3-p)}\) and \(z(t)=\int_{\partial\Omega_{t}}{|\mathrm{D}w_{p}|}^{2}/(3-p)^{2}\,\mathrm{d}\sigma\). 
Since \(z(t)\leq 4\pi\) in virtue of our assumptions, by Lemma 3.1 \[\begin{split}\liminf_{t\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{(p )}(\Omega_{t})&\geq\liminf_{t\to+\infty}\frac{1}{pf(z(t))}\frac{ f(4\pi)-f(z(t))}{4\pi-z(t)}\mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{1}{3-p}}(4 \pi-z(t))\\ &=\liminf_{t\to+\infty}\frac{4\pi(3-p)}{pf(z(t))}\frac{f(4\pi)-f( z(t))}{4\pi-z(t)}\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\\ &\geq\liminf_{t\to+\infty}\frac{4\pi(3-p)}{pf(z(t))}\frac{f(4\pi )-f(z(t))}{4\pi-z(t)}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t}).\end{split} \tag{3.6}\] The theorem follows once we prove the following claim. **Claim**. _There exists a divergent increasing sequence \((t_{n})_{n\in\mathbb{N}}\) realising the rightmost limit inferior of (3.6) and such that \(z(t_{n})\to 4\pi\) as \(n\to+\infty\)._ Indeed, we would have \[\lim_{n\to+\infty}\frac{4\pi}{f(z(t_{n}))}=(4\pi)^{\frac{3-2p}{3-p}},\qquad\lim _{n\to+\infty}\frac{f(4\pi)-f(z(t_{n}))}{4\pi-z(t_{n})}=f^{\prime}(4\pi)=\frac {p}{3-p}(4\pi)^{\frac{2p-3}{3-p}},\] that plugged into (3.6), gives (3.4) in virtue of Theorem 2.12 and Lemma 3.1. Let \(t_{n}\) be a divergent increasing sequence \((t_{n})_{n\in\mathbb{N}}\) realising the rightmost limit inferior of (3.6). By Lemma 3.1 we have two possible cases: 1. there exists \(T>0\) such that \(\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t_{n}})\geq 0\) for all \(t_{n}\geq T\), or 2. \(\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t_{n}})<0\) for all \(n\in\mathbb{N}\). _Case 1._ Since \(\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t_{n}})\geq 0\), \(z(t_{n})\leq 4\pi\) for every \(t_{n}\geq T\). By contradiction, suppose there exists \(\varepsilon>0\) such that \(z(t_{n})\leq 4\pi-\varepsilon\) for every \(n\) sufficiently large. Then, by (3.6), there exists \(\mathrm{C}(p,\varepsilon)>0\) such that \[+\infty>\liminf_{t\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{t}) \geq\lim_{n\to+\infty}\mathrm{C}(p,\varepsilon)\mathfrak{c}_{p}(\partial \Omega_{t_{n}})^{\frac{1}{3-p}},\] which is clearly a contradiction. Hence, up to a not relabeled subsequence, \(z(t_{n})\to 4\pi\) as \(n\to+\infty\). This proves the claim in this case. _Case 2._ Since \(\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t_{n}})<0\), \(z(t_{n})\geq 4\pi\) for every \(n\in\mathbb{N}\). Suppose by contradiction \(z(t_{n})\geq 4\pi+\varepsilon\) for some \(\varepsilon>0\). Then, by Theorem 2.12, there exists \(\mathrm{C}(p,\varepsilon)>0\) such that \[\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega)\leq\lim_{t\to+\infty}\tilde{ \mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})\leq-\mathrm{C}(p,\varepsilon) \lim_{n\to+\infty}\mathfrak{c}_{p}(\partial\Omega_{t_{n}})^{\frac{1}{3-p}}=-\infty,\] which is a contradiction since \(|\mathrm{D}w_{p}|\in\mathscr{C}^{0}(\partial\Omega)\), proving the claim also in this case. Differently from [1, Lemma 2.7], here we assumed \((\dagger)\). We already mentioned in the Introduction that this condition is very mild. In the following remark, we better specify our assertion. **Remark 3.3**.: _First of all, observe that_ \[\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d} \sigma\leq 4\pi\,\mathrm{e}^{t}\,\mathfrak{c}_{p}(\partial\Omega)\sup_{ \partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{3-p}}{(3-p)^{3-p}}. \tag{3.7}\] _If \(\mathrm{Ric}(x)\geq-2\kappa^{2}\) for some \(\kappa\in\mathbb{R}\) and every \(x\in M\), by [11, Theorem 1.1] we have that \(|\mathrm{D}w_{p}|\leq\mathrm{C}_{1}\) for some constant depending on \(p\) and \(\kappa\). 
In particular, for \(1<p<2\), \((\dagger)\) is fulfilled. The case \(p=2\) may be treated as in [15, Corollary 1.1]. For the same reason, if \((M,g)\) is a \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian manifold, \((\dagger)\) is implied for every \(1<p\leq 2\) by Lemma 2.6 (see also Remark 2.7)._ _Alternatively, assuming that \((M,g)\) is \(\mathscr{C}^{0}\)-Asymptotically Flat and the Ricci tensor satisfies \(\mathrm{Ric}(x)\geq-2\kappa^{2}/(1+\mathrm{d}(x,o))^{2}\) for some \(\kappa\in\mathbb{R}\), a fixed \(o\in M\) and every \(x\in M\), one can cover the whole range \(1<p<3\). Indeed, by [11, Theorem 1.1] and [1, Theorem 1.1], \(|\mathrm{D}w_{p}|\leq\mathrm{C}_{3}\,\mathrm{e}^{-t/(3-p)}\) on \(\partial\Omega_{t}\) for some positive constant \(\mathrm{C}_{3}\) depending only on \(p\), \(\kappa\) and \(\Omega\). Plugging it into (3.7), we infer that \(\int_{\partial\Omega_{t}}|\mathrm{D}w_{p}|^{2}\,\mathrm{d}\sigma\leq\mathrm{C}_{4}\) for a positive constant \(\mathrm{C}_{4}\)._ We establish a nonsharp Penrose inequality for the \(p\)-Isocapacitary mass in the generality of Theorem 1.2. Proof of Theorem 1.2.: Assume first that \(\partial M=\varnothing\). Let \(q\in M\) and let \(B_{r}(q)\) be the closed geodesic ball centred at \(q\) of radius \(r>0\). By the asymptotic development of the \(p\)-Green function at the pole (see [14, Theorem 2.4]), we have \[\lim_{r\to 0}\mathfrak{m}_{H}^{(p)}(\partial B_{r}(q))=0.\] In particular, for every \(\varepsilon>0\) there exists \(r>0\) such that \(\mathfrak{m}_{H}^{(p)}(\partial B_{r}(q))\geq-\varepsilon\). Applying Lemma 3.2, we deduce that \(\mathfrak{m}_{\mathrm{iso}}^{(p)}\geq-\varepsilon\) for every \(\varepsilon>0\). Hence, \(\mathfrak{m}_{\mathrm{iso}}^{(p)}\geq 0\), as claimed. We now treat the case \(\partial M\neq\varnothing\). Let \(w_{p}\) be the solution to (1.3) and define \(\Omega_{t}=\{w_{p}\leq t\}\). Then we are in a position to apply Lemma 3.2 and Theorem 2.12 to obtain \[\mathfrak{c}_{p}(\partial M)^{\frac{1}{(3-p)}}\leq\lim_{t\to+\infty}2\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\liminf_{t\to+\infty}2\mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{t})\leq 2\mathfrak{m}_{\mathrm{iso}}^{(p)}.\] Finally, we just have to discuss the equality case in the Positive Mass Theorem. Observe that in this case \(\mathfrak{m}_{H}^{(p)}\) is constant along \(\Omega_{t}\). In particular, the right-hand side of (2.11) constantly vanishes along the flow. The isometry with flat \(\mathbb{R}^{3}\) then follows through very classical computations, that can be performed following the lines of [10, Proof of Main Theorem 2.]. ## 4. Proof of Theorem 1.4 The proof of Theorem 1.4 follows from an asymptotic equivalence of \(p\)-Hawking masses. As one may expect, the \(p\)-Hawking mass has a better behaviour along the level set flow of \(w_{p}\), which is the solution to (1.3). But interestingly, under the right assumption on the asymptotic flatness, it tends to coincide with (the limit superior of) the Hawking mass on large sets. Moreover, it is asymptotically controlled by the other \(q\)-Hawking masses, \(1<q<3\). Here we employ both the monotonicity of the mass \(\mathfrak{m}_{H}^{(p)}\) and the better asymptotic behaviour of \(\tilde{\mathfrak{m}}_{H}^{(p)}\) defined in (3.1). Indeed, we will use the latter one to ensure that \(\mathrm{e}^{-t/(3-p)}\,\mathfrak{m}_{H}^{(p)}(\partial\{w_{p}\leq t\})=o(1)\) as \(t\to+\infty\), which allows us to trigger the computations in [1, 1]. 
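Before proceeding, let us record a quick consistency check in flat \(\mathbb{R}^{3}\). For the solution \(w_{p}\) to (1.3) starting at a centred ball, every centred sphere \(\partial B_{r}\) outside it satisfies \(\mathfrak{c}_{p}(\partial B_{r})=r^{3-p}\), \(|\mathrm{D}w_{p}|=(3-p)/r\) and \(\mathrm{H}=2/r\) on \(\partial B_{r}\), hence
\[\mathfrak{m}_{H}^{(p)}(\partial B_{r})=\frac{r}{8\pi}\left(4\pi+\int_{\partial B_{r}}\frac{1}{r^{2}}\,\mathrm{d}\sigma-\int_{\partial B_{r}}\frac{2}{r^{2}}\,\mathrm{d}\sigma\right)=\frac{r}{8\pi}\left(4\pi+4\pi-8\pi\right)=0=\mathfrak{m}_{H}(\partial B_{r})\]
for every \(1<p<3\), consistently with the asymptotic identity (4.1) below.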
**Proposition 4.1**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{1}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and (possibly empty) smooth, compact, minimal and outward minimising boundary. Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Let \(\Omega\subset M\) be closed, bounded with connected \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) with \(\mathrm{h}\in L^{2}(\partial\Omega)\). Fix \(1<p<3\) and let \(\Omega_{t}=\{w_{p}\leq t\}\), where \(w_{p}\) is the solution to (1.3) starting at \(\Omega\). Then,_ \[\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})=\limsup_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t})\leq\limsup_{t\to+\infty}\mathfrak{m}_{H}^{(q)}(\partial\Omega_{t}) \tag{4.1}\] _for every \(1<q<3\)._ Proof.: The inequality appearing in (4.1) is obtained arguing as in Lemma 2.13. Indeed, we get \[\limsup_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t})\leq\limsup_{t\to+\infty}\mathfrak{c}_{q}(\partial\Omega_{t})^{-\frac{1}{3-q}}\sqrt{\frac{|\partial\Omega_{t}|}{4\pi}}\mathfrak{m}_{H}^{(q)}(\partial\Omega_{t})=\limsup_{t\to+\infty}\mathfrak{m}_{H}^{(q)}(\partial\Omega_{t}), \tag{4.2}\] where the last identity follows by Lemma 2.11(3) and (4). In order to show the identity appearing in (4.1), we are thus left to show the inequality \[\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\limsup_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t}), \tag{4.3}\] the reverse one being (4.2) with \(q=p\). To do so, we claim that \[\left[4\pi+\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma-\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|}{(3-p)}\,\mathrm{H}\,\,\mathrm{d}\sigma\right]=o(1) \tag{4.4}\] as \(t\to+\infty\). Indeed, if this happens, we can follow the chain of inequalities in [1, 1] (see also [1, Theorem 4.11]) and obtain \[\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\limsup_{t\to+\infty}\mathfrak{c}_{p}(\partial\Omega_{t})^{\frac{1}{3-p}}\sqrt{\frac{4\pi}{|\partial\Omega_{t}|}}\mathfrak{m}_{H}(\partial\Omega_{t})=\limsup_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t}),\] where again we applied Lemma 2.11(3) and (4), proving (4.3). We then proceed to prove (4.4). If \(\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})<0\) for every \(t\in[0,+\infty)\), arguing as in Case 2 of the proof of Lemma 3.2, we deduce that (4.4) must hold. Otherwise, we would contradict the Monotonicity Formulas in Theorem 2.12. In the remaining case, appealing again to the Monotonicity Formulas in Theorem 2.12, \(t\mapsto\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\) must be eventually nonnegative. Observe that by Lemma 2.11(1) and (2) we have \[4\pi-\int_{\partial\Omega_{t}}\frac{\left|\mathrm{D}w_{p}\right|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma=o(1)\] as \(t\to+\infty\). Hence, Lemma 3.1 implies \[0\leq\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\tilde{\mathfrak{m}}_{H}^{(p)}(\partial\Omega_{t})=o(\mathrm{e}^{t/(3-p)})\] as \(t\to+\infty\). Dividing both sides by \(\mathfrak{c}_{p}(\partial\Omega_{t})^{1/(3-p)}\) we get (4.4). Conclusion of the proof of Theorem 1.4.: Differently from the case \(p=1\), corresponding to the classical Hawking mass, here we assume connectedness of the boundary of the manifold. In fact, it is not clear to us how to adapt the argument employed in [11, Section 6], where the authors took advantage of the horizons being minimal and outward minimizing in order to prescribe a jump that maintains the monotonicity of the Hawking mass. 
The difficulties when dealing with the \(p\)-Hawking mass arise in connection with the gradient of \(w_{p}\) appearing in its expression. Assuming \(\partial M\) to be connected, we can consider the solution \(w_{p}\) to (1.3) starting at \(\Omega=\partial M\) and \(\Omega_{t}=\left\{w_{p}\leq t\right\}\). The boundary of \(M\) being outermost implies that \(H_{2}(M,\partial M;\mathbb{Z})=\left\{0\right\}\) (see [11, Lemma 4.1], or the alternative argument in the proof of [1, Lemma 2.8]). Applying Proposition 4.1 for \(q=2\) we have \[\frac{\mathfrak{c}_{p}(\partial M)^{\frac{1}{3-p}}}{2}\leq\lim_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\limsup_{t\to+\infty}\mathfrak{m}_{H}^{(2)}(\partial\Omega_{t}).\] Since, by Lemma 2.11(2), \(\partial\Omega_{t}\) is regular for any \(t\) large enough, we can use [1, Theorem 4.11] to control the right hand side with \(\mathfrak{m}_{\mathrm{ADM}}\), concluding the proof of (1.5). Observe now that an outermost minimal boundary is outward minimising. If this were not the case, the outward minimising hull [11, 12] would be a closed minimal surface homologous to it and, by the Maximum Principle, disjoint from \(\partial M\). Then, letting \(p\to 1^{+}\) and appealing to [1, Theorem 1.2] recovers the sharp Penrose inequality (1.6). ## 5. Relation between the Isoperimetric mass and the \(p\)-isocapacitary mass We now employ the explicit control of the Hawking mass in terms of the \(p\)-Hawking mass provided by Lemma 2.13 to produce an upper bound on the Isoperimetric mass in terms of the \(p\)-Isocapacitary mass. This bound is not sharp, but it improves as \(p\to 1^{+}\). **Lemma 5.1**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and (possibly empty) smooth, compact, minimal boundary. Assume that \((M,g)\) satisfies \((\dagger)\) for some \(1<p<3\). Then,_ \[\mathfrak{m}_{\mathrm{iso}}\leq\left(\frac{2^{2p-1}\pi^{\frac{p-1}{2}}p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}{(3-p)(p-1)^{p-1}}\right)^{\frac{1}{3-p}}\mathfrak{m}_{\mathrm{iso}}^{(p)},\] _where \(\mathrm{C}_{S}\) is the global Sobolev constant of \((M,g)\)._ Proof.: By the topological description of such manifolds, reworked in [1, Lemma 2.8], we can assume that our Riemannian manifold has a (possibly empty) minimal boundary such that \(H_{2}(M,\partial M;\mathbb{Z})=\left\{0\right\}\). Let \(E\subset M\) be a closed bounded subset with smooth boundary such that any connected component of \(\partial M\) is either contained in \(E\) or disjoint from \(E\). Using [11, Theorem 6.1] (see also [1, Proposition 2.5]), we can find a subset \(\Omega\) closed bounded with \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) and with \(\mathrm{h}\in L^{2}(\partial\Omega)\) such that \(\mathfrak{m}_{H}(\partial E)\leq\mathfrak{m}_{H}(\partial\Omega)\). Let \(w_{p}\) be the solution to (1.3) starting at \(\Omega\) and set \(\Omega_{t}=\{w_{p}\leq t\}\). 
By Lemma 2.13 and Lemma 3.2 we now have \[\mathfrak{m}_{H}(\partial E)\leq\left(\frac{2^{2p-1}\pi^{\frac{p-1}{2}}p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}{(3-p)(p-1)^{p-1}}\right)^{\frac{1}{3-p}}\limsup_{t\to+\infty}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\leq\left(\frac{2^{2p-1}\pi^{\frac{p-1}{2}}p^{p}\mathrm{C}_{S}^{\frac{3}{2}(p-1)}}{(3-p)(p-1)^{p-1}}\right)^{\frac{1}{3-p}}\mathfrak{m}_{\mathrm{iso}}^{(p)}.\] Since we have a control on the Hawking mass of every \(E\), we can apply [11] (see [1, Theorem 2.6] for the precise statement and remarks) to control the Isoperimetric mass with the same quantity. We prove a family of equivalent formulations for the \(p\)-Isocapacitary masses, as well as for the Isoperimetric one. The proofs will follow the one given in [11, Lemma 10] for the \(2\)-isocapacitary mass. **Proposition 5.2**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with (possibly empty) compact smooth boundary. Then, for \(1<p<3\), we have_ \[\mathfrak{m}_{\mathrm{iso}}^{(p)}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+\infty}\frac{2\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{1-3\alpha}{3-p}}}{3p\alpha}\left(\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\alpha}-\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{3\alpha}{3-p}}\right), \tag{5.1}\] _for every \(\alpha\geq 1/3\)._ The main computation performed in order to prove the result above is the following one, that we isolate for future reference. **Lemma 5.3**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with (possibly empty) compact smooth boundary, and \(1<p<3\). Let \((\Omega_{j})_{j\in\mathbb{N}}\) be an exhaustion of \(M\) such that_ \[\lim_{j\to+\infty}\frac{|\Omega_{j}|}{\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{3}{3-p}}}=\frac{4\pi}{3}. \tag{5.2}\] _Then, we have_ \[\limsup_{j\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{j})=\limsup_{j\to+\infty}\frac{2\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{1-3\alpha}{3-p}}}{3p\alpha}\left(\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\alpha}-\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{3\alpha}{3-p}}\right)\] _for any \(\alpha\geq 1/3\)._ Proof.: Let \((\Omega_{j})_{j\in\mathbb{N}}\) be a sequence such that (5.2) holds. Up to considering a subsequence, we can assume that \((\Omega_{j})_{j\in\mathbb{N}}\) realises the superior limit. Denoting \(f(z)=z^{\alpha}\) and \(z_{j}=|\Omega_{j}|/\mathfrak{c}_{p}(\partial\Omega_{j})^{3/(3-p)}\), we have that \[\limsup_{j\to+\infty}\mathfrak{m}_{\mathrm{iso}}^{(p)}(\Omega_{j})=\lim_{j\to+\infty}\frac{\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{1}{3-p}}}{2p\pi}\frac{z_{j}-4\pi/3}{f(z_{j})-f(4\pi/3)}(f(z_{j})-f(4\pi/3)). \tag{5.3}\] Since \(f\) is differentiable at \(4\pi/3\) and \(z_{j}\to 4\pi/3\neq 0\) as \(j\to+\infty\) by (5.2), we have \[\lim_{j\to+\infty}\frac{z_{j}-4\pi/3}{f(z_{j})-f(4\pi/3)}=\frac{1}{f^{\prime}(4\pi/3)}=\frac{3^{\alpha-1}}{\alpha(4\pi)^{\alpha-1}}.\] Plugging this into (5.3) we conclude. Proof of Proposition 5.2.: We claim that it is enough to prove the equivalence on sequences such that (5.2) holds, so that Proposition 5.2 follows from Lemma 5.3. Let then \((\Omega_{j})_{j\in\mathbb{N}}\) be an exhaustion. 
By the \(p\)-Isocapacitary Inequality we have that \[\limsup_{j\to+\infty}\frac{|\Omega_{j}|}{\mathfrak{c}_{p}(\partial\Omega_{j})^{ \frac{3}{3-p}}}\leq\frac{4\pi}{3}.\] Indeed, the metric \(g\) becomes uniformly equivalent to the flat Euclidean metric on \(M\smallsetminus\Omega_{j}\) as \(j\to+\infty\). Moreover, for sufficiently large \(j\) there exists a unique \(r_{j}>0\) such that the coordinate ball \(B_{r_{j}}\) has the same volume of \(\Omega_{j}\). Define \[\Omega_{j}^{\prime}=\left\{\begin{matrix}\Omega_{j}&\text{ if }\operatorname{Cap}_{ p}(\Omega_{j})\leq\operatorname{Cap}_{p}(B_{r_{j}}),\\ B_{r_{j}}&\text{ if }\operatorname{Cap}_{p}(\Omega_{j})>\operatorname{Cap}_{p}(B_{r_{j}}). \end{matrix}\right.\] The sequence \((\Omega_{j}^{\prime})_{j\in\mathbb{N}}\) is an exhaustion of \(M\) and \[\liminf_{j\to+\infty}\frac{|\Omega_{j}^{\prime}|}{\mathfrak{c}_{p}(\partial \Omega_{j}^{\prime})^{\frac{3}{3-p}}}\geq\liminf_{j\to+\infty}\frac{|B_{r_{j}} |}{\mathfrak{c}_{p}(\partial B_{r_{j}})^{\frac{3}{3-p}}}=\frac{4\pi}{3},\] where the right-hand side is computed using the asymptotic flatness. In particular, the sequence \((\Omega_{j}^{\prime})_{j\in\mathbb{N}}\) fulfils (5.2), \(|\Omega_{j}^{\prime}|=|\Omega_{j}|\) and \(\mathfrak{c}_{p}(\partial\Omega_{j}^{\prime})\leq\mathfrak{c}_{p}(\partial \Omega_{j})\). Then \((\Omega_{j}^{\prime})_{j\in\mathbb{N}}\) is a better competitor both for \(\mathfrak{m}_{\text{iso}}^{p}\) as in the definition of \(p\)-Isocapacitary mass (1.4) and for the right-hand side of (5.1). This completes the proof. Completely analogous results hold for the perimeter and the Isoperimetric mass. We gather them in the following statement. **Proposition 5.4**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with (possibly empty) compact smooth boundary. Let \((\Omega_{j})_{j}\) be an exhaustion of \(M\) such that_ \[\lim_{j\to+\infty}\frac{|\Omega_{j}|}{|\partial\Omega_{j}|^{\frac{3}{2}}}= \frac{1}{6\sqrt{\pi}}.\] _Then,_ \[\limsup_{j\to+\infty}\frac{2}{|\partial\Omega_{j}|}\left(|\Omega_{j}|-\frac{ |\partial\Omega_{j}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right)=\limsup_{j\to+\infty} \frac{|\partial\Omega_{j}|^{\frac{1-3\alpha}{2}}}{3\alpha\sqrt{\pi}}\left((6 \sqrt{\pi}|\Omega_{j}|)^{\alpha}-|\partial\Omega_{j}|^{\frac{3\alpha}{2}}\right) \tag{5.4}\] _holds for any \(\alpha\in\mathbb{R}\smallsetminus\{0\}\). As a consequence, we have_ \[\mathfrak{m}_{\text{iso}}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+ \infty}\frac{|\partial\Omega_{j}|^{\frac{1-3\alpha}{2}}}{3\alpha\sqrt{\pi}} \left((6\sqrt{\pi}|\Omega_{j}|)^{\alpha}-|\partial\Omega_{j}|^{\frac{3\alpha }{2}}\right)\] _for every \(\alpha\geq 1/3\)._ The inequality \(\mathfrak{m}_{\text{iso}}^{(p)}\leq\mathfrak{m}_{\text{iso}}\) will substantially be a consequence of the following \(p\)-Isocapacitary Inequality for sets with volume going to infinity. Its isoperimetric version was pointed out in [11, Corollary C.3]. **Theorem 5.5** (Sharp asymptotic \(p\)-Isocapacitary Inequality).: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifold with (possibly empty) compact smooth boundary \(\partial M\). 
Then, for every \(1<p<3\) we have that_ \[|\Omega|^{\frac{3-p}{3}}\leq\left(\frac{4\pi}{3}\right)^{\frac{3-p}{3}} \mathfrak{c}_{p}(\partial\Omega)+\frac{p(3-p)}{2}\mathfrak{m}_{\text{iso}} \left(\frac{4\pi}{3}\right)^{\frac{3-p}{3}}\mathfrak{c}_{p}(\partial\Omega)^ {\frac{2-p}{3-p}}(1+o(1)) \tag{5.5}\] _as \(|\Omega|\to+\infty\) with \(\Omega\) closed and bounded with \(\mathscr{C}^{1,\alpha}\)-boundary containing \(\partial M\)._ Proof.: Assume that \(\mathfrak{m}_{\rm iso}<+\infty\), otherwise there is nothing to prove. We claim that for \(\alpha=2p/3\geq 1/3\), for every \(\varepsilon>0\) there exists \(V_{\varepsilon}>\varepsilon^{-3}\) such that \[(6\sqrt{\pi}|\Omega|)^{\frac{2p}{3}}\leq|\partial\Omega|^{p}+2p\sqrt{\pi}( \mathfrak{m}_{\rm iso}+\varepsilon)|\partial\Omega|^{\frac{2p-1}{2}} \tag{5.6}\] for every \(\Omega\subseteq M\) such that \(|\Omega|\geq V_{\varepsilon}\). Indeed, if this were not the case, we would find a sequence \((\Omega_{j})_{j\in\mathbb{N}}\) with \(|\Omega_{j}|\to+\infty\) such that the right-hand side, and thus the left-hand side, of (5.4), is strictly bigger than \(\mathfrak{m}_{\rm iso}\). Since, by the Isoperimetric Inequality, the perimeters of the \(\Omega_{j}\)'s diverge at infinity too, this would contradict [19, Proposition 37], stating that one can relax the competitors in the definition of the Isoperimetric mass in order to include any sequence of bounded sets containing \(\partial M\) with diverging perimeters. We can now assume that \[|\Omega|^{\frac{3-p}{2p}}\geq\left(\frac{4\pi}{3}\right)^{\frac{3-p}{2p}} \mathfrak{c}_{p}(\partial\Omega)^{\frac{3}{2p}}(1-\varepsilon)^{\frac{3-p}{2p }}, \tag{5.7}\] otherwise (5.5) is trivially satisfied. Let \(w_{p}:M\smallsetminus\Omega\to\mathbb{R}\) be the solution to (1.3) starting at \(\Omega\), \(w_{p}=-(p-1)\log u_{p}\) and let \(\Omega_{t}=\{u_{p}\geq t\}\cup\Omega\) and \(V(t)=|\Omega_{t}|\geq V_{\varepsilon}\) for every \(t\in(0,1)\). The Holder's Inequality with exponents \(a=p\) and \(b=p/(p-1)\) gives \[|\partial\Omega_{t}|^{p}\leq\bigg{(}\int_{\partial\Omega_{t}}|\mathrm{D}u_{p} |^{p-1}\,\mathrm{d}\sigma\bigg{)}\left(\int_{\partial\Omega_{t}}\frac{1}{| \mathrm{D}u_{p}|}\,\mathrm{d}\sigma\right)^{p-1}=4\pi\mathfrak{c}_{p}\left( \frac{3-p}{p-1}\right)^{p-1}[-V^{\prime}(t)]^{p-1} \tag{5.8}\] for almost every \(t\in(0,1]\), where \(\mathfrak{c}_{p}=\mathfrak{c}_{p}(\partial\Omega)\). Plugging it into (5.6), we have \[\frac{[6\sqrt{\pi}V(t)]^{\frac{2p}{3}}}{(-V^{\prime}(t))^{p-1}}\leq 4\pi \mathfrak{c}_{p}\left(\frac{3-p}{p-1}\right)^{p-1}\!\!\!+\left[4\pi\mathfrak{ c}_{p}\left(\frac{3-p}{p-1}\right)^{p-1}\right]^{\frac{2p-1}{2p}}\frac{2p\sqrt{\pi}( \mathfrak{m}_{\rm iso}+\varepsilon)}{(-V^{\prime}(t))^{\frac{p-1}{2p}}}.\] Integrating both sides on \((0,1)\), we obtain \[\int_{0}^{1}\frac{[6\sqrt{\pi}V(t)]^{\frac{2p}{3}}}{(-V^{\prime}(t))^{p-1}} \,\mathrm{d}t\leq 4\pi\mathfrak{c}_{p}\left(\frac{3-p}{p-1}\right)^{p-1}\!\!\!+ \left[4\pi\mathfrak{c}_{p}\left(\frac{3-p}{p-1}\right)^{p-1}\right]^{\frac{2p- 1}{2p}}\!\int_{0}^{1}\frac{2p\sqrt{\pi}(\mathfrak{m}_{\rm iso}+\varepsilon)}{ (-V^{\prime}(t))^{\frac{p-1}{2p}}}\,\mathrm{d}t. \tag{5.9}\] By (5.6) and (5.8) we have that \[[-V^{\prime}(t)]^{p-1}\geq\left(\frac{p-1}{3-p}\right)^{p-1}\frac{[6\sqrt{\pi }V(t)]^{\frac{2p}{3}}}{4\pi\mathfrak{c}_{p}}\left(1+\frac{\mathrm{C}}{V(t)^{ \frac{1}{3}}}\right)^{-1}, \tag{5.10}\] where \(\mathrm{C}\) depends only on \(\mathfrak{m}_{\rm iso}\), \(p\) and the global Isoperimetric constant. 
Hence, (5.10) and (5.7) yield \[\begin{split}\int_{0}^{1}(-V^{\prime}(t))^{-\frac{p-1}{2p}}\, \mathrm{d}t&=-\int_{0}^{1}(-V^{\prime}(t))^{-\frac{3p-1}{2p}}V^{ \prime}(t)\,\mathrm{d}t\\ &\leq\left[\left(\frac{3-p}{p-1}\right)^{p-1}\frac{(4\pi)^{ \frac{3-p}{3}}}{3^{\frac{2p}{3}}}\mathfrak{c}_{p}(1+\mathrm{C}\varepsilon) \right]^{\frac{3p-1}{2p(p-1)}}\int_{|\Omega|}^{+\infty}V^{-\frac{3p-1}{3(p-1)} }\,\mathrm{d}V\\ &\leq\frac{3-p}{2}\left[\left(\frac{3-p}{p-1}\right)^{\frac{(p-1 )^{2}}{3p-1}}\frac{(4\pi)^{\frac{3-p}{3}}}{3^{\frac{4p}{3(3p-1)}}}\mathfrak{ c}_{p}(1+\mathrm{C}\varepsilon)\right]^{\frac{3p-1}{2p(p-1)}}|\Omega|^{-\frac{2}{3(p-1)}} \\ &\leq\frac{3-p}{2}(4\pi)^{-\frac{p-1}{2p}}\mathfrak{c}_{p}^{-\frac {3(p-1)}{2p(3-p)}}\left(\frac{3-p}{p-1}\right)^{\frac{p-1}{2p}}\frac{(1+ \mathrm{C}\varepsilon)^{\frac{3p-1}{2p(p-1)}}}{(1-\varepsilon)^{\frac{2}{3(p-1 )}}},\end{split} \tag{5.11}\] where we used \(V(t)\geq|\Omega|\geq\varepsilon^{-3}\). On the other hand, let \(v:\{|x|\geq R(1)\}\subset\mathbb{R}^{n}\to(0,1]\) be the function such that \(\{v=t\}=\{|x|=R(t)\}\) and \(|\Omega_{t}|=4\pi R(t)^{3}/3\). Since by construction \(|\mathrm{D}v|=-4\pi R(t)^{2}/V^{\prime}(t)\), the function \(v\) is locally Lipschitz. By coarea formula, we have \[\begin{split}\int\limits_{0}^{1}\frac{V(t)^{\frac{2p}{3}}}{(-V^{ \prime}(t))^{p-1}}\,\mathrm{d}t&=\frac{1}{(36\pi)^{\frac{p}{3}}} \int\limits_{0}^{1}\int\limits_{\{v=t\}}\!\!\left|\mathrm{D}v\right|^{p-1} \mathrm{d}\sigma\,\mathrm{d}t=\frac{1}{(36\pi)^{\frac{p}{3}}}\int\limits_{\{|x |\geq R(1)\}}\!\!\left|\mathrm{D}v\right|^{p}\mathrm{d}x\\ &\geq\frac{(4\pi)^{\frac{3-p}{3}}}{3^{\frac{2p}{3}}}\left(\frac{3 -p}{p-1}\right)^{p-1}\mathfrak{c}_{p}(\{|x|=R(1)\})=\frac{1}{3^{p-1}}\left( \frac{3-p}{p-1}\right)^{p-1}|\Omega|^{\frac{3-p}{3}}.\end{split} \tag{5.12}\] Plugging (5.11) and (5.12) into (5.9), we conclude the proof by arbitrariness of \(\varepsilon\). We are ready to prove the claimed upper bound of the \(p\)-Isocapacitary mass in terms of the Isoperimetric mass. **Theorem 5.6**.: _Let \((M,g)\) a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifold. Then, for every \(1<p\leq 2\) we have that_ \[\mathfrak{m}_{\mathrm{iso}}^{(p)}\leq\mathfrak{m}_{\mathrm{iso}}.\] Proof.: Let \((\Omega_{j})_{j\in\mathbb{N}}\) be an exhaustion of \((M,g)\), then \(|\Omega_{j}|\to+\infty\) as \(j\to+\infty\). In particular, by Theorem 5.5 we have that \[\frac{1}{\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{2-p}{3-p}}}\left[\left( \frac{3|\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{3}}-\mathfrak{c}_{p}(\partial \Omega_{j})\right]\leq\frac{p(3-p)}{2}\mathfrak{m}_{\mathrm{iso}}(1+o(1))\] as \(j\to+\infty\). Hence \[\limsup_{j\to+\infty}\frac{1}{\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{2- p}{3-p}}}\left[\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{3}}- \mathfrak{c}_{p}(\partial\Omega_{j})\right]\leq\frac{p(3-p)}{2}\mathfrak{m}_ {\mathrm{iso}}.\] Taking the supremum among all exhaustions \((\Omega_{j})_{j\in\mathbb{N}}\) we conclude employing Proposition 5.2 for \(\alpha=(3-p)/3\). Combining Lemma 5.1 and Theorem 5.6 we directly get the convergence of the \(p\)-Isocapacitary masses to the Isoperimetric mass as \(p\to 1^{+}\). **Corollary 5.7**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and (possibly empty) smooth, compact, minimal boundary. Assume that \((M,g)\) satisfies \((\dagger)\) for \(1<p<3\). 
Then_ \[\lim_{p\to 1^{+}}\mathfrak{m}_{\mathrm{iso}}^{(p)}=\mathfrak{m}_{\mathrm{iso}}.\] We are now ready to prove, under the stronger \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat assumption, \(\tau>1/2\), that the \(p\)-Isocapacitary masses do actually coincide with each other. Proof of Theorem 1.3.: The first inequality, under these assumptions, is the content of [1, Theorem 4.13]. Following the same lines as [1, Proposition 14] (based on computations contained in [10], which in fact only rely on the \(\mathscr{C}^{1}\)-character of the metric), we have \[\frac{1}{16\pi}\int_{\partial B_{r}}\mathrm{H}^{2}\ \mathrm{d}\sigma =1-\frac{2\mathfrak{m}_{\mathrm{ADM}}}{r}+o(r^{-1}),\] \[|\partial B_{r}| =4\pi r^{2}+4\pi\eta(r)+o(r)\] \[\frac{3|B_{r}|}{4\pi} =r^{3}+\frac{3\mathfrak{m}_{\mathrm{ADM}}}{2}r^{2}+\frac{3}{2}\eta(r)r+o(r^{2})\] as \(r\to+\infty\), where \(B_{r}=\{|x|\leq r\}\) and \(|\eta(r)|\leq\mathrm{C}r^{2-\tau}\). Employing Proposition 2.14 and using Taylor's expansion of \({}_{2}F_{1}\) around \(0\) we have \[\mathfrak{c}_{p}(\partial B_{r}) \leq\left(r^{2}+\eta(r)+o(r)\right)^{\frac{3-p}{2}}{}_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};\frac{2\mathfrak{m}_{\mathrm{ADM}}}{r}+o(r^{-1})\right)^{-(p-1)}\] \[\leq\left(r^{2}+\eta(r)+o(r)\right)^{\frac{3-p}{2}}\left(1+\frac{3-p}{2r}\mathfrak{m}_{\mathrm{ADM}}+o(r^{-1})\right)^{-(p-1)}\] \[=r^{3-p}\left(1+\frac{3-p}{2r^{2}}\eta(r)+o(r^{-1})\right)\left(1-\frac{(3-p)(p-1)}{2r}\mathfrak{m}_{\mathrm{ADM}}+o(r^{-1})\right)\] \[=r^{3-p}+\frac{3-p}{2}\eta(r)r^{1-p}-\frac{(3-p)(p-1)}{2}r^{2-p}\mathfrak{m}_{\mathrm{ADM}}+o(r^{2-p}).\] Proposition 5.2 for \(\alpha=(3-p)/3\) gives \[\mathfrak{m}_{\mathrm{iso}}^{(p)} \geq\limsup_{r\to+\infty}\frac{2\mathfrak{c}_{p}(\partial B_{r})^{\frac{p-2}{3-p}}}{p(3-p)}\left(\left(\frac{3|B_{r}|}{4\pi}\right)^{\frac{3-p}{3}}-\mathfrak{c}_{p}(\partial B_{r})\right)\] \[\geq\limsup_{r\to+\infty}\frac{2\mathfrak{c}_{p}(\partial B_{r})^{\frac{p-2}{3-p}}}{p(3-p)}\left(\frac{p(3-p)}{2}r^{2-p}\mathfrak{m}_{\mathrm{ADM}}+o(r^{2-p})\right)=\mathfrak{m}_{\mathrm{ADM}},\] where the last identity is given by [1, Lemma 2.21]. The conclusion follows by Theorem 5.6, since \(\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{ADM}}\). ## Appendix A Monotonicities along the \(p\)-capacitary potential Here we slightly improve the monotonicity results in [1, 2]. Inspired by these two works, we approximate the \(p\)-capacitary potential with a family of smooth functions. In more detail, consider a strongly \(p\)-nonparabolic Riemannian manifold \((M,g)\) with (possibly empty) boundary. Let \(\Omega\subset M\) be homologous to \(\partial M\) and let \(w_{p}=-(p-1)\log u_{p}\) be the solution to (1.3) starting at \(\Omega\). For every \(T>1\) let \(\Omega_{T}\) be strictly homologous to \(\partial M\) with connected boundary and containing \(\{u_{p}>\alpha_{p}(T)\}\), where \(\alpha_{p}(T)=T^{-(3-p)/(p-1)}\). 
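Observe that, since \(w_{p}=-(p-1)\log u_{p}\),
\[\{u_{p}>\alpha_{p}(T)\}=\big{\{}u_{p}>T^{-\frac{3-p}{p-1}}\big{\}}=\{w_{p}<(3-p)\log T\},\]
so that \(\Omega_{T}\) contains the sublevel set \(\Omega_{(3-p)\log T}\) of the flow; this change of parametrisation is the one encoded by \(\alpha_{p}\) and its inverse in the computations below.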
Then, we define \(u_{p}^{\varepsilon}\) as the solution to the following boundary value problem \[\left\{\begin{aligned} \Delta_{p}^{\varepsilon}u_{p}^{\varepsilon}&=0&&\text{ on }\operatorname{Int}\Omega_{T}\smallsetminus\Omega,\\ u_{p}^{\varepsilon}&=1&&\text{ on }\partial\Omega,\\ u_{p}^{\varepsilon}&=u_{p}&&\text{ on }\partial\Omega_{T},\end{aligned}\right. \tag{A.1}\] where \[\Delta_{p}^{\varepsilon}f=\operatorname{div}\left(|\mathrm{D}f|_{\varepsilon}^{p-2}\mathrm{D}f\right)\qquad\qquad\text{ and }\qquad\qquad|\,\cdot\,|_{\varepsilon}=\sqrt{|\,\cdot\,|^{2}+\varepsilon^{2}}.\] The function \(u_{p}^{\varepsilon}\) is smooth away from the exterior boundary and converges in \(\mathscr{C}^{1,\beta}_{\mathrm{loc}}\) to the \(p\)-capacitary potential \(u_{p}\) as \(\varepsilon\to 0^{+}\). Indeed, this family was used in [1] to prove \(\mathscr{C}^{1,\beta}_{\mathrm{loc}}\)-regularity of \(p\)-harmonic functions. Moreover, looking more carefully at the proof of [1, Lemma 2.1], \(|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\) is uniformly bounded in \(W^{1,2}_{\mathrm{loc}}\). Hence, up to a not relabeled subsequence, we can always assume that \(|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\) weakly converges in \(W^{1,2}_{\mathrm{loc}}\). Moreover, since \(|\mathrm{D}u_{p}^{\varepsilon}|\) converges locally uniformly to \(|\mathrm{D}u_{p}|\), the weak limit of \(\mathrm{D}|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\) must be \(\mathrm{D}|\mathrm{D}u_{p}|^{p-1}\). We are now ready to prove Theorem 2.12. Proof of Theorem 2.12.: For ease of computations we rewrite the function \(t\mapsto\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\) in terms of the \(p\)-capacitary potential, that is, we consider \[F_{p}(t)=4\pi t+\frac{(p-1)^{2}}{(3-p)^{2}}t^{\frac{5-p}{p-1}}\int_{\{u_{p}=\alpha_{p}(t)\}}|\mathrm{D}u_{p}|^{2}\,\mathrm{d}\sigma-\frac{(p-1)}{(3-p)}t^{\frac{2}{p-1}}\int_{\{u_{p}=\alpha_{p}(t)\}}|\mathrm{D}u_{p}|\,\mathrm{H}\,\mathrm{d}\sigma,\] which satisfies \(F_{p}(t)=8\pi\,\mathfrak{c}_{p}(\partial\Omega)^{-\frac{1}{3-p}}\,\mathfrak{m}_{H}^{(p)}(\partial\Omega_{(3-p)\log t})\), so that the monotonicity of \(t\mapsto\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t})\) is equivalent to the monotonicity of \(F_{p}\). We denote by \(F_{\varepsilon}^{p}\) the analogous quantity computed along the level sets of \(u_{p}^{\varepsilon}\), with the mean curvature \(\mathrm{H}_{\varepsilon}\) of \(\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}\) in place of \(\mathrm{H}\). We claim that \[\lim_{\varepsilon\to 0}\int_{1}^{T}\varphi(t)F_{\varepsilon}^{p}(t)\,\mathrm{d}t=\int_{1}^{T}\varphi(t)F_{p}(t)\,\mathrm{d}t\] for every \(\varphi\in\mathscr{C}^{\infty}_{c}(1,T)\). 
Indeed, if that is the case, for every nonnegative test function \(\varphi\in\mathscr{C}_{c}^{\infty}(1,T)\), we obtain that \[\begin{aligned} -\int\limits_{1}^{T}\varphi^{\prime}(t)F_{p}(t)\,\mathrm{d}t&=-\lim_{\varepsilon\to 0}\int\limits_{1}^{T}\varphi^{\prime}(t)F_{\varepsilon}^{p}(t)\,\mathrm{d}t\geq-\frac{(p+1)^{2}}{(3-p)^{2}}\lim_{\varepsilon\to 0}\varepsilon\int\limits_{1}^{T}\varphi(t)t^{\frac{2(3-p)}{p-1}}\int\limits_{\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|\,\mathrm{d}\sigma\,\mathrm{d}t\\ &=-\frac{(p+1)^{2}}{(3-p)^{2}}\lim_{\varepsilon\to 0}\varepsilon\int\limits_{\Omega_{T}\smallsetminus\Omega}\varphi(\alpha_{p}^{-1}(u_{p}^{\varepsilon}))\frac{|\mathrm{D}u_{p}^{\varepsilon}|^{2}}{(u_{p}^{\varepsilon})^{\frac{3-p}{p-1}+3}}\,\mathrm{d}\mu=0,\end{aligned}\] since \(u_{p}^{\varepsilon}\) converges to \(u_{p}\) in \(\mathscr{C}_{\mathrm{loc}}^{1,\beta}\). This shows that \(F_{p}\) has nonnegative first derivative in the sense of distributions, proving its monotonicity. We now turn to prove the claim. Consider any \(\varphi\in\mathscr{C}_{c}^{\infty}(0,+\infty)\). The first term is independent of \(\varepsilon\). As far as the second term is concerned, by the coarea formula we have that \[\int\limits_{1}^{T}\varphi(t)t^{\frac{5-p}{p-1}}\int\limits_{\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{2}\,\mathrm{d}\sigma\,\mathrm{d}t=\frac{(p-1)}{(3-p)}\int\limits_{\Omega_{T}\smallsetminus\Omega}(u_{p}^{\varepsilon})^{-\frac{7-p}{3-p}}\varphi(\alpha_{p}^{-1}(u_{p}^{\varepsilon}))\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{3}\,\mathrm{d}\mu.\] Since the function \(\varphi\) is smooth with compact support in \((1,T)\) and \(u_{p}^{\varepsilon}\) converges to \(u_{p}\) in \(\mathscr{C}_{\mathrm{loc}}^{1,\beta}\), the right hand side converges to \[\frac{(p-1)}{(3-p)}\int\limits_{\Omega_{T}\smallsetminus\Omega}u_{p}^{-\frac{7-p}{3-p}}\varphi(\alpha_{p}^{-1}(u_{p}))|\mathrm{D}u_{p}|^{3}\,\mathrm{d}\mu=\int\limits_{1}^{T}\varphi(t)t^{\frac{5-p}{p-1}}\int\limits_{\{u_{p}=\alpha_{p}(t)\}}|\mathrm{D}u_{p}|^{2}\,\mathrm{d}\sigma\,\mathrm{d}t,\] where the identity follows by the coarea formula. The last term is a little trickier, since it involves second derivatives of the function \(u_{p}^{\varepsilon}\) that do not converge uniformly as \(\varepsilon\to 0\) to the corresponding ones for \(u_{p}\). Employing again the coarea formula and by straightforward computations, we then have that \[\begin{aligned} \int\limits_{1}^{T}\varphi(t)t^{\frac{2}{p-1}}\int\limits_{\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|\,\mathrm{H}_{\varepsilon}\,\mathrm{d}\sigma\,\mathrm{d}t&=\int\limits_{1}^{T}\varphi(t)t^{\frac{2}{p-1}}\int\limits_{\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}}\frac{\langle\mathrm{D}|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\,|\,\mathrm{D}u_{p}^{\varepsilon}\rangle}{|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}}\left(1-\frac{(p-2)}{(p-1)}\frac{\varepsilon^{2}}{|\mathrm{D}u_{p}^{\varepsilon}|^{2}}\right)\,\mathrm{d}\sigma\,\mathrm{d}t\\ &=\frac{(p-1)}{(3-p)}\int\limits_{\Omega_{T}\smallsetminus\Omega}(u_{p}^{\varepsilon})^{-\frac{4}{3-p}}\varphi(\alpha_{p}^{-1}(u_{p}^{\varepsilon}))\frac{\langle\mathrm{D}|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\,|\,\mathrm{D}u_{p}^{\varepsilon}\rangle}{|\mathrm{D}u_{p}^{\varepsilon}|^{p-2}}\left(1-\frac{(p-2)}{(p-1)}\frac{\varepsilon^{2}}{|\mathrm{D}u_{p}^{\varepsilon}|^{2}}\right)\mathrm{d}\mu.\end{aligned}\]
Since \(\mathrm{D}|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}\) converges weakly in \(L^{2}_{\mathrm{loc}}\) to \(\mathrm{D}|\mathrm{D}u_{p}|^{p-1}\) and the remaining factors converge locally uniformly, the first summand converges to the corresponding quantity for \(u_{p}\). Moreover, the remaining term vanishes. Hölder's inequality and the equi-boundedness of \(|\mathrm{D}|\mathrm{D}u_{p}^{\varepsilon}|^{p-1}|\) yield \[\int_{K}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{3-p}\Big|\mathrm{D}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{p-1}\Big|\frac{\varepsilon^{2}}{|\mathrm{D}u_{p}^{\varepsilon}|^{2}_{\varepsilon}}\,\mathrm{d}\mu\leq\mathrm{C}_{1}\left(\,\int_{K}\frac{\varepsilon^{4}}{|\mathrm{D}u_{p}^{\varepsilon}|^{4}_{\varepsilon}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{6-2p}\,\mathrm{d}\mu\right)^{\frac{1}{2}}\] (A.2) for every \(K\) compactly contained in \(\{\alpha_{p}(T)<u_{p}^{\varepsilon}<1\}\) and for some positive constant \(\mathrm{C}_{1}\). Observe that \[\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{6-2p}\frac{\varepsilon^{4}}{|\mathrm{D}u_{p}^{\varepsilon}|^{4}_{\varepsilon}}\leq\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{6-2p}\leq\mathrm{C}_{2},\] since the function \(|\mathrm{D}u_{p}^{\varepsilon}|\) converges locally uniformly and \(1<p<3\). The left-hand side converges almost everywhere to \(0\). Indeed, if a point belongs to the critical set of \(u_{p}\), then \(|\mathrm{D}u_{p}^{\varepsilon}|^{6-2p}\to 0\) as \(\varepsilon\to 0\). Otherwise, \(|\mathrm{D}u_{p}^{\varepsilon}|\) is eventually bounded away from \(0\); then \(|\mathrm{D}u_{p}^{\varepsilon}|^{4}_{\varepsilon}\) does not vanish, and the left-hand side is controlled by \(\varepsilon^{4}\) up to a constant. By the Dominated Convergence Theorem, the right-hand side in (A.2) approaches \(0\) as \(\varepsilon\to 0\), and so does the left-hand side, concluding the step.

We use this theorem to prove an analogue of [14, Theorem 1.2] along the level sets of the \(p\)-capacitary potential. Indeed, this quantity happens to be monotone nonincreasing exactly when the \(p\)-Hawking mass of the evolving hypersurfaces is nonnegative. Relations among these quantities have been considered also in [11].

**Theorem A.1**.: _Let \((M,g)\) be a complete, strongly \(p\)-nonparabolic Riemannian manifold with nonnegative scalar curvature and with smooth, compact and connected (possibly empty) boundary \(\partial M\). Assume that \(H_{2}(M,\partial M;\mathbb{Z})=\{0\}\). Let \(\Omega\subseteq M\) be a bounded closed set with \(\mathscr{C}^{1}\)-boundary homologous to \(\partial M\) and with \(\mathrm{h}\in L^{2}(\partial\Omega)\). Let \(w_{p}\) be the solution to (1.3) starting at \(\Omega\).
Then, denoting \(\Omega_{t}=\{w_{p}\leq t\}\), the function_ \[t\mapsto\frac{\mathfrak{c}_{p}(\partial\Omega_{t})^{-\frac{1}{p-1}}}{4\pi(3-p)}\left(4\pi-\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma\right)\] _belongs to \(W^{1,1}_{\mathrm{loc}}(0,+\infty)\) and_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left[\mathfrak{c}_{p}(\partial\Omega_{t})^{-\frac{1}{p-1}}\left(4\pi-\int_{\partial\Omega_{t}}\frac{|\mathrm{D}w_{p}|^{2}}{(3-p)^{2}}\,\mathrm{d}\sigma\right)\right]=-\frac{8\pi}{p-1}\mathfrak{c}_{p}(\partial\Omega_{t})^{-\frac{2}{(3-p)(p-1)}}\mathfrak{m}_{H}^{(p)}(\partial\Omega_{t}),\] _for almost every \(t\in[0,+\infty)\)._

Proof.: Fix \(T>1\) and let \(u_{p}^{\varepsilon}\) be the solution to the problem (A.1). Consider any \(\varphi\in\mathscr{C}_{c}^{\infty}(0,T)\). Employing the coarea formula and integration by parts, we have that \[\begin{aligned} \int\limits_{1}^{T}\varphi^{\prime}(t)\int\limits_{\{u_{p}^{\varepsilon}=\alpha_{p}(t)\}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{2}\,\mathrm{d}\sigma\,\mathrm{d}t&=\frac{(p-1)}{(3-p)}\int\limits_{\Omega_{T}\smallsetminus\Omega}\varphi^{\prime}\left((u_{p}^{\varepsilon})^{-\frac{p-1}{3-p}}\right)(u_{p}^{\varepsilon})^{-\frac{2}{3-p}}\big|\mathrm{D}u_{p}^{\varepsilon}\big|^{3}\,\mathrm{d}\mu\\ &=-\int\limits_{\Omega_{T}\smallsetminus\Omega}\left\langle\mathrm{D}\left[\varphi\left((u_{p}^{\varepsilon})^{-\frac{p-1}{3-p}}\right)\right]\,\Big|\,\big|\mathrm{D}u_{p}^{\varepsilon}\big|\,\mathrm{D}u_{p}^{\varepsilon}\right\rangle\mathrm{d}\mu\\ &=\int\limits_{\Omega_{T}\smallsetminus\Omega}\varphi\left((u_{p}^{\varepsilon})^{-\frac{p-1}{3-p}}\right)\operatorname{div}\left(\big|\mathrm{D}u_{p}^{\varepsilon}\big|\,\mathrm{D}u_{p}^{\varepsilon}\right)\mathrm{d}\mu.\end{aligned}\] (A.3)
Clearly, employing the \(\mathscr{C}^{1,\beta}_{\mathrm{loc}}\) convergence of \(u^{\varepsilon}_{p}\to u_{p}\) as \(\varepsilon\to 0\), we can pass to the limit in the left-hand side of (A.3). Moreover, a straightforward computation leads to \[\operatorname{div}\left(\left|\mathrm{D}u^{\varepsilon}_{p}\right|\mathrm{D}u^{\varepsilon}_{p}\right)=\frac{\left|\mathrm{D}u^{\varepsilon}_{p}\right|^{2-p}}{(p-1)}\left((3-p)\frac{\left|\mathrm{D}u^{\varepsilon}_{p}\right|^{2}}{\left|\mathrm{D}u^{\varepsilon}_{p}\right|^{2}_{\varepsilon}}+\frac{\varepsilon^{2}}{\left|\mathrm{D}u^{\varepsilon}_{p}\right|^{2}_{\varepsilon}}\right)\Big\langle\mathrm{D}\big|\mathrm{D}u^{\varepsilon}_{p}\big|^{p-1}\,\Big|\,\mathrm{D}u^{\varepsilon}_{p}\Big\rangle.\] Arguing as in the previous theorem, since \(\left|\mathrm{D}u^{\varepsilon}_{p}\right|\to\left|\mathrm{D}u_{p}\right|\) locally uniformly and \(\mathrm{D}\big|\mathrm{D}u^{\varepsilon}_{p}\big|^{p-1}\rightharpoonup\mathrm{D}\big|\mathrm{D}u_{p}\big|^{p-1}\) weakly in \(L^{2}_{\mathrm{loc}}\) as \(\varepsilon\to 0\), one gets \[\lim_{\varepsilon\to 0}\,\int\limits_{\Omega_{T}\smallsetminus\Omega}\cdots\]
2307.08568
Congestion and Scalability in Robot Swarms: a Study on Collective Decision Making
One of the most important promises of decentralized systems is scalability, which is often assumed to be present in robot swarm systems without being contested. Simple limitations, such as movement congestion and communication conflicts, can drastically affect scalability. In this work, we study the effects of congestion in a binary collective decision-making task. We evaluate the impact of two types of congestion (communication and movement) when using three different techniques for the task: Honey Bee inspired, Stigmergy based, and Division of Labor. We deploy up to 150 robots in a physics-based simulator performing a sampling mission in an arena with variable levels of robot density, applying the three techniques. Our results suggest that applying Division of Labor coupled with versioned local communication helps to scale the system by minimizing congestion.
Karthik Soma, Vivek Shankar Vardharajan, Heiko Hamann, Giovanni Beltrame
2023-07-17T15:36:05Z
http://arxiv.org/abs/2307.08568v1
# Congestion and Scalability in Robot Swarms: a Study on Collective Decision Making ###### Abstract One of the most important promises of decentralized systems is scalability, which is often assumed to be present in robot swarm systems without being contested. Simple limitations, such as movement congestion and communication conflicts, can drastically affect scalability. In this work, we study the effects of congestion in a binary collective decision-making task. We evaluate the impact of two types of congestion (communication and movement) when using three different techniques for the task: Honey Bee inspired, Stigmergy based, and Division of Labor. We deploy up to 150 robots in a physics-based simulator performing a sampling mission in an arena with variable levels of robot density, applying the three techniques. Our results suggest that applying Division of Labor coupled with versioned local communication helps to scale the system by minimizing congestion. ## I Introduction Swarm robotics takes inspiration from natural swarms to design coordinated behaviors. Since natural swarms exhibit properties like scalability, fault tolerance, robustness, and parallelism, it is often assumed that these would also be present in artificial systems like robot swarms [1]. Designing robot swarms with local control rules to attain a global swarm behavior through emergence alone might not be sufficient to ensure scalability. Practical constraints such as crowding and communication issues hinder the scalability of these systems and affect the deployment of robot swarms in real-world scenarios [2]. In general, when robots in a swarm share access to a resource (whether a communication medium or physical space), it often gives rise to congestion. Consequently, designing and deploying robot swarms involves choosing local communication and coordination strategies and adapting to a swarm size that will limit congestion. Making the swarm size too large could conversely affect the task performance, giving rise to an optimal swarm size to maximize performance [3]. In some application scenarios, the swarm size could not be chosen, and the system must perform reasonably even when congested. We believe it is fundamental to understand the role of congestion to address and design strategies to achieve optimal performance for robot swarms. We investigate the effect of congestion on a binary decision-making problem where the robots assess the quality of two sites via sampling and collectively determine the superior location (see fig. 1). The robots share an arena of a given size (the "space medium"), a limited communication medium, have a collision prevention behavior, and a belief propagation mechanism through local communication. We identify two types of congestion: movement congestion, which happens when robots hinder each other's movements and is proportional to the arena occupancy and the robot behavior; and communication congestion, which is caused by belief propagation conflicts that depend on the recency of the belief, communication range and the accuracy of the belief. We answer three research questions: 1. What are the effects of movement and communication congestion w.r.t media occupancy? 2. What could be the essential factors that contribute to congestion? 3. Does introducing additional coordination mechanisms reduce congestion? 
The remainder of the paper is organized into the following sections: we discuss some related works in II, explain the problem setting in III, explain the strategies mentioned above in IV, report the results in V, and draw some conclusions in VI. ## II Related work _Collective decision-making:_ There is a vast literature of self-organizing discrete collective decision-making (DCDM) strategies inspired by the house-hunting behavior [4, 5] and positive feedback modulation [6] from the waggle dance of honey bees, where the task of the swarm is to find the best of two discrete options spatially segregated into zones (see Figure 1). Each agent assesses the qualities of sites, advertises its opinion proportionally to the quality of the zone, and applies a voter-based [7] or majority-based [8] decision rule. This problem is extended to dynamic site qualities in [9]. In a slightly different setting, the swarm is tasked with estimating the frequency of features spread all over the environment, for a single feature [10], with noise [11], and for multiple features [12]. Further, Bayesian approaches were formulated and studied for static [13, 14] and dynamic environments [15]. Continuous collective decision-making (CCDM), on the other hand, deals with finding consensus on some environmental feature (e.g., intensity [16], environmental edge [17] and tile density [18]). None of the above decision-making strategies address the movement and communication congestion arising from increasing system size. In this work, we adapt the existing static, discrete collective decision-making setting and strategy from [7, 8], coupled with a feature-like distribution limited to the zones [10], combining nest site selection and collective perception from the swarm robotics literature.

Fig. 1: Diagram of a typical binary collective decision-making scenario.

_Congestion prediction or mitigation:_ Ants [19] and humans [20] form self-organizing lanes that help avoid congestion. In artificial systems, some measures used to quantify movement congestion are throughput and collisions. Throughput encodes the ability of multiple robots to reach a given target, and Dos Passos et al. [21] use throughput to compare congestion of various strategies. Yu and Wold [22] deploy ConvLSTMs to predict delays caused by congestion in a centralized warehouse management system and increase throughput. Proximity encounters and collisions are often used as a measure for congestion: a strategy to avoid head-on collisions between two groups of swarms was proposed in [23], and Wu et al. [24] propose collision-aware task assignment to minimize congestion. Communication congestion is often correlated with a degraded medium offering lower bandwidths [25, 26]. In robot swarms, propagating beliefs with an increasing number of robots can generate conflicts on top of these bandwidth concerns. We use communication conflicts as a metric to quantify communication congestion. _Division of labor:_ A taxonomy of heterogeneous robot swarms includes two high-level classes: behaviorally (software) and physically (hardware) different swarm members [27]. Behaviorally distinct swarm members often have uniform hardware with role-specific behavior as in [28], where agents specialize to become collectors or droppers in a food transporting task. Behavioral variations can be dynamically triggered based on environmental features [29] or could be static to divide tasks, as in shepherding [30].
Swarms of physically distinct robots can benefit from traversing parts of the environment with aerial and ground robots [31] or collaboratively mapping the environment with various sensors [32]. Having physically and behaviorally different swarm members can offer efficient task completion during a collaborative mapping task [33]. The benefits of physically and behaviorally heterogeneous swarms have also been demonstrated in missions like search and retrieval [34] and formation control [35]. In this work, we use a physically uniform and behaviorally distinct swarm to study decision-making in the Division of Labor technique. ## III Problem Setting We consider an arena of size \(U\times V\) subdivided into three zones: A, B, and Nest. Each sampling zone (A, B) is covered by a uniform distribution of black and white tiles whose fill ratio represents the quality of the site, \(\rho\in[0,1]\), where 0 corresponds to completely black and 1 to completely white. The swarm is composed of \(N\) Khepera IV robots (modeled as \(\dot{x_{i}}=u_{i}\), where \(x_{i}\in\mathbb{R}^{2}\) is the position of robot \(i\)), each with a circular communication model of range \(R\), a ground footprint of 0.045 \(m^{2}\), and equipped with 4 ground (\(G_{i}=\{G_{i}^{0},..,G_{i}^{3}\}\)), 8 proximity (\(P_{i}=\{P_{i}^{0},..,P_{i}^{7}\}\)), and 8 light sensors (\(L_{i}=\{L_{i}^{0},..,L_{i}^{7}\}\)). Each robot has to individually collect \(S_{T}\) samples using the ground sensors, calculate and communicate its belief state (\(0\leq bel_{i}\leq 1\)), and avoid collisions. The swarm collectively decides the highest quality zone (A or B). There are five beacon robots placed at the boundary of each sampling zone that constantly broadcast zone option messages (i.e., A or B) to help robots situate themselves inside the sampling zones. If a robot receives no broadcast message, it is considered to be in the Nest zone. To help robots move between the zones, a light source is placed above zone A: following the light gradient using the light sensors (_PT_ - Phototaxis) leads the robots to zone A, while doing the opposite (\(!PT\) - Antiphototaxis) leads the robots away from zone A towards zone B. ## IV Approach We consider three state machines outlined in fig. 2: the Honey Bee, Stigmergy, and Division of Labor decision-making strategies. These state machines are made of robot behaviors such as Diffusion (_DF_), Collision Avoidance (_CA_), Phototaxis (_PT_), and AntiPhototaxis (\(!PT\)). _Collision Avoidance (CA):_ To avoid obstacles and other robots, every robot uses the proximity sensors \(P_{i}\). An obstacle vector is constructed as \(V_{i}^{o}=\frac{\sum_{j=0}^{7}P_{i}^{j}}{||P_{i}||}\) and applied as a control input to the robot as shown below, where \(S_{o}\) is a scaling factor and \(O_{lt}\) and \(O_{at}\) are threshold parameters for obstacle avoidance. \[\dot{x}_{i}=\frac{-S_{o}V_{i}^{o}}{||V_{i}^{o}||},\quad||V_{i}^{o}||\geq O_{lt}\ \&\ -O_{at}\leq\angle V_{i}^{o}\leq O_{at} \tag{1}\] This behavior moves the robot in the opposite direction of the aggregated obstacle vector (\(V_{i}^{o}\)), hence locally avoiding collisions. _Phototaxis and AntiPhototaxis (PT and \(!PT\)):_ To move between zones, robots use the light sensors \(L_{i}\), whose readings are defined by the equation \(||L_{i}^{j}||=(I/x)^{2}\), where \(I\) is the reference intensity and \(x\) the distance between the light and the sensor.
A light vector is constructed as \(V_{i}^{l}=\frac{\sum_{j=0}^{7}L_{i}^{j}}{||L_{i}||}\) and applied as a control input to the robot as shown below, unless a collision is detected, where \(S_{l}\) is a scaling factor. \[\dot{x_{i}}=\begin{cases}\frac{S_{l}V_{i}^{l}}{||V_{i}^{l}||}\,&PT\\ \frac{-S_{l}V_{i}^{l}}{||V_{i}^{l}||}\,&!PT\end{cases} \tag{2}\] _Diffusion (DF)_: When the robot needs to explore the sampling zones to collect samples, or to mix with other agents for efficient information propagation while advertising beliefs in the Nest zone, it uses diffusion, where the robot just moves forward in the local frame with the maximum speed (\(u_{i}^{x}=M_{s}\), \(u_{i}^{y}=0\)), unless a collision is detected. Collisions with other robots and obstacles help the robot diffuse. ### _Honey Bee_ In this decision-making strategy, the robots are first initialized in a random distribution in the Nest zone. Robots with even/odd IDs are assigned (\(Z_{i}\)) to sample zone A/B respectively. To reach zone A/B the robots perform \(PT/!PT+CA\). When they reach the zone, they will receive a broadcast from the A/B zone beacons. Upon reaching the zone, the robots diffuse (\(DF+CA\)) and start collecting \(S_{T}\) samples from their ground sensors, where each sample is \(bel_{i}(t)=\frac{\sum_{j=0}^{3}G_{i}^{j}}{||G_{i}||}\). After this, they come back to the Nest zone to disseminate their averaged beliefs by executing the behavior \(!PT/PT+CA\), opposite to the one used to reach zones A/B. Upon reaching the Nest zone, robots broadcast their averaged individual beliefs, calculated as \(avg_{i}^{bel}=\frac{\sum_{t=1}^{S_{T}}bel_{i}(t)}{S_{T}}\), while diffusing (\(DF+CA\)) for a period of time (\(W_{T}\propto avg_{i}^{bel}\)). This positive modulation of belief dissemination is done to influence more robots to choose the best site. Before the end of this period of time (\(W_{T}\)), robots start collecting their local neighbors' (\(n_{i}\)) beliefs. Robots further divide \(n_{i}\) into two sets \(nA/B:=\{j\,|\,j\in n_{i}\text{ and }Z_{j}=A/B\}\). Along with their own beliefs, robots calculate two aggregated averages, one each for zone \(Z_{i}\) and \(!Z_{i}\): \[agg_{i}^{Z_{i}}=\frac{\sum_{j=1}^{|nZ_{i}|}avg_{j}^{bel}+avg_{i}^{bel}}{|nZ_{i}|+1} \tag{3}\] \[agg_{i}^{!Z_{i}}=\begin{cases}\frac{\sum_{j=1}^{|n!Z_{i}|}avg_{j}^{bel}}{|n!Z_{i}|}&|n!Z_{i}|>0\\ 0.0&|n!Z_{i}|=0\end{cases} \tag{4}\] If \(agg_{i}^{!Z_{i}}>agg_{i}^{Z_{i}}\), then \(Z_{i}\) is updated to \(!Z_{i}\) (positive modulation recruiting more robots towards the higher quality site, represented by blue lines in the left of fig. 2), otherwise it remains the same and the cycle is continued. The experiment is continued until all the robots form the same opinion. This differs from the approaches used in [7, 8] in two ways. First, the qualities of the zones are not directly broadcast when the robots enter a zone; the robots estimate them using their ground sensors. This change was done to make robots explore the zone, which is more realistic than the scenarios considered in [7, 8], and it further emphasizes the effect of movement flexibility in collective decision-making, as robots now have to move within zones. The second change is that the individual averaged beliefs (not opinions) are broadcast, to allow easier decision-making during tie-breaks and to enable a belief consensus with virtual stigmergy. This change requires very little communication overhead.
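To make the aggregation and switching rule concrete, here is a minimal Python sketch (an illustration added here, not the authors' implementation; all names are ours) of Eqs. (3)-(4) and the zone update described above.

```python
# Minimal sketch of the Honey Bee decision rule: average own samples, aggregate
# neighbours' averaged beliefs per zone (Eqs. 3-4), and switch zone if the other
# zone looks better. This is an illustration, not the paper's code.
from dataclasses import dataclass
from typing import List

@dataclass
class NeighbourBelief:
    zone: str          # neighbour's assignment Z_j, 'A' or 'B'
    avg_bel: float     # neighbour's averaged belief avg_j^bel

def averaged_belief(samples: List[float]) -> float:
    """avg_i^bel = (1/S_T) * sum of ground-sensor samples bel_i(t)."""
    return sum(samples) / len(samples)

def decide_zone(z_i: str, avg_bel_i: float,
                neighbours: List[NeighbourBelief]) -> str:
    """Return the (possibly updated) zone assignment Z_i."""
    same = [n.avg_bel for n in neighbours if n.zone == z_i]
    other = [n.avg_bel for n in neighbours if n.zone != z_i]
    # Eq. (3): aggregate over own-zone neighbours, including the robot's own belief.
    agg_same = (sum(same) + avg_bel_i) / (len(same) + 1)
    # Eq. (4): aggregate over other-zone neighbours, 0 if there are none.
    agg_other = sum(other) / len(other) if other else 0.0
    return ('B' if z_i == 'A' else 'A') if agg_other > agg_same else z_i

# Example: a zone-A robot surrounded by convincing zone-B advertisers switches to B.
me = averaged_belief([0.2, 0.1, 0.3])
print(decide_zone('A', me, [NeighbourBelief('B', 0.9), NeighbourBelief('B', 0.8)]))
```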
### _Stigmergy_ In this decision-making strategy, we adopt the same state machine from the Honey Bee approach, but instead of using local communication broadcasts, we use a versioned local communication approach (virtual stigmergy [36]) to store the aggregated beliefs of both zones in separate entries (\(agg^{A/B}\)). Virtual stigmergy creates a shared tuple memory among the robots, where each entry contains a key identifier, a Lamport clock (version number), the id of the robot modifying the value, and the value to be stored. Robots in the swarm are allowed to read and write the local copy of the tuple value. Each access to the local memory creates a message to be broadcast in the local neighborhood. Whenever a robot receives a more recent update to the tuple, it updates the local memory and broadcasts the entry, allowing more recent entries to be propagated. The entries in the virtual stigmergy are synchronized as long as the robots are connected [36], i.e., a communication path exists between any two connected robots. With virtual stigmergy, robots can communicate with other robots even with movement congestion. With this property, it does not make sense for the robots to spend time advertising their averaged beliefs for a duration proportional to the average belief (\(avg_{i}^{bel}\)). Therefore \(W_{T}\) is constant, irrespective of the quality of the site. \(W_{T}\) still has to be non-zero, as mixing robots remains essential for synchronizing entries. At the beginning of this period (\(W_{T}\)), the robots read the entry of the zone they are assigned to (\(Z_{i}\)) and update it using equation 5. \[agg^{Z_{i}}=agg^{Z_{i}}+w(avg_{i}^{bel}-agg^{Z_{i}}) \tag{5}\] where \(w\) is the weight parameter. Instead of calculating \(agg^{A/B}\) as in equations 3 and 4, the robots use the values from the stigmergy (note that the subscript \(i\) is dropped for \(agg^{Z_{i}}\) in equation 5). As multiple robots might try to update the stigmergy at the same time (communication conflicts), a conflict resolution manager is used that keeps track of the maximum value of the aggregated belief for all robots. We count the number of conflicts occurring in this manager as the number of communication conflicts.

Fig. 2: The state machines illustrate the behavioral states of robots during the three strategies Honey Bee, Stigmergy, and Division of Labor, SM(1-3). Every robot in the swarm deployed a corresponding state machine during the evaluation runs.
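The following is a minimal, self-contained Python sketch of this mechanism (an illustration; it simplifies the virtual stigmergy of [36], and the conflict-resolution rule and all class and variable names are our assumptions based on the description above): a versioned tuple store with a Lamport clock per key, the weighted update of Eq. (5), and a counter for write conflicts.

```python
# Simplified virtual-stigmergy sketch: versioned key/value entries, Eq. (5) update,
# and a conflict counter used as the communication-congestion metric.
from dataclasses import dataclass

@dataclass
class Entry:
    value: float       # stored aggregated belief agg^A or agg^B
    clock: int = 0     # Lamport clock (version number)
    writer: int = -1   # id of the robot that wrote the value

class VirtualStigmergy:
    def __init__(self, robot_id: int):
        self.robot_id = robot_id
        self.store = {}          # local copy of the shared tuple space
        self.conflicts = 0       # communication-conflict counter

    def put(self, key: str, value: float):
        e = self.store.get(key, Entry(0.0))
        self.store[key] = Entry(value, e.clock + 1, self.robot_id)
        return key, self.store[key]              # message broadcast to neighbours

    def get(self, key: str) -> float:
        return self.store.get(key, Entry(0.0)).value

    def on_message(self, key: str, incoming: Entry):
        local = self.store.get(key, Entry(0.0))
        if incoming.clock > local.clock:
            self.store[key] = incoming           # newer version wins, re-broadcast
        elif incoming.clock == local.clock and incoming.writer != local.writer:
            self.conflicts += 1                  # same version, different writer
            if incoming.value > local.value:     # simple resolution: keep the maximum
                self.store[key] = incoming

def update_belief(vs: VirtualStigmergy, zone: str, avg_bel: float, w: float = 0.3):
    """Eq. (5): agg^Z <- agg^Z + w * (avg_i^bel - agg^Z)."""
    agg = vs.get(zone)
    vs.put(zone, agg + w * (avg_bel - agg))
```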
### _Division of Labor_ It can be seen that every robot in the Honey Bee approach pursues two roles, sampling and advertising, which mandates movement of robots between zones. In this approach, instead, we assign fixed, permanent roles to robots, samplers and networkers, hence spatially segregating them into the zones (A, B) and the Nest respectively. Robots are randomly initialized in the Nest zone. One-third of the robots are assigned (\(Z_{i}\)) to sample zone A; they follow the same state machine from the Honey Bee approach until they enter zone A. Similarly, one-third of the robots are assigned to be zone B samplers. This approach differs from the previous approaches after this point, as the robots stay and diffuse in their zones (essentially disabling the positive modulation that, in the previous approaches, recruits more and more robots towards the higher quality zone), and after every \(S_{T}\) samples collected, they read both entries to keep track of the best zone opinion and update the aggregated belief in the stigmergy (similar to equation 5). The remaining one-third of the robots stay and diffuse in the Nest zone acting as networkers, providing connectivity between the samplers for efficient belief propagation between both sampling zones. Additionally, they also constantly keep track of the best zone opinion. This is continued until all the robots form one opinion. ## V Results We investigate the scalability of the three approaches using the following metrics: 1. average time spent by every robot avoiding collisions with other robots and obstacles (arena walls), 2. average communication conflicts per robot while updating the virtual stigmergy, and 3. total time taken for all the robots to converge to the highest quality opinion. During all the experimental evaluations, we deploy the robots in a fixed arena of dimensions \(U=4\ m\), \(V=4\ m\) with a Nest of size \(2m\times 4m\), site A quality \(\rho_{A}=0.9\), and site B quality \(\rho_{B}=0.1\). We varied the number of robots \(N\in\{2,\ 4,\ 6,\ 10,\ 20,\ 40,\ 60,\ 80,\ 100,\ 120,\ 150\}\), corresponding to a Nest robot density of \(\{1.3,\ 2.6,\ 3.9,\ 6.5,\ 13.1,\ 26.2,\ 39.4,\ 52.5,\ 65.7,\ 78.8,\ 98.5\}\)\(\times\ 10^{-2}\), for the Honey Bee and Stigmergy based decision-making strategies. Similarly, for the Division of Labor technique, we varied the robot numbers \(N\in\{3,\ 6,\ 9,\ 12,\ 24,\ 36,\ 60,\ 81,\ 99,\ 120,\ 150\}\), corresponding to a Nest robot density of \(\{1.9,\ 3.9,\ 5.9,\ 7.8,\ 15.7,\ 23.6,\ 39.4,\ 53.2,\ 65,\ 78.8,\ 98.5\}\)\(\times\ 10^{-2}\). We set the communication range for all three techniques to \(R\in\{0.4\ m,\ 0.8\ m,\ 1.2\ m\}\) and repeated each configuration 30 times with randomized robot placement following a normal distribution in the Nest zone. To further understand the effects of movement congestion, we plot the accumulated stagnation heatmap (defined as a robot spending over \(St_{T}\) seconds in grid cells of size \(0.2\times 0.2\)) for an interval \([T_{s},T_{f}]\) in fig. 4 and fig. 5. The averaged movement change gridmap divides the arena into grid cells of size \(0.2\times 0.2\) for an interval \([T_{s},T_{f}]\) in fig. 6. The averaged movement change for each grid cell is calculated by averaging the movement vectors of the robots in the cell over consecutive time steps (\(x_{i}(t+1)-x_{i}(t)\)). The stagnation heatmap and movement change gridmap are averaged over all 30 repetitions of a given configuration. The stagnation heatmap shows the congestion in space, while the averaged movement change gridmap shows the movement of robots.
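As an illustration of how these two metrics can be computed, the following is a sketch (our illustration; it assumes robot trajectories are logged as an array and that the arena origin is at (0,0), and all function and variable names are ours):

```python
# Sketch of the stagnation heatmap and the averaged movement change gridmap,
# computed from a trajectory array traj[t, i, :] = (x, y) of robot i at timestep t.
import numpy as np

def grid_index(xy, cell=0.2, shape=(20, 20)):
    ij = (np.asarray(xy) // cell).astype(int)
    return tuple(np.clip(ij, 0, np.array(shape) - 1))

def stagnation_heatmap(traj, dt, st_threshold, cell=0.2, shape=(20, 20)):
    """Count, per grid cell, the robots that stay in the cell longer than St_T."""
    heat = np.zeros(shape)
    T, N, _ = traj.shape
    for i in range(N):
        cells = [grid_index(traj[t, i], cell, shape) for t in range(T)]
        run_start = 0
        for t in range(1, T + 1):
            if t == T or cells[t] != cells[run_start]:
                if (t - run_start) * dt > st_threshold:
                    heat[cells[run_start]] += 1
                run_start = t
    return heat

def movement_change_gridmap(traj, cell=0.2, shape=(20, 20)):
    """Average movement vector x_i(t+1) - x_i(t) of the robots found in each cell."""
    vec_sum = np.zeros(shape + (2,))
    counts = np.zeros(shape)
    T, N, _ = traj.shape
    for t in range(T - 1):
        for i in range(N):
            c = grid_index(traj[t, i], cell, shape)
            vec_sum[c] += traj[t + 1, i] - traj[t, i]
            counts[c] += 1
    return vec_sum / np.maximum(counts, 1)[..., None]
```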
(3) _Division of Labor helps to minimize movement congestion._ Fig. 5 shows the stagnation heatmap for the Division of Labor approach. The zone samplers and the Nest-zone networker robots experience minimal stagnation within their respective zones, as they are contained within their zones (except at T=start, since robots are deployed in the Nest zone). The minimal stagnation in the grids directly reflects on the convergence time in fig. 3, where convergence time and time spent on collisions are minimal compared to the other two approaches. However, communication conflicts are larger than in the Stigmergy approach, as more updates to the robot beliefs propagate through the swarm. The Stigmergy approach still suffers from movement congestion, thereby influencing the ability of the robots to move, sample, and update the stigmergy, which results in fewer conflicts.

(4) _Longer communication ranges make a positive difference only with the local broadcast approach and have a negative impact with the versioned local communication strategy for a larger number of robots._ Longer communication range combinations used in the Honey Bee approach improve the total time and the time spent avoiding collisions (ref. fig. 3) compared to shorter communication ranges (\(R=0.4\ m<0.8\ m<1.2\ m\)) for any number of robots in the system, except \(N=150,\ R=0.8\ m\), which has a slight increase in the time spent avoiding collisions per robot compared to \(N=150,\ R=0.4\ m\). In contrast, the number of conflicts arising with the versioned local communication approach increases with a longer communication range and a higher number of robots (\(N>60\)). As the movement congestion does not impact the propagation of beliefs with the versioned local communication approach, there is no significant improvement in the total time taken and the time spent avoiding collisions for these approaches compared to shorter communication ranges for a fixed number of robots (for \(N>20\), ref. fig. 3).

Fig. 3: Congestion trends for scalability metrics for all three approaches.

It would appear that the total time trends for all three approaches follow a pattern similar to that of any decentralized system [3], and the optimal number of robots for our setting would be roughly in the range \(N\in[20,60]\). All experiments of Stigmergy and Division of Labor converged to the superior quality opinion before the 6000 s timeout period, whereas some experiments of \((N=120,\ R=0.8)\), \((N=2,\ R=\{0.4,0.8\})\), and all experiments in \((N=120,\ R=0.4)\), \((N=150,\ R=\{0.4,0.8\})\), for the Honey Bee approach failed to converge to any opinion. ## VI Conclusions Current collective decision-making strategies rarely address congestion-related issues. This will have huge implications when it comes to deploying robot swarm systems in real-world scenarios, as these systems will scale poorly. In this paper, we discuss the impact of movement congestion and belief propagation conflicts on swarm behaviors, specifically collective decision-making. We find that using versioned local communication and Division of Labor mechanisms helps to reduce the impact of movement congestion, despite the increasing trends for communication conflicts. Further research could look into congestion-aware initialization strategies, congestion-aware collision avoidance, and dynamic approaches to switch between different state machines for collective decision-making systems. We believe our results transfer to other areas of swarm robotics such as foraging, task allocation, collective construction, etc., and would welcome additional studies in these domains.

Fig. 4: Stagnation heatmaps for the positive modulation approaches are shown in this figure for the combination \(Z=A\), \(R=0.4\ m\), \(S_{T}\)=1s (10 simulation timesteps). For the Honey Bee approach \([T_{s},T_{f}]=[95\%,\ 100\%]\) and for the Stigmergy approach \([T_{s},T_{f}]=[85\%,\ 100\%]\). The barrier is of higher magnitude near the (zone A-Nest) boundary and has a decreasing radial gradient (in bands) away from zone A (the gradient follows a similar pattern to the light intensity from the light centered at the end of zone A). The Honey Bee approach is significantly more congested for a fixed combination compared to the Stigmergy approach (the Honey Bee row is normalized by 5000 and the Stigmergy row by 1000).

Fig. 5: Stagnation heatmaps for the Division of Labor approach are shown in this figure for \(N=150\), \(R=0.4\ m\), \(S_{T}\)=1s (10 simulation timesteps). T = start represents \([T_{s},T_{f}]=[0\%,\ 15\%]\), T = middle represents \([T_{s},T_{f}]=[15\%,\ 85\%]\), and T = end represents \([T_{s},T_{f}]=[85\%,\ 100\%]\). It can be seen that the magnitudes of stagnation are smaller compared to fig. 4, and stagnation outside the assigned zones occurs only in the starting phases of the experiments. (All the rows are normalized by 1000.)

Fig. 6: Movement changes for the combination \(N\in\{20,\ 100,\ 150\}\), \(R=0.4\ m\), \(Z=A\), \([T_{s},T_{f}]=[85\%,\ 100\%]\). The top row shows the averaged movement change of robots entering zone A from the Nest to sample (zone A followers in the Stigmergy approach, fig. 2) and the bottom row shows the averaged movement change of robots entering the Nest from zone A (Nest followers in the Stigmergy approach, fig. 2).
2306.07572
Clairaut anti-invariant Riemannian maps to trans-Sasakian manifolds
In this article, we introduce Clairaut anti-invariant Riemannian maps from Riemannian manifolds to trans-Sasakian manifolds. We derive necessary and sufficient condition for an anti-invariant map to be Clairaut when base manifold is trans-Sasakian manifold. We discuss the integrability of range\pi_* and (range\pi_*)^\perp. Further, we establish harmonicity of these maps. Finally, we construct nontrivial examples of such maps for justification.
Adeeba Zaidi, Gauree Shanker
2023-06-13T06:49:30Z
http://arxiv.org/abs/2306.07572v1
# Clairaut anti-invariant Riemannian maps to trans-Sasakian manifolds ###### Abstract In this article, we introduce Clairaut anti-invariant Riemannian maps from Riemannian manifolds to trans-Sasakian manifolds. We derive a necessary and sufficient condition for an anti-invariant Riemannian map to be Clairaut when the base manifold is a trans-Sasakian manifold. We discuss the integrability of \(range\pi_{*}\) and \((range\pi_{*})^{\perp}\). Further, we establish the harmonicity of these maps. Finally, we construct nontrivial examples of such maps for justification. **Mathematics Subject Classification:** Primary 53C15; Secondary 53C25, 54C05. **Keywords and Phrases:** Contact manifolds, trans-Sasakian manifolds, Riemannian maps, anti-invariant Riemannian maps, Clairaut maps. ## 1 Introduction The concept of Riemannian maps between Riemannian manifolds was first introduced by Fischer in 1992 ([7]). He described Riemannian maps as a generalization of isometric immersions, Riemannian submersions and isometries. An interesting feature of Riemannian maps is that they satisfy a generalized eikonal equation, which is a bridge between the geometrical and physical aspects of optics. In [7], Fischer described: let \(\pi:(M,g_{1})\rightarrow(B,g_{2})\) be a smooth map between smooth finite dimensional Riemannian manifolds \((M,g_{1})\) and \((B,g_{2})\) such that \(0<rank\,\pi<\min\{\dim M,\dim B\}\). Let \(\pi_{*p}:T_{p}M\to T_{\pi(p)}B\) denote the differential map at \(p\in M\), and \(\pi(p)\in B\). Then \(T_{p}M\) and \(T_{\pi(p)}B\) split orthogonally with respect to \(g_{1}(p)\) and \(g_{2}(\pi(p))\), respectively, as ([7]) \[T_{p}M =ker\pi_{*p}\oplus(ker\pi_{*p})^{\perp},\] \[=\mathcal{V}_{p}\oplus\mathcal{H}_{p},\] \[T_{\pi(p)}B =range\pi_{*p}\oplus(range\pi_{*p})^{\perp},\] where \(\mathcal{V}_{p}=ker\pi_{*p}\) and \(\mathcal{H}_{p}=(ker\pi_{*p})^{\perp}\) are the vertical and horizontal parts of \(T_{p}M\), respectively. The map \(\pi\) is called a Riemannian map at \(p\in M\) if the horizontal restriction \[(\pi_{*p})^{h}=\pi_{*p}\ |\ _{\mathcal{H}_{p}}:\mathcal{H}_{p}\to range\pi_{*p}\] is a linear isometry between the spaces \(((ker\pi_{*p})^{\perp},g_{1}\ |_{(ker\pi_{*p})^{\perp}})\) and \((range\pi_{*p},g_{2}|_{(range\pi_{*p})}).\) In other words, \((\pi_{*p})^{h}\) satisfies the equation \[g_{2}(\pi_{*}W,\pi_{*}Z)\ =\ g_{1}(W,Z), \tag{1.1}\] for all vector fields \(W,Z\) tangent to \(\Gamma((ker\pi_{*p})^{\perp})\). During the last three decades, many authors have studied Riemannian maps ([12, 14, 15, 22, 1]) and the investigation is still going on. Clairaut's theorem plays an important role in differential geometry; it states that for any geodesic \(\gamma\) on a surface of revolution, the function \(r\sin\theta\) is constant, where \(r\) is the distance between a point on the surface and the rotation axis, and \(\theta\) is the angle between \(\gamma\) and the meridian curve through \(\gamma\). Inspired by this theorem, Bishop ([5]) introduced Clairaut Riemannian submersions and derived necessary and sufficient conditions for a submersion to be a Clairaut Riemannian submersion. Since then, Riemannian submersions have been investigated broadly in both Hermitian and contact geometry [17, 18, 9, 8]. Sahin [13, 16] investigated Clairaut conditions on Riemannian maps. Later, various types of Clairaut Riemannian maps, such as invariant, anti-invariant and semi-invariant, have been studied with Kähler and cosymplectic structures [11, 19, 20, 21, 16].
The notion of a trans-Sasakian structure \((\psi,\xi,\eta,g,\alpha,\beta)\) on \((2n+1)\)-dimension manifold \(B\), was introduced by Oubina ([10]) which can be seen as a generalization of Sasakian, Kenmotsu and cosymplectic structure on a contact metric manifold, where \(\alpha,\beta\) are smooth functions on \(B\). Generally, a trans-Sasakian manifold \((B,\psi,\xi,\eta,g,\alpha,\beta)\) is called a trans-Sasakian manifold of type \((\alpha,\beta)\) and manifolds of type \((\alpha,0),(0,\beta)\) and \((0,0)\) are called, \(\alpha-\)Sasakian, \(\beta-\)Kenmotsu and cosymplectic manifolds respectively. Since the geometry of Sasakian manifolds is very rich, it would be interesting to study different type of Riemannian maps on this structure. In this paper, we study Clairaut anti-invariant Riemannian maps from Riemannian manifolds to trans-Sasakian manifolds. The paper is organized as follows: In section 2, we give all the basic definitions and terminologies, needed throughout the paper. In section 3, we introduced Clairaut anti-invariant Riemannian maps from Riemannian manifolds to trans-Sasakian manifolds admitting horizontal Reeb vector field. Further, we study necessary and sufficient condition for a curve on base manifold to be geodesic and obtain necessary and sufficient condition for an anti-invariant Riemannian map to be Clairaut when base manifold is trans-Sasakian with horizontal Reeb vector field. We find the integrability condition for distributions of tangent bundle on base manifold. Later, we check the harmonicity of these maps. We also construct some nontrivial examples for such maps. ## 2 Preliminaries In this section, we recall the definitions of contact manifolds, trans-Sasakian manifolds and some important properties related to Riemannian maps. Let \(B\) be a \((2n+1)-\)dimensional differentiable manifold, then \(B\) is said to have an almost-contact structure \((\psi,\xi,\eta)\), if it admits a \((1,1)\) tensor field \(\psi\), a vector field called characteristic vector field or Reeb vector field \(\xi\), and a \(1-\)form \(\eta\), satisfying ([6]) \[\psi^{2}=-I+\eta\otimes\xi,\quad\psi\xi=0,\ \ \eta\circ\psi=0,\ \ \eta(\xi)=1, \tag{2.1}\] where \(I\) is the identity mapping. A Riemannian metric \(g\) on an almost-contact manifold \(B\) is said to be compatible with the almost-contact structure \((\psi,\xi,\eta)\), if for any vector fields \(W,Z\in\Gamma(TB)\), \(g\) satisfies ([6]) \[g(\psi W,\psi Z)\ =\ g(W,Z)-\eta(W)\eta(Z), \tag{2.2}\] \[g(\psi W,Z)=g(W,\psi Z),\ \eta(W)\ =\ g(W,\xi), \tag{2.3}\] the structure \((\psi,\xi,\eta,g)\) is called an almost contact metric structure. The almost contact structure \((\psi,\xi,\eta)\) is said to be normal if \(N+d\eta\otimes\xi=0\), where \(N\) is the Nijenhuis tensor of \(\psi\). If \(d\eta=\Phi\), where \(\Phi(W,Z)=g(\psi W,Z)\) is a tensor field of type \((0,2)\), then an almost contact metric structure is said to be normal contact metric structure. An almost contact metric manifold \(B\) is called a trans-Sasakian manifold of type \((\alpha,\beta)\) ([6]), if it satisfies \[(\nabla_{W}\psi)Z = \alpha(g(W,Z)\xi-\eta(Z)W)+\beta[g(\psi W,Z)\xi-\eta(Z)\psi W], \tag{2.4}\] \[(\nabla_{W}\eta)Z = -\alpha g(\psi W,Z)\xi+\beta g(\psi W,\psi Z),\] (2.5) \[\nabla_{W}\xi = -\alpha\psi W+\beta(W-\eta(W)\xi), \tag{2.6}\] where \(\alpha,\ \beta\) are smooth functions and \(\nabla\) is Levi-Civita connection of \(g\) on \(B\). 
Further, it can be seen that a trans-Sasakian manifold of type \((\alpha,0)\) is a \(\alpha-\)Sasakian manifold and a trans-Sasakian manifold of type \((0,\beta)\) is a \(\beta-\)Kenmotsu manifold. A trans-Sasakian manifold of type \((0,\ 0)\) is called a cosymplectic manifold. In particular, for \(\alpha=1,\beta=0\); and \(\alpha=0,\beta=1\), a trans-Sasakian manifold will be Sasakian and Kenmotsu manifold respectively. **Example 2.1.**_[6] Let \(B=\{(u,v,w)\in\mathbb{R}^{3},w\neq 0\}\) be a \(3-\)dimensional Riemannian manifold associated with Riemannian metric \(g_{2}\) given by_ \[g_{2}=\frac{1}{4}\begin{pmatrix}1+v^{2}&0&-v\\ 0&1&0\\ -v&0&1\end{pmatrix},\] \(1-\)_form \(\eta=\frac{1}{2}(dw-vdu)\) and linearly independent global frame \(\{E_{1},E_{2},E_{3}\}\) be defined as \(E_{1}=2\frac{\partial}{\partial v},E_{2}=\psi E_{1}=2(\frac{\partial}{\partial u }+v\frac{\partial}{\partial w}),E_{3}=2\frac{\partial}{\partial w}=\xi,\) where \(\xi\) is the characteristic vector field (Reeb vector field) and the \((1,1)-\) tensor field \(\psi\) is given by the matrix_ \[\psi=\begin{pmatrix}0&1&0\\ -1&0&0\\ 0&v&0\end{pmatrix},\] _then \(B\) is a trans-Sasakian manifold of type \((1,0)\)._ Further, let \(\pi:(M^{m},g_{1})\rightarrow(B^{b},g_{2})\) be a smooth map between smooth finite dimensional Riemannian manifolds, then the differential map \(\pi_{*}\) of \(\pi\) can be viewed as a section of bundle \(Hom(TM,\pi^{-1}TB)\to M\), where \(\pi^{-1}TB\) is the pullback bundle whose fibers at \(p\in M\) is \((\pi^{-1}TB)_{p}=T_{\pi(p)}B\). The bundle \(Hom(TM,\pi^{-1}TB)\) has a connection \(\nabla\) induced from the Levi-Civita connection \(\nabla^{M}\) and the pullback connection \(\overset{B}{\nabla}^{\pi}\), then the second fundamental form of \(\pi\) is given by ([13] ) \[(\nabla\pi_{*})(W,Z)=\overset{B}{\nabla}^{\pi}_{W}\pi_{*}Z-\pi_{*}(\nabla^{M }_{W}Z) \tag{2.7}\] for all \(W,Z\ \in\Gamma(TM)\) and \(\overset{B}{\nabla}^{\pi}_{W}\pi_{*}Z\circ\pi=\nabla^{B}_{\pi_{*}W}\pi_{*}Z\). Also, for any vector field \(W\) on \(M\) and any section \(V\) of \((range\pi*)^{\perp}\), we have \(\nabla^{\pi\perp}_{W}V\), the orthogonal projection of \(\nabla^{B}_{W}V\) on \((range\pi_{*})^{\perp}\), where \(\nabla\pi_{*}^{\perp}\) is linear connection on \((range\pi_{*})^{\perp}\) such that \(\nabla^{\pi\perp}g_{2}=0\). Now, for a Riemannian map, we have ([13]) \[\nabla^{B}_{\pi_{*}W}V=-\mathcal{A}_{V}\pi_{*}W+\nabla^{\pi\perp}_{W}V, \tag{2.8}\] where \(\mathcal{A}_{V}\pi_{*}W\) is the tangential component of \(\nabla^{B}_{\pi_{*}W}V\). At \(p\in M\), we have \(\nabla^{B}_{\pi_{*}W}V(p)\in T_{\pi(p)}B\), \(\ \mathcal{A}_{V}\pi_{*}W(p)\in\pi_{*p}(T_{p}M)\) and \(\nabla^{\pi\perp}_{W}V(p)\in(\pi_{*p}(T_{p}M))^{\perp}.\) It is easy to see that \(\mathcal{A}_{V}\pi_{*}W\) is bilinear in \(V\) and \(\pi_{*}W\), and \(\mathcal{A}_{V}\pi_{*}W\) at \(p\) depends only on \(V_{p}\) and \(\pi_{*p}W_{p}.\) By direct computations, we obtain [13] \[g_{2}(\mathcal{A}_{V}\pi_{*}W,\pi_{*}Z)=g_{2}(V,(\nabla\pi_{*})(W,Z)) \tag{2.9}\] for \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\) and \(V\in\Gamma((range\pi_{*})^{\perp})\). Since \((\nabla\pi_{*})\) is symmetric, it follows that \(\mathcal{A}_{V}\) is a symmetric linear transformation of \(range\pi_{*}\). 
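As a quick cross-check of Example 2.1 (an illustrative computation added here, not part of the original paper; the matrix conventions in the script are our choices), the following sympy script verifies that the given \(g_{2}\), \(\psi\), \(\eta\) and \(\xi\) satisfy the almost contact metric identities (2.1)-(2.3).

```python
# Symbolic check of the almost contact metric structure of Example 2.1:
# psi^2 = -I + eta (x) xi, psi(xi) = 0, eta(xi) = 1, eta o psi = 0,
# g(psi W, psi Z) = g(W, Z) - eta(W) eta(Z), and g(., xi) = eta.
import sympy as sp

u, v, w = sp.symbols('u v w')
g   = sp.Rational(1, 4) * sp.Matrix([[1 + v**2, 0, -v], [0, 1, 0], [-v, 0, 1]])
psi = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, v, 0]])   # (1,1)-tensor, column-vector convention
eta = sp.Matrix([[-v, 0, 1]]) / 2                      # components of (1/2)(dw - v du)
xi  = sp.Matrix([0, 0, 2])                             # xi = 2 d/dw

I3 = sp.eye(3)
assert sp.simplify(psi * psi - (-I3 + xi * eta)) == sp.zeros(3)          # (2.1)
assert psi * xi == sp.zeros(3, 1)                                        # psi(xi) = 0
assert sp.simplify((eta * xi)[0]) == 1                                   # eta(xi) = 1
assert eta * psi == sp.zeros(1, 3)                                       # eta o psi = 0
assert sp.simplify(psi.T * g * psi - (g - eta.T * eta)) == sp.zeros(3)   # (2.2)
assert sp.simplify(g * xi - eta.T) == sp.zeros(3, 1)                     # (2.3)
print("Example 2.1 satisfies (2.1)-(2.3)")
```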
Moreover, let \(\pi:(M,g_{1})\rightarrow(B,g_{2})\) be a Riemannian map between Riemannian manifolds; then \(\pi\) is an umbilical Riemannian map if and only if [13] \[(\nabla\pi_{*})(W,Z)=g_{1}(W,Z)H_{2} \tag{2.10}\] for \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\), where \(H_{2}\) is a nowhere-zero vector field on \((range\pi_{*})^{\perp}\). Also, the mean curvatures of the vertical distribution \(\mathcal{V}\) and the horizontal distribution \(\mathcal{H}\) are defined as ([13]) \[\varrho^{\mathcal{V}}=\frac{1}{q}\sum_{i=1}^{q}\mathcal{H}(\nabla_{e_{i}}e_{i}),\ \ \ \ \varrho^{\mathcal{H}}=\frac{1}{m-q}\sum_{j=1}^{m-q}\mathcal{V}(\nabla_{E_{j}}E_{j}), \tag{2.11}\] where \(\{e_{i}\}_{i=1}^{q}\) and \(\{E_{j}\}_{j=1}^{m-q}\) are local frames of \(\mathcal{V}\) and \(\mathcal{H}\) respectively. A distribution on \(M\) is said to be minimal if its mean curvature vanishes at each point of \(M\). **Lemma 2.2**.: _[_13_]_ _Let \(\pi\) be a Riemannian map from a Riemannian manifold \((M,g_{1})\) to a Riemannian manifold \((B,g_{2})\). Then, \(\forall\)\(W,Y,Z\in\Gamma((ker\pi_{*})^{\perp})\), we have_ \[g_{2}((\nabla\pi_{*})(W,Y),\pi_{*}Z)=0.\] **Definition 2.3**.: _[_13_]_ _A Riemannian map \(\pi:(M,g_{1})\rightarrow(B,g_{2})\) between Riemannian manifolds is called a Clairaut Riemannian map if there is a function \(r:B\to R^{+}\) such that for every geodesic \(\Omega\) on \(B\), the function \((r\circ\Omega)\sin\theta(s)\) is constant, where \(\pi_{*}Z\in\Gamma(range\pi_{*})\) and \(V\in\Gamma(range\pi_{*})^{\perp}\) are components of \(\dot{\Omega}(s)\), and \(\theta(s)\) is the angle between \(\dot{\Omega}(s)\) and \(V\)._ **Definition 2.4**.: _[_19_]_ _Let \(\pi:(M,g_{1})\rightarrow(B,g_{2})\) be a Riemannian map between Riemannian manifolds such that \(range\pi_{*}\) is connected and \((range\pi_{*})^{\perp}\) is totally geodesic, and let \(\gamma,\Omega=\pi\circ\gamma\) be geodesics on \(M\) and \(B\) respectively. Then, \(\pi\) is a Clairaut Riemannian map with \(r=e^{h}\) if and only if \(\pi\) is an umbilical map and has \(H=-\nabla^{B}h\), where \(h\) is a smooth function on \(B\) and \(H\) is the mean curvature vector field of \(range\pi_{*}\)._ ## 3 Clairaut anti-invariant Riemannian maps to trans-Sasakian manifolds In this section, we define Clairaut anti-invariant Riemannian maps from a Riemannian manifold to a trans-Sasakian manifold and discuss the geometry of such maps. Throughout this section, we consider the Reeb vector field to lie in the horizontal space \(\mathcal{H}\) of \(TM\) and take \((range\pi_{*})^{\perp}\) to be a totally geodesic distribution. **Definition 3.1**.: _Let \(\pi\) be a Riemannian map from a Riemannian manifold \((M,g_{1})\) to an almost contact metric manifold \((B,g_{2},\psi,\eta,\xi)\). Then we say that \(\pi\) is an anti-invariant Riemannian map at \(p\in M\) if \(\psi(range\pi_{*p})\subset(range\pi_{*p})^{\perp}\). If \(\pi\) is an anti-invariant Riemannian map for all \(p\in M\), then \(\pi\) is called an anti-invariant Riemannian map._ In this case, the horizontal distribution \((range\pi_{*})^{\perp}\) can be decomposed as \[(range\pi_{*})^{\perp}=\psi range\pi_{*}\oplus\mu, \tag{3.1}\] where \(\mu\) is the orthogonal complementary distribution of \(\psi range\pi_{*}\) in \((range\pi_{*})^{\perp}\) and is also invariant with respect to \(\psi\). The map \(\pi\) admits a vertical Reeb vector field if \(\xi\in range\pi_{*}\), whereas it admits a horizontal Reeb vector field \(\xi\) if \(\xi\in(range\pi_{*})^{\perp}\). It is clear that in the case of a horizontal Reeb vector field, \(\xi\in\mu\).
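To illustrate the classical relation \(r\sin\theta=\mathrm{const}\) entering Definition 2.3, the following numerical sketch (our illustration; the choice of the unit sphere as the surface of revolution, the tilt angle, and all names are assumptions made only for this example) checks the relation along a great circle, which is a geodesic of the sphere.

```python
# Numerical check of Clairaut's relation r*sin(theta) = const along a great circle
# of the unit sphere, viewed as a surface of revolution about the z-axis.
import numpy as np

i = np.radians(40.0)                                   # inclination of the great circle
t = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
e1 = np.array([1.0, 0.0, 0.0])
tilt = np.array([0.0, np.cos(i), np.sin(i)])           # second unit vector spanning the circle
gamma = np.outer(np.cos(t), e1) + np.outer(np.sin(t), tilt)   # unit-speed geodesic on S^2
vel = -np.outer(np.sin(t), e1) + np.outer(np.cos(t), tilt)    # its unit tangent vectors

x, y, z = gamma.T
r = np.sqrt(x**2 + y**2)                               # distance to the rotation (z-)axis
polar, az = np.arccos(z), np.arctan2(y, x)
# unit tangent of the meridian through each point (d/d(polar angle) direction)
meridian = np.stack([np.cos(polar)*np.cos(az),
                     np.cos(polar)*np.sin(az),
                     -np.sin(polar)], axis=1)
cos_theta = np.einsum('ij,ij->i', vel, meridian)       # theta: angle with the meridian
sin_theta = np.sqrt(np.clip(1 - cos_theta**2, 0, 1))
print(np.ptp(r * sin_theta))                           # ~1e-15: r*sin(theta) is constant
```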
For any \(V\in(range\pi_{*})^{\perp}\), we can have \[\psi V=\mathcal{B}V+\mathcal{C}V, \tag{3.2}\] where \(\mathcal{B}V\in\Gamma(range\pi_{*})\) and \(\mathcal{C}V\in\Gamma((range\pi_{*})^{\perp})\). **Definition 3.2**.: _An anti-invariant Riemannian map from a Riemannian manifold to a contact manifold is said to be Clairaut if it satisfies Definition 2.1._ **Theorem 3.3**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g,\psi,\eta,\xi)\) of type \((\alpha,\beta)\) with horizontal Reeb vector field \(\xi\) and \(\gamma:J\subset\mathbb{R}\to M\) be a geodesic curve on \(M\), then the curve \(\Omega=\pi\circ\gamma\) is a geodesic on \(B\) if and only if_ \[-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{CU}\pi_{*}W+\pi_{*}(\mathcal{ H}\nabla^{M}_{W}Z)+\nabla^{B}_{U}\pi_{*}Z+\eta(U)[\alpha\pi_{*}W+\beta\pi_{*}Z]=0, \tag{3.3}\] \[\nabla^{\perp}_{W}\psi(\pi_{*}W)+\nabla^{\perp}_{W}\mathcal{C}Z+ \nabla^{\perp}_{U}\psi(\pi_{*}W)+\nabla^{\perp}_{U}\mathcal{C}U+(\nabla\pi_{* })(W,Z)-\alpha||\dot{\Omega}||^{2}\xi+\eta(U)[\alpha U\\ +\beta(\psi(\pi_{*}W)+\mathcal{C}U)]=0 \tag{3.4}\] _for any \(U\in\Gamma((rangeF_{*})^{\perp})\) and \(W,Z\ \in\Gamma((kerF_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\) with \(\pi_{*}W\) and \(U\) as vertical and horizontal components of \(\dot{\Omega}(s)\)._ Proof.: Let \(U\in\Gamma((rangeF_{*})^{\perp})\) and \(W\ \in\Gamma((kerF_{*})^{\perp})\) such that \(\dot{\Omega}(s)=\pi_{*}W(s)+U(s)\). Since \(B\) is a trans-Sasakian manifold, using (2.4), we get \[\psi\nabla^{B}_{\dot{\Omega}}\dot{\Omega}=\nabla^{B}_{\dot{\Omega}}\psi\dot{ \Omega}-\alpha[g_{2}(\dot{\Omega},\dot{\Omega})\xi-\eta(\dot{\Omega})\dot{ \Omega}]-\beta[g_{2}(\psi\dot{\Omega},\dot{\Omega})\xi-\eta(\dot{\Omega})\psi \dot{\Omega}].\] Since \(\dot{\Omega}=\pi_{*}W+U\), the above equation can be rewritten as \[\psi\nabla^{B}_{\dot{\Omega}}\dot{\Omega}=\nabla^{B}_{\pi_{*}W} \psi\pi_{*}W+\nabla^{B}_{\pi_{*}W}\psi U+\nabla^{B}_{U}\psi\pi_{*}W+\nabla^{ B}_{U}\psi U-\alpha[||\dot{\Omega}||^{2}\xi-\eta(U)\pi_{*}W-\eta(U)U]\\ +\beta[\eta(U)\psi(\pi_{*}W)+\eta(U)\psi U]. \tag{3.5}\] Using (2.8) and (3.2) in (3.5), we get \[\begin{split}\psi\nabla^{B}_{\dot{\Omega}}\dot{\Omega}& =-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W+\nabla^{\perp}_{W}\psi(\pi_{ *}W)-\mathcal{A}_{CU}\pi_{*}W+\nabla^{\perp}_{W}\mathcal{C}U+\nabla^{B}_{\pi _{*}W}\mathcal{B}U+\nabla^{B}_{U}\psi(\pi_{*}W)\\ &+\nabla^{B}_{U}\mathcal{B}U+\nabla^{B}_{U}\mathcal{C}U-\alpha[ ||\dot{\Omega}||^{2}\xi-\eta(U)\pi_{*}W-\eta(U)U]+\beta[\eta(U)\psi(\pi_{*}W)+ \eta(U)\psi U],\\ &=-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{CU}\pi_{*}W+ \nabla^{\perp}_{W}\psi(\pi_{*}W)+\nabla^{\perp}_{W}\mathcal{C}U+\nabla^{ \perp}_{U}\psi(\pi_{*}W)+\nabla^{\perp}_{U}\mathcal{C}U\\ &+\nabla^{B}_{\pi_{*}W}\mathcal{B}U+\nabla^{B}_{U}\mathcal{B}U- \alpha[||\dot{\Omega}||^{2}\xi-\eta(U)\pi_{*}W-\eta(U)U]+\beta[\eta(U)\psi(\pi _{*}W)+\eta(U)\psi U].\end{split} \tag{3.6}\] Since \(g(\mathcal{B}U,V)=0\) for any \(V\in\Gamma((range\pi_{*})^{\perp})\), therefore \(\nabla^{B}_{U}\mathcal{B}U\in\Gamma(range\pi_{*})\). 
Let \(Z\in\Gamma((ker\pi)^{\perp})\) such that \(\ \pi_{*}Z=\mathcal{B}U\), then using (2.7) in (3.6), we have \[\begin{split}\psi\nabla^{B}_{\dot{\Omega}}\dot{\Omega}& =-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{CU}\pi_{*}W+\nabla^{\perp}_{W} \psi(\pi_{*}W)+\nabla^{\perp}_{W}\mathcal{C}U+\nabla^{\perp}_{U}\psi(\pi_{*}W) +\nabla^{\perp}_{U}\mathcal{C}U\\ &+(\nabla\pi_{*})(W,Z)+\pi_{*}(\mathcal{H}\nabla^{M}_{W}Z)+\nabla^ {B}_{U}\pi_{*}Z-\alpha[||\dot{\Omega}||^{2}\xi-\eta(U)\pi_{*}W-\eta(U)U]\\ &+\beta\eta(U)\psi(\pi_{*}W)+\beta\eta(U)\psi U.\end{split} \tag{3.7}\] Since, \(\Omega\) is geodesic, \(\nabla^{B}_{\dot{\Omega}}\dot{\Omega}=0\). Separating vertical and horizontal parts of the above equation we get (3.3) and (3.4). **Corollary 3.4**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B,g_{2},\psi,\eta,\xi)\) of type \((\alpha,0)\) with horizontal Reeb vector field \(\xi\) and \(\gamma:J\subset\mathbb{R}\to M\) be a geodesic on \(M\), then the curve \(\Omega=\pi\circ\gamma\) is a geodesic on \(B\) if and only if_ \[-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{CU}\pi_{*}W+\pi_{*}(\mathcal{H }\nabla^{M}_{W}Z)+\nabla^{B}_{U}\pi_{*}Z+\eta(U)\alpha\pi_{*}W=0,\] \[\nabla^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{W}\mathcal{C}Z+\nabla^{\pi \perp}_{U}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\mathcal{C}U+(\nabla\pi_{*})(W,Z) +\alpha[\eta(U)U-||\dot{\Omega}||^{2}\xi]\ =\ 0\] _for any \(U\in\Gamma((rangeF_{*})^{\perp})\) and \(W,Z\ \in\Gamma((kerF_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\) with \(\pi_{*}W\) and \(U\) as vertical and horizontal components of \(\dot{\Omega}(s)\)._ **Corollary 3.5**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B,g_{2},\psi,\eta,\xi)\) of type \((0,\beta)\) with horizontal Reeb vector field \(\xi\) and \(\gamma:J\subset\mathbb{R}\to M\) be a geodesic on \(M\), then the curve \(\Omega=\pi\circ\gamma\) is a geodesic on \(B\) if and only if_ \[-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{\mathcal{C}U}\pi_{*}W+\pi_{*} (\mathcal{H}\nabla^{M}_{W}Z)+\nabla^{B}_{U}\pi_{*}Z+\beta\eta(U)\pi_{*}Z]=0,\] \[\nabla^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{W}\mathcal{C}Z+\nabla^ {\pi\perp}_{U}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\mathcal{C}U+(\nabla\pi_{*}) (W,Z)+\beta\eta(U)[\psi(\pi_{*}W)+\mathcal{C}U]\,=\,0\] _for any \(U\in\Gamma((rangeF_{*})^{\perp})\) and \(W,Z\ \in\Gamma((kerF_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\) with \(\pi_{*}W\) and \(U\) as vertical and horizontal components of \(\dot{\Omega}(s)\)._ **Corollary 3.6**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B,g_{2},\psi,\eta,\xi)\) of type \((0,0)\) with horizontal Reeb vector field \(\xi\) and \(\gamma:J\subset\mathbb{R}\to M\) be a geodesic on \(M\), then the curve \(\Omega=\pi\circ\gamma\) is a geodesic on \(B\) if and only if_ \[-\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W-\mathcal{A}_{CU}\pi_{*}W+\pi_{*}(\mathcal{ H}\nabla^{M}_{W}Z)+\nabla^{B}_{U}\pi_{*}Z=0,\] \[\nabla^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{W}\mathcal{C}Z+\nabla ^{\pi\perp}_{U}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\mathcal{C}U+(\nabla\pi_{ *})(W,Z)=0 \tag{3.8}\] _for any \(U\in\Gamma((rangeF_{*})^{\perp})\) and \(W,Z\ \in\Gamma((kerF_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\) with \(\pi_{*}W\) and \(U\) as 
vertical and horizontal components of \(\dot{\Omega}(s)\)._ **Theorem 3.7**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) of type \((\alpha,\beta)\) with horizontal Reeb vector field \(\xi\). Let \(\gamma\) and \(\Omega\) be geodesics on \(M\) and \(B\) respectively. Then \(\pi\) is a Clairaut anti-invariant Riemannian map with \(r=e^{h}\) if and only if_ \[g_{2}(\pi_{*}W,\pi_{*}W)\frac{d(h\circ\Omega)}{ds} =g_{2}(\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W,\pi_{*}Z)-g_{2}(\nabla ^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\psi(\pi_{*}W),\mathcal{C}U) \tag{3.9}\] \[-\eta(U)[\alpha g_{1}(W,Z)+\beta||\psi U||^{2}],\] _where \(U\in\Gamma((range\pi_{*})^{\perp})\) and \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\). Also \(\pi_{*}W\) and \(U\) are vertical and horizontal part of \(\dot{\Omega}(s)\) respectively and \(h\) is a smooth function on \(B\)._ Proof.: Let \(\gamma:J\subset\mathbb{R}\to M\) and \(\Omega=\pi\circ\gamma\) be geodesics on \(M\) and \(B\) respectively such that \(\dot{\Omega}(s)=\pi_{*}W(s)+U(s)\), where \(\pi_{*}W\in\Gamma(range\pi_{*})\) and \(U\in\Gamma((range\pi_{*})^{\perp})\). Considering \(||\dot{\Omega}(s)||^{2}=c\), we have \[g_{2\dot{\Omega}(s)}(U,U) =ccos^{2}\theta(s), \tag{3.10}\] \[g_{2\dot{\Omega}(s)}(\pi_{*}W,\pi_{*}W) =csin^{2}\theta(s), \tag{3.11}\] where \(\theta(s)\in[0,\pi]\) is the angle between \(\dot{\Omega}\) and \(U\). Differentiating (3.10) along \(\Omega\), we get \[\frac{d}{ds}g_{2}(U,U)=2g_{2}(\nabla^{B}_{\dot{\Omega}}U,U)=-2ccos\theta sin \theta\frac{d\theta}{ds}. \tag{3.12}\] Since \(B\) is a trans-Sasakian manifold, (3.12) can be written as \[2g_{2}(\nabla_{\dot{\Omega}}\psi U,\psi U)=2g_{2}(\nabla^{B}_{\pi_{*}W+U}\psi U,\psi U)=-2ccos\theta sin\theta\frac{d\theta}{ds}. \tag{3.13}\] Now, using (3.2) and (2.8) in (3.13), we get \[g_{2}(\nabla^{B}_{\pi_{*}W}\mathcal{B}U-\mathcal{A}_{CU}\pi_{*}W+\nabla^{\pi\perp }_{W}\mathcal{C}U+\nabla^{B}_{U}\mathcal{B}U+\nabla^{\perp}_{U}\mathcal{C}U, \psi U)=-cos\theta sin\theta\frac{d\theta}{ds}. \tag{3.14}\] Let \(Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\), using (2.7) in (3.14) we have \[g_{2}((\nabla^{B}\pi_{*})(W,Z)+\pi_{*}(\nabla^{M}_{W}Z)-\mathcal{A}_{CU}\pi_{* }W+\nabla^{\pi\perp}_{W}\mathcal{C}U+\nabla^{B}_{U}\pi_{*}Z+\nabla^{\perp}_{U }\mathcal{C}U,\psi U)=-cos\theta sin\theta\frac{d\theta}{ds}. \tag{3.15}\] Using (3.3) and (3.4) in (3.15) and simplifying, we obtain \[-cos\theta sin\theta\frac{d\theta}{ds}=g_{2}(\mathcal{A}_{\psi \pi_{*}W}\pi_{*}W,\pi_{*}Z)-g_{2}(\nabla^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^ {\pi\perp}_{U}\psi(\pi_{*}W),\mathcal{C}U)\\ -\alpha\eta(U)g_{2}(W,Z)-\beta\eta(U)||\psi U||^{2}. \tag{3.16}\] Further, \(\pi\) is a Clairaut Riemannian map with \(r=e^{h}\) if and only if \[\frac{d}{ds}(e^{h\circ\Omega}sin\theta)=0,\] This implies, \[e^{h\circ\Omega}sin\theta\frac{d(h\circ\Omega)}{ds}+e^{h\circ\Omega}cos\theta \frac{d\theta}{ds}=0,\] \[csin^{2}\theta\frac{d(h\circ\Omega)}{ds}=-csin\theta cos\theta\frac{d\theta}{ ds}. \tag{3.17}\] From (3.11), (3.16) and (3.17), we get (3.9). **Corollary 3.8**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) of type \((\alpha,0)\) with horizontal Reeb vector field \(\xi\xi\). Let \(\gamma\) and \(\Omega\) be geodesic on \(M\) and \(B\) respectively. 
Therefore \(\pi\) is a Clairaut anti-invariant Riemannian map with \(r=e^{h}\) if and only if_ \[g_{2}(\pi_{*}W,\pi_{*}W)\frac{d(h\circ\Omega)}{ds} =g_{2}(\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W,\pi_{*}Z)-g_{2}(\nabla ^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\psi(\pi_{*}W),\mathcal{C}U)\] \[-\alpha\eta(U)g_{1}(W,Z),\] _where \(U\in\Gamma((range\pi_{*})^{\perp})\) and \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\). Also \(\pi_{*}W\) and \(U\) are vertical and horizontal part of \(\dot{\Omega}(s)\) respectively and \(h\) is a smooth function on \(B\)._ **Corollary 3.9**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) of type \((0,\beta)\) with horizontal Reeb vector field \(\xi\). Let \(\gamma\) and \(\Omega\) be geodesic on \(M\) and \(B\) respectively. Then \(\pi\) is a Clairaut anti-invariant Riemannian map with \(r=e^{h}\) if and only if_ \[g_{2}(\pi_{*}W,\pi_{*}W)\frac{d(h\circ\Omega)}{ds} =g_{2}(\mathcal{A}_{\psi\pi_{*}W}\pi_{*}W,\pi_{*}Z)-g_{2}(\nabla ^{\pi\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi\perp}_{U}\psi(\pi_{*}W),\mathcal{C}U)\] \[-\beta\eta(U)||\psi U||^{2},\] _where \(U\in\Gamma((range\pi_{*})^{\perp})\) and \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\). Also \(\pi_{*}W\) and \(U\) are vertical and horizontal part of \(\dot{\Omega}(s)\) respectively and \(h\) is a smooth function on \(B\)._ **Corollary 3.10**.: _Let \(\pi:M^{m}\to B^{b}\) be an anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) of type \((0,0)\) with horizontal Reeb vector field \(\xi\). Let \(\gamma\) and \(\Omega\) be geodesic on \(M\) and \(B\) respectively. Then \(\pi\) is a Clairaut anti-invariant Riemannian map with \(r=e^{h}\) if and only if_ \[g_{2}(\pi_{*}W,\pi_{*}W)\frac{d(h\circ\Omega)}{ds}=g_{2}(\mathcal{A}_{\psi\pi_ {*}W}\pi_{*}W,\pi_{*}Z)-g_{2}(\nabla^{\perp}_{W}\psi(\pi_{*}W)+\nabla^{\pi_{ *}\perp}_{U}\psi(\pi_{*}W),\mathcal{C}U),\] _where \(U\in\Gamma((range\pi_{*})^{\perp})\) and \(W,Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\). Also \(\pi_{*}W\) and \(U\) are vertical and horizontal part of \(\dot{\Omega}(s)\) respectively and \(h\) is a smooth function on \(B\)._ **Theorem 3.11**.: _Let \(\pi:M^{m}\to B^{b}\) be a Clairaut anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) having horizontal Reeb vector field \(\xi\) with \(r=e^{h}\). Then either \(dim(range\pi_{*})=1\) or \(h\) is constant in \(\psi(range\pi_{*})\)._ Proof.: Since \(\pi\) is a Clairaut anti-invariant Riemannian map admitting horizontal Reeb vector field with \(r=e^{h}\), we have \[(\nabla\pi_{*})(W,Z)=-g(W,Z)\nabla^{B}h \tag{3.18}\] for any \(W,Z\in\Gamma(ker\pi_{*})^{\perp}\). Taking inner product with \(\psi\pi_{*}Y\in\Gamma((range\pi_{*})^{\perp})\) and using (2.7), we get \[g_{2}(\nabla^{\frac{B}{\pi_{*}}}_{W}\pi_{*}Z,\psi\pi_{*}Y)=-g_{1}(W,Z)g_{2}( \nabla^{B}h,\psi\pi_{*}Y).\] Also, from above equation we have \[g_{2}(\nabla^{\frac{B}{\pi_{W}}}_{W}\psi\pi_{*}Y,\pi_{*}Z)=g_{1}(W,Z)g_{2}( \nabla^{B}h,\psi\pi_{*}Y). \tag{3.19}\] Since \(B\) is a trans-Sasakian manifold, using (2.4) in (3.19), we get \[g_{2}(\nabla^{\frac{B}{\pi_{*}}}_{W}\pi_{*}Y,\psi\pi_{*}Z)=-g_{1}(W,Z)g_{2}( \nabla^{B}h,\psi\pi_{*}Y). 
\tag{3.20}\] Again, using (3.18), we have \[g_{2}(\nabla^{\frac{B}{\pi_{*}}}_{W}\pi_{*}Y,\psi\pi_{*}Z)=-g_{1}(W,Y)g_{2}(\nabla^{B}h,\psi\pi_{*}Z), \tag{3.21}\] Equating (3.20) and (3.21), we obtain \[g_{1}(W,Z)g_{2}(\nabla^{B}h,\psi\pi_{*}Y)=g_{1}(W,Y)g_{2}(\nabla^{B}h,\psi\pi_{*}Z).\] Putting \(W=Z\) in the above equation, we get \[||W||^{2}g_{2}(\nabla^{B}h,\psi\pi_{*}Y)=g_{1}(W,Y)g_{2}(\nabla^{B}h,\psi\pi_{*}W). \tag{3.22}\] Interchanging \(W\) and \(Y\) in the above equation, we have \[||Y||^{2}g_{2}(\nabla^{B}h,\psi\pi_{*}W)=g_{1}(W,Y)g_{2}(\nabla^{B}h,\psi\pi_{*}Y). \tag{3.23}\] From (3.22) and (3.23), we obtain \[g_{2}(\nabla^{B}h,\psi\pi_{*}W)\Big{[}1-\frac{g_{1}(W,Y)g_{1}(W,Y)}{||W||^{2}||Y||^{2}}\Big{]}=0. \tag{3.24}\] From (3.24), we conclude that either \(dim((ker\pi_{*})^{\perp})=1\) or \(h\) is constant in \(\psi\pi_{*}W\). Since there is a linear isometry between \((ker\pi_{*})^{\perp}\) and \(range\pi_{*}\), we have the theorem. **Theorem 3.12**.: _Let \(\pi:M^{m}\to B^{b}\) be a Clairaut anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) having horizontal Reeb vector field \(\xi\). If \(dim(range\pi_{*})>1\), then \(range\pi_{*}\) is minimal._ Proof.: Let \(Y\in\Gamma((ker\pi_{*})^{\perp})\), then we have \[(\nabla\pi_{*})(Y,Y)=g(Y,Y)H_{2}. \tag{3.25}\] If \(\pi_{*}Z\in\Gamma((range\pi_{*})^{\perp})\), using (2.7), the above equation can be written as \[g_{2}(\pi_{*}Y,\nabla_{Y}^{\pi}\psi\pi_{*}Z)=-g_{1}(Y,Y)g_{2}(H_{2},\psi\pi_{*}Z). \tag{3.26}\] Since \(B\) is a trans-Sasakian manifold, simplifying (3.26) we have \[g_{2}(\psi\pi_{*}Y,\nabla_{Y}^{\pi}\pi_{*}Z)=g_{1}(Y,Y)g_{2}(H_{2},\psi\pi_{*}Z). \tag{3.27}\] Again, from (3.25) and (3.27) we get \[g_{1}(Y,Z)g_{2}(H_{2},\psi\pi_{*}Y)=g_{1}(Y,Y)g_{2}(H_{2},\psi\pi_{*}Z). \tag{3.28}\] Interchanging \(Y\) and \(Z\), we have \[g_{1}(Y,Z)g_{2}(H_{2},\psi\pi_{*}Z)=g_{1}(Z,Z)g_{2}(H_{2},\psi\pi_{*}Y). \tag{3.29}\] Since \(dim(range\pi_{*})>1\), from (3.28) and (3.29) we conclude the required result. **Theorem 3.13**.: _Let \(\pi:M^{m}\to B^{b}\) be a Clairaut anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) having horizontal Reeb vector field \(\xi\). If \(range\pi_{*}\) is integrable, then \(g_{2}(\nabla_{W}^{\pi\perp}\psi(\pi_{*}Y)-\nabla_{Y}^{\pi\perp}\psi(\pi_{*}W),\mathcal{C}U)=0,\) where \(W,Y\in\Gamma((ker\pi_{*})^{\perp})\) and \(U\in\Gamma((range\pi_{*})^{\perp})\)._ Proof.: Let \(W,Y\in\Gamma((ker\pi_{*})^{\perp})\) and \(U\in\Gamma((range\pi_{*})^{\perp})\). Then we have \[g_{2}([\pi_{*}W,\pi_{*}Y],U)=g_{2}(\nabla_{\pi_{*}W}^{B}\pi_{*}Y-\nabla_{\pi_{*}Y}^{B}\pi_{*}W,U). \tag{3.30}\] Since \(B\) is a trans-Sasakian manifold, from (3.6), (2.4) and (3.30), we get \[g_{2}([\pi_{*}W,\pi_{*}Y],U)=g_{2}(\nabla_{\pi_{*}W}^{B}\psi(\pi_{*}Y)-\nabla_{\pi_{*}Y}^{B}\psi(\pi_{*}W),\psi U).\] Using (2.8) and (3.2) in the above equation, we obtain \[g_{2}([\pi_{*}W,\pi_{*}Y],U)=-g_{2}(\mathcal{A}_{\psi(\pi_{*}Y)}\pi_{*}W,\mathcal{B}U)+g_{2}(\mathcal{A}_{\psi(\pi_{*}W)}\pi_{*}Y,\mathcal{B}U)+g_{2}(\nabla_{W}^{\pi\perp}\psi(\pi_{*}Y)-\nabla_{Y}^{\pi\perp}\psi(\pi_{*}W),\mathcal{C}U). 
\tag{3.31}\] Assuming \(Z\in\Gamma((ker\pi_{*})^{\perp})\) such that \(\pi_{*}Z=\mathcal{B}U\) and using (3.4), (3.31) can be rewritten as \[g_{2}([\pi_{*}W,\pi_{*}Y],U) =-g_{2}(\psi(\pi_{*}Y),(\nabla\pi_{*})(W,Z))+g_{2}(\psi(\pi_{*}W),(\nabla\pi_{*})(Y,Z))\] \[+g_{2}(\nabla_{W}^{\pi\perp}\psi(\pi_{*}Y)-\nabla_{Y}^{\pi\perp} \psi(\pi_{*}W),\mathcal{C}U).\] Since \(\pi\) is a Clairaut Riemannian map, using Definition 2.2. in above equation, we get \[g_{2}([\pi_{*}W,\pi_{*}Y],U) =g_{2}(\psi(\pi_{*}Y),\nabla^{B}h)[g_{1}(W,Z)-g_{1}(Y,Z)] \tag{3.32}\] \[+g_{2}(\nabla_{W}^{\pi\perp}\psi(\pi_{*}Y)-\nabla_{Y}^{\pi\perp} \psi(\pi_{*}W),\mathcal{C}U).\] Since \(dim(range\pi_{*})>1\), using Theorem 3.3 in (3.32), we get the required result. **Theorem 3.14**.: _Let \(\pi:M^{m}\to B^{b}\) be a Clairaut anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) having horizontal Reeb vector field \(\xi\). Then \((range\pi_{*})^{\perp}\) is integrable._ Proof.: Let \(U,V\in\Gamma((range\pi_{*})^{\perp})\) and \(W\in\Gamma(range\pi_{*})\), then we can write \[g_{2}([U,V],W)=g_{2}(\nabla_{U}V-\nabla_{V}U,W). \tag{3.33}\] Since, \((range\pi_{*})^{\perp}\) is a totally geodesic distribution so we have the required result. **Theorem 3.15**.: _Let \(\pi:M^{m}\to B^{b}\) be a Clairaut anti-invariant Riemannian map from a Riemannian manifold \((M^{m},g_{1})\) to a trans-Sasakian manifold \((B^{b},g_{2},\psi,\eta,\xi)\) having horizontal Reeb vector field \(\xi\) and \(dim(range\pi_{*})>1\). Then, \(\pi\) is harmonic if and only if \(ker\pi_{*}\) is minimal._ Proof.: Let \(\{Z_{i}\}_{i=1}^{r}\) and \(\{Z_{i}\}_{i=r+1}^{m}\) be orthonormal basis of \(ker\pi_{*}\) and \((ker\pi_{*})^{\perp}\) respectively, then we have \[\begin{split} trace(\nabla\pi_{*})&=\sum_{i=1}^{r}( \nabla\pi_{*})(Z_{i},Z_{i})+\sum_{i=r+1}^{m}(\nabla\pi_{*})(Z_{i},Z_{i})\\ &=\sum_{i=1}^{r}(\nabla\pi_{*})(Z_{i},Z_{i})+\sum_{i=r+1}^{m}g_{2 }((\nabla\pi_{*})(Z_{i},Z_{i}),\pi_{*}Z_{i})\pi_{*}Z_{i}\\ &+\sum_{i=r+1}^{m}\sum_{j=1}^{s}g_{2}((\nabla\pi_{*})(Z_{i},Z_{ i}),\mu_{j})\mu_{j}+\sum_{i=r+1}^{m}g_{2}((\nabla\pi_{*})(Z_{i},Z_{i}),\psi( \pi_{*}Z_{i}))\psi(\pi_{*}Z_{i}),\end{split} \tag{3.34}\] where \(\{\pi_{*}Z_{i}\}_{i=r+1}^{m}\) and \(\{\mu_{j}\}_{j=1}^{s}\) are orthonormal basis of \(\Gamma(range\pi_{*})\) and \(\Gamma(\mu)\) respectively, and \(b=2m+s\). 
Using lemma 2.1 and (2.7) in (3.34), we get \[\begin{split} trace(\nabla\pi_{*})&=\sum_{i=1}^{r}( \nabla_{Z_{i}}^{x}\pi_{*}Z_{i}-\pi_{*}(\nabla_{Z_{i}}^{M}Z_{i}))+\sum_{i=r+1} ^{m}\sum_{j=1}^{s}g_{2}((\nabla\pi_{*})(Z_{i},Z_{i}),\mu_{j})\mu_{j}\\ &+\sum_{i=r+1}^{m}g_{2}(\nabla_{Z_{i}}^{x}\pi_{*}Z_{i},\psi(\pi_ {*}Z_{i}))\psi(\pi_{*}Z_{i}).\end{split} \tag{3.35}\] Since \(B\) is a trans-Sasakian manifold, using (2.4) in above equation, we have \[\begin{split} trace(\nabla\pi_{*})&=-\sum_{i=1}^{r} \pi_{*}(\nabla_{Z_{i}}^{M}Z_{i}))+\sum_{i=r+1}^{m}\sum_{j=1}^{s}g_{2}((\nabla \pi_{*})(Z_{i},Z_{i}),\mu_{j})\mu_{j}\\ &+\sum_{i=r+1}^{m}[g_{2}(-\psi\nabla_{Z_{i}}^{x}\psi(\pi_{*}Z_{i} ),\psi(\pi_{*}Z_{i}))\psi(\pi_{*}Z_{i})-\beta g_{2}(\pi_{*}Z_{i},\pi_{*}Z_{i})g_ {2}(\xi,\psi(\pi_{*}Z_{i}))\psi(\pi_{*}Z_{i})],\\ &=-\sum_{i=1}^{r}\pi_{*}(\nabla_{Z_{i}}^{M}Z_{i}))+\sum_{i=r+1}^{m }\sum_{j=1}^{s}g_{2}((\nabla\pi_{*})(Z_{i},Z_{i}),\mu_{j})\mu_{j}\\ &+\sum_{i=r+1}^{m}g_{2}(\nabla_{Z_{i}}^{x}\pi_{*}Z_{i},\psi(\pi_{ *}Z_{i}))\psi(\pi_{*}Z_{i}).\end{split} \tag{3.36}\] Further, using (2.10) and (2.11) in (3.36), we get \[\begin{split} trace(\nabla\pi_{*})&=-r\pi_{*}( \varrho^{\nu})+\sum_{i=r+1}^{m}\sum_{j=1}^{s}g_{2}(H_{2}g_{1}(Z_{i},Z_{i}),\mu_ {j})\mu_{j}\\ &+(m-r)\sum_{i=r+1}^{m}g_{2}(H_{2},\psi(\pi_{*}Z_{i}))\psi(\pi_{* }Z_{i}),\end{split} \tag{3.37}\] where, \(\varrho^{\mathcal{V}}\) is mean curvature of \(ker\pi_{*}\). Since \(dim(range\pi_{*})>1\), from theorem 3.4 and above equation, we get \[trace(\nabla\pi_{*})=-r\pi_{*}(\varrho^{\mathcal{V}}). \tag{3.38}\] Thus, \(\pi\) is harmonic if and only if \(ker\pi_{*}\) is minimal. **Example 3.16**.: _Let \(\pi:M\to B\) be a smooth map defined as_ \[\pi(x,y,z)=(0,x+y,0),\] _where \(M=\{(x,y,z)\in\mathbb{R}^{3},x,y,z\neq 0\}\) is a Riemannian manifold with Riemannian metric_ \[g_{1}=\frac{1}{4}\begin{pmatrix}\frac{3}{2}&\frac{1}{2}&0\\ \frac{1}{2}&\frac{3}{2}&0\\ 0&0&1\end{pmatrix}\] _on \(M\) and \(B=\{(x,y,z)\in\mathbb{R}^{3},z\neq 0\}\) is a trans-Sasakian manifold with contact structure given by Example 2.1., then we have_ \[(ker\pi_{*})=span\big{\{}e_{1}-e_{2},e_{3}\big{\}},\] \[(ker\pi_{*})^{\perp}=span\{Z=e_{1}+e_{2}\},\] _where \(e_{i}\) are standard basis vector fields on \(M\). Also, by simple computation it is easy to see that_ \[(range\pi_{*})=span\big{\{}\pi_{*}Z=E_{1}=2\frac{\partial}{\partial v}\big{\}}\] \[(range\pi_{*})^{\perp}=span\big{\{}E_{2}=2(\frac{\partial}{\partial u}+v \frac{\partial}{\partial w}),E_{3}=2\frac{\partial}{\partial w}=\xi\big{\}}\] _and \(\psi E_{1}=E_{2}\) with \(g_{1}(Z,Z)=g_{2}(\pi_{*}Z,\pi_{*}Z)\). Thus, \(\pi\) is an anti-invariant Riemannian map. In order to show that the defined map is Clairaut Riemannian map we find a smooth function \(h\) satisfying equation \((\nabla\pi_{*})(Z,Z)=-g(Z,Z)\nabla^{B}h\). Here, \((\nabla\pi_{*})(Z,Z)=0\), \(g_{1}(Z,Z)=1\), thus by taking constant \(h\), we can verify \((\nabla\pi_{*})(Z,Z)=-g(Z,Z)\nabla^{B}h\). Also from (2.4), we have \(\alpha=1,\beta=0\). Hence \(\pi\) is a Clairaut anti-invariant Riemannian map from Riemannian manifold to trans-Sasakian manifold of type \((1,0)\)._ **Example 3.17**.: _Let \(\pi:M\to B\) be a Riemannian map defined as_ \[\pi(x,y,z)=(0,\frac{x-y}{\sqrt{2}},0),\] _where \(M\) is a Riemannian manifold and \(B\) is a Sasakian manifold. 
Since, there is a linear isometry between \((ker\pi_{*})^{\perp}\) and \((range\pi_{*})\), we can define a smooth function \(k\) between these distributions and then pullback \(k^{*}\) of that function on \((ker\pi_{*})^{\perp}\) in terms of \(B^{\prime}s\) co-ordinate system such that \(k^{*}((ker\pi_{*})^{\perp})=e^{-w}\), then extend this function on whole \(TM\). Now, we can define a global frame \(\{e_{1},e_{2},e_{3}\}\) with \(e_{1}=e^{-w}\frac{\partial}{\partial x},e_{2}=e^{-w}\frac{\partial}{\partial y },e_{3}=\frac{\partial}{\partial z}\) and a Riemannian metric \(g_{1}\) on \(M\) such that \(g_{1}(x,y,z)=e^{2w}dx^{2}+e^{2w}dy^{2}+dz^{2}\), whereas \(B=\{(u,v,w)\in\mathbb{R}^{3}|v,w\neq 0\}\) is equipped with a contact metric structure \((g_{2},\psi,\eta,\xi)\), given by_ \[g_{2}=(e^{2w}+v^{2})du^{2}+e^{2w}dv^{2}+(-2v)dvdw+dw^{2},\ \ \psi=\begin{pmatrix}0&1&0\\ -1&0&0\\ 0&v&0\end{pmatrix},\ \ \eta=(dw-vdu),\ \ \xi=\frac{\partial}{\partial w}\] _and \(\{E_{1},E_{2},E_{3}\}\) is a global frame on \(B\), defined as \(E_{1}=e^{-w}\frac{\partial}{\partial v},E_{2}=\psi E_{1}=e^{-w}(\frac{ \partial}{\partial u}+v\frac{\partial}{\partial w}),E_{3}=\frac{\partial}{ \partial w}=\xi\). Then, by simple calculation, we get_ \[(ker\pi_{*})^{\perp}=span\big{\{}Z=\frac{1}{\sqrt{2}}(e_{1}-e_{2})=\frac{e^{- w}}{\sqrt{2}}\big{(}\frac{\partial}{\partial x}-\frac{\partial}{\partial y }\big{)}\big{\}},\] \[range\pi_{*}=span\big{\{}\pi_{*}Z=e^{-w}\frac{\partial}{\partial v}=E_{1}\big{\}},\] \[(range\pi_{*})^{\perp}=span\big{\{}E_{2}=e^{-w}\big{(}\frac{\partial}{\partial u }+\frac{\partial}{\partial w}\big{)},E_{3}=\frac{\partial}{\partial w}=\xi \big{\}}.\] _Also, \(g_{1}(Z,Z)=g_{2}(\pi_{*}Z,\pi_{*}Z)=1\) and \(\psi(range\pi_{*})\subset((range\pi_{*})^{\perp})\), therefore \(\pi\) is an anti-invariant Riemannian map. In order to prove that \(\pi\) is a Clairaut map, we must have \(\nabla\pi_{*}(Z,Z)=-g_{1}(Z,Z)\nabla^{B}h.\) Here, by some computation, it is easy to see that \(g_{1}(Z,Z)=1\) and \(\nabla\pi_{*}(Z,Z)=-E_{3}-ve^{-w}E_{2}\), therefore, we have \(\nabla^{B}h=E_{3}+ve^{-w}E_{2}\). For a smooth function \(h\), the value of \(\nabla^{B}h\) with respect to \(g_{2}\) is given by \(\nabla^{B}h=\big{(}e^{-w}\frac{\partial h}{\partial u}+ve^{-w}\frac{\partial h }{\partial w}\big{)}E_{2}+v^{2}\frac{\partial h}{\partial w}E_{3}\), which implies that \(h=\frac{1}{ve^{w}}\). Also from (2.4), we have \(\alpha=\frac{1}{2}e^{-2w},\beta=1\). Hence \(\pi\) is a Clairaut anti-invariant Riemannian map to trans-Sasakian manifold of type \((\frac{1}{2}e^{-2w},1)\)._ ## 4 Acknowledgments The first author is thankful to UGC for providing financial assistance in terms of MANF scholarship vide letter with UGC-Ref. No. 1844/(CSIR-UGC NET JUNE 2019). The second author is thankful to DST Gov. of India for providing financial support in terms of DST-FST label-I grant vide sanction number SR/FST/MS-I/2021/104(C).
2307.10327
Adaptive Trotterization for time-dependent Hamiltonian quantum dynamics using piecewise conservation laws
Digital quantum simulation relies on Trotterization to discretize time evolution into elementary quantum gates. On current quantum processors with notable gate imperfections, there is a critical tradeoff between improved accuracy for finer timesteps, and increased error rate on account of the larger circuit depth. We present an adaptive Trotterization algorithm to cope with time-dependent Hamiltonians, where we propose a concept of piecewise "conserved" quantities to estimate errors in the time evolution between two (nearby) points in time; these allow us to bound the errors accumulated over the full simulation period. They reduce to standard conservation laws in the case of time-independent Hamiltonians, for which we first developed an adaptive Trotterization scheme [PRX Quantum 4, 030319]. We validate the algorithm for a time-dependent quantum spin chain, demonstrating that it can outperform the conventional Trotter algorithm with a fixed step size at a controlled error.
Hongzheng Zhao, Marin Bukov, Markus Heyl, Roderich Moessner
2023-07-19T09:20:02Z
http://arxiv.org/abs/2307.10327v2
# Adaptive Trotterization for time-dependent Hamiltonian quantum dynamics ###### Abstract Digital quantum simulation relies on Trotterization to discretize time evolution into elementary quantum gates. On current quantum processors with notable gate imperfections, there is a critical tradeoff between improved accuracy for finer timesteps, and increased error rate on account of the larger circuit depth. We present an adaptive Trotterization algorithm to cope with time-dependent Hamiltonians, where we propose a concept of _instantaneous "conserved" quantities_ to estimate errors in the time evolution between two (nearby) points in time; these allow us to bound the errors accumulated over the full simulation period. They reduce to standard conservation laws in the case of time-independent Hamiltonians, for which we first developed an adaptive Trotterization scheme. We validate the algorithm for a time-dependent quantum spin chain, demonstrating that it can outperform the conventional Trotter algorithm with a fixed step size at a controlled error. _Introduction.--_ Simulating the time evolution of non-equilibrium quantum many-body systems poses a significant challenge for classical computers due to the exponentially large Hilbert space dimension [1; 2]. The rapid development of quantum processors, e.g., trapped ions [3; 4; 5], superconducting qubits [6; 7; 8], and Rydberg platforms [9; 10; 11], holds the promise of resolving this key problem through digital quantum simulation (DQS). In DQS, the continuous time evolution operator is discretized into a series of elementary few-body quantum gates, a procedure known as Trotterization [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. However, due to the noncommutativity of these gates, Trotterization introduces errors, which can accumulate over longer simulation times. While a finer Trotter time step size \(\delta t\) improves simulation precision, it also leads to increased circuit depth. In the current era of noisy intermediate-scale quantum (NISQ) processors, gate imperfections are inevitable, posing a significant challenge in improving the accuracy of DQS [2], especially in the absence of experimentally efficient error-correction schemes [28; 29; 30]. Therefore, it is crucial to identify strategies for minimizing circuit depth while keeping the simulation error under control. In a previous work [31], we introduced a quantum algorithm, called ADA-Trotter, allowing for adaptive step sizes \(\delta t\) to optimize the usage of quantum gates during the time evolution generated by time-_independent_ Hamiltonians. This is achieved through a feedback process based on the properties of the time-evolved states. By measuring the expectation values of energy and energy variance, \(\delta t\) is maximized as long as errors in these conserved quantities are bounded. According to the central limit theorem, ADA-Trotter ensures a correct energy distribution for generic non-integrable many-body systems. Hence, it provides a reliable DQS of the time evolution with a bounded local error even for asymptotically long times. However, extending this formalism to time-dependent Hamiltonians \(H(t)\) is a demanding challenge, since: (i) Energy conservation is absent and hence it is a priori unclear how to define a criterion to adapt \(\delta t\); (ii) the implication of the central limit-theorem, mentioned above, is now elusive. 
(iii) it is not clear how to control the additional heating generated by the piecewise constant time-dependence of a Trotterized Hamiltonian compared to that of \(H(t)\). In this work, we propose tADA-Trotter - an adaptive algorithm for time-dependent quantum systems [32; 33; 34; 35; 36]. To achieve this, we first discretize the time evolution into small time intervals \([t,t+\delta t]\). In each interval, the time evolution can be generated by an effective Hamiltonian \(H^{t,\delta t}_{[\infty]}\). Note that this Hamiltonian has only a parametric dependence on \(t,\delta t\); it is time-independent once \(t\) and \(\delta t\) are fixed, cf. Fig. 1 (a). Consequently, the expectation values of \(H^{t,\delta t}_{[\infty]}\) and its higher moments coincide at the boundaries of this time window for perfect time evolution, a feature we will refer to as _instantaneous conservation laws_. Trotterization introduces errors in these conservation laws. Our key finding is that, by constraining these errors, the step size can be adapted to reduce circuit depth while maintaining a given simulation accuracy for generic non-integrable many-body systems. We first construct \(H^{t,\delta t}_{[\infty]}\) for generic time-dependent protocols using a perturbative Magnus expansion in the small time step \(\delta t\)[37; 38]. We then discuss how to determine the adaptive step size from measurements using Trotterized time evolution, cf. Fig. 1 (b). Subsequently, we introduce the concept of a _global error_, which represents the accumulation of time-local errors in the instantaneous conservation laws, cf. Fig. 1 (a). To adapt the step size, we propose a feedback procedure that can bound both the local and global errors. Notably, we observe that Trotter-induced heating effects present in a local control scheme, where simulation errors significantly accumulate over time by making suboptimal choices of \(\delta t\), may be efficiently suppressed by imposing constraints on the global errors. For time-independent systems, this global control scheme reduces to the algorithm proposed in Ref. [31], enabling strict error bounds throughout the entire time evolution. To determine the advantages of a globally controlled error, we perform numerical simulations using generic quantum many-body spin chains with a time-dependent driving field. We also demonstrate that tADA-Trotter outperforms the conventional fixed-step Trotter, as depicted in Fig. 2. These findings highlight the superior potential of tADA-Trotter with a global control in minimizing circuit depth for DQS of time-dependent systems. _Instantaneous conservation laws.--_The time evolution operator \(U(t,t^{\prime})\) follows the equation \(\partial_{t}U(t,t^{\prime}){=}-iH(t)U(t,t^{\prime})\), where \(H(t)\) represents the time-dependent Hamiltonian. Its solution is given by the time-ordered exponential \(U(t+\delta t,t){=}\mathcal{T}\exp\left(-i\int_{t}^{t+\delta t}H(s)\mathrm{d}s\right)\), and the exact state evolves as \(|\phi(t{+}\delta t)\rangle{=}U(t{+}\delta t,t)\left|\phi(t)\right\rangle\). The time evolution operator can be formally rewritten as \(U(t{+}\delta t,t){=}\exp\left(-iH_{[\infty]}\delta t\right)\), indicating that the same time evolution can be generated by the static effective Hamiltonian \(H_{[\infty]}\), where we drop its parametric dependence on \(t,\delta t\) for simplicity. 
Hence, when \(t\) and \(\delta t\) are fixed, the expectation value of \(H_{[\infty]}\), and its higher-order moments, coincide for the states \(|\phi(t{+}\delta t)\rangle\) and \(|\phi(t)\rangle\). We use these _instantaneous conservation laws_ to adapt the Trotter step size \(\delta t\). The instantaneous conserved Hamiltonian can be obtained through a Magnus expansion given by \(H_{[\infty]}=i\delta t^{-1}\sum_{n=1}^{\infty}\Omega_{n}\), where the operator \(\Omega_{n}{\propto}\mathcal{O}(\delta t^{n})\). The explicit form of \(H_{[\infty]}\) can be complicated as higher-order contributions typically involve nested commutators. To eliminate the time-ordered integral in the Magnus expansion, we expand the time-dependence in Legendre polynomials, obtaining the concise expression for terms of lowest orders [38]: \(\Omega_{2m}{=}0\) for all even orders \(2m\), and \[\begin{split}\Omega_{1}{=}A_{1},\ \Omega_{3}{=}&-\frac{1}{6}\left[A_{1},A_{2}\right],\Omega_{5}{=}\frac{1}{60}\left[A_{1},[A_{1},A_{3}]\right]-\\ \frac{1}{60}\left[A_{2},[A_{1},A_{2}]\right]{+}\frac{1}{360}\left[A_{1},[A_{1},[A_{1},A_{2}]]\right]{-}\frac{1}{30}\left[A_{2},A_{3}\right],\end{split} \tag{1}\] where each operator \(A_{n}^{t,\delta t}\) is defined as \(A_{n}^{t,\delta t}=-i(2n{-}1)\delta t\int_{0}^{1}H(t{+}x\delta t)P_{n-1}(x)dx\). Here, \(P_{n-1}\) denotes the shifted Legendre polynomials normalized to \((2n+1)\int_{0}^{1}dsP_{m}(s)P_{n}(s){=}\delta_{mn}\), and \(A_{n}^{t,\delta t}{\propto}\mathcal{O}(\delta t^{n})\). For a sufficiently small time interval \(\delta t\), this Magnus expansion can be truncated at a finite order \(k\), resulting in an approximation of \(H_{[\infty]}\) as \(H_{[k]}{=}i\delta t^{-1}\sum_{n=1}^{k}\Omega_{n}\). _Trotterization.--_ On real digital quantum devices, the exact time evolution operator \(U(t{+}\delta t,t)\), corresponding to a smoothly varying Hamiltonian \(H(t)\), is usually inaccessible. Thus, it becomes necessary to decompose the former into elementary quantum gates using, for instance, Trotterization. For simplicity, we focus on the time-dependence \(H(t){=}x(t)X{+}z(t)Z\) with smooth functions \(x(t)\) and \(z(t)\), and two generic non-commuting hermitian operators \(X\) and \(Z\). Let us assume that quantum devices admit the exact implementation of unitaries of the form \(\exp(-iC_{1}X)\) or \(\exp(-iC_{2}Z)\) where \(C_{1,2}\) are arbitrary real numbers; however, the implementation of unitary operators generated from linear combinations of \(X\) and \(Z\), such as \(\exp(-iC_{1}X-iC_{2}Z)\), is not feasible.

Figure 2: Comparison between tADA-Trotter and fixed-step Trotter algorithms. Inset depicts the step size, which varies in time. It takes larger values when external driving fields are weak. We use the following Hamiltonian parameters for numerical simulation, \(J_{z}=1,h_{x}=3,h_{z}=0.5,\tau=30,\omega=0.8,L=18,\theta=2,d_{\mathcal{E}}^{\prime}=0.03,d_{\delta\mathcal{E}^{2}}^{\prime}=0.1\).

Figure 1: Schematics of tADA-Trotter for a time-dependent Hamiltonian. (a) The expectation values of the instantaneous conserved Hamiltonian \(H_{[\infty]}^{t,\delta t}\approx H_{[k]}^{t,\delta t}\) coincide at times \(t\) and \(t+\delta t\). We maximize \(\delta t\) as long as errors in this conservation law are bounded, i.e., deviations in the expectation value of \(H_{[\infty]}^{t,\delta t}\) before (\(\mathcal{E}_{i}\)) and after (\(\mathcal{E}_{f}\)) the trotterized evolution should be small. Global error is defined as the accumulation of local errors from previous steps. (b) We use a Magnus expansion to approximate \(H_{[\infty]}^{t,\delta t}{\approx}H_{[k]}^{t,\delta t}\) and a Trotter decomposition, \(U(t{+}\delta t,t){\approx}U_{[\lambda]}(t{+}\delta t,t)\), for the time evolution.

We aim to approximate the target unitary operator up to a given order \(\lambda\), such that \(U(t{+}\delta t,t)=U_{[\lambda]}(t{+}\delta t,t){+}\mathcal{O}(\delta t^{\lambda})\). A larger \(\lambda\) leads to more accurate time evolution with smaller Trotter errors, but it also increases the circuit depth. In this work we use \(\lambda{=}3\) and the approximation can be obtained by the second-order Trotter formula, also known as the mid-point rule: \[\begin{split}& U_{[3]}(t{+}\delta t,t){=}\exp[-ix(t+\delta t/2)X\delta t/2]\times\\ &\exp[-iz(t+\delta t/2)\delta tZ]\exp[-ix(t+\delta t/2)X\delta t/2].\end{split} \tag{2}\] _Adaptive algorithm.--_ The central concept of the adaptive algorithm is to maximize \(\delta t\) while ensuring that the measurement outcome of the expectation value and variance of \(H_{[\infty]}\) remain within pre-set tolerances. However, \(H_{[\infty]}\) is generically highly non-local when higher order contributions of \(\Omega_{n}\) are involved, introducing a significant measurement overhead. Therefore, depending on the measurement accuracy and efficiency, one may need to truncate \(H_{[\infty]}\) to a finite order \(k\) to make the measurement procedure feasible on quantum computers [39, 40, 41]. Here, we consider a sufficiently large value for \(k\), ensuring that errors in the instantaneous conservation law are subdominant compared to the Trotter error. Below, we first introduce a time-local control scheme to adapt \(\delta t\), which, as we will demonstrate, leads to severe heating effects. Then we propose a global control scheme to bound the accumulated errors and suppress heating. In contrast to time-independent systems where conserved energies solely depend on the initial state \(|\psi(0)\rangle\), the expectation values of \(H_{[\infty]}\) for generic time-dependent systems rely on the time-evolved state \(|\psi(t)\rangle\). As a result, there are no universal (in time) reference expectation values known a priori for the instantaneous conserved Hamiltonians. Nonetheless, we can leverage the capability of quantum processors to measure the expectation values, \(\mathcal{E}_{i}(t,\delta t){=}\langle\psi(t)|H_{[\infty]}|\psi(t)\rangle\), as a reasonable approximation to the conserved quantities for the true quantum state \(|\phi(t)\rangle\). We maximize \(\delta t\) such that the error in the expectation value of \(H_{[\infty]}\) remains below a threshold \(d_{\mathcal{E}}\) (cf. dark blue in Fig. 1(a)), i.e., \[|\mathcal{E}_{f}(t,\delta t)-\mathcal{E}_{i}(t,\delta t)|<d_{\mathcal{E}}, \tag{3}\] where \(\mathcal{E}_{f}(t,\delta t){=}\langle\psi(t{+}\delta t)|H_{[\infty]}|\psi(t{+}\delta t)\rangle\) represents the expectation value of \(H_{[\infty]}\) after the trotterized evolution, given by \(|\psi(t{+}\delta t)\rangle{=}U_{[\lambda]}(t,\delta t)|\psi(t)\rangle\). In the ideal case of \(\lambda{\rightarrow}\infty\), we have \(\mathcal{E}_{f}(t,\delta t){-}\mathcal{E}_{i}(t,\delta t){=}0\) by definition. However, for any finite value of \(\lambda\), this error does not vanish.
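For illustration, a minimal classical-emulation sketch of Eqs. (1)-(3) is given below, assuming small dense matrices and exact state vectors rather than measurements on a quantum processor. It builds the truncated effective Hamiltonian \(H_{[3]}\) by Gauss-Legendre quadrature of the \(A_{n}\) integrals, applies the midpoint-rule step of Eq. (2), and evaluates the local energy error entering Eq. (3). The function names (`effective_hamiltonian_k3`, `midpoint_trotter_step`, `local_energy_error`) and the quadrature order `nquad` are illustrative choices, not part of the original algorithm specification; the variance constraint introduced next would be checked analogously.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import eval_legendre

def effective_hamiltonian_k3(H_of_t, t, dt, nquad=20):
    """Truncated instantaneous conserved Hamiltonian H_[3] = (i/dt)(Omega_1 + Omega_3) of Eq. (1).

    A_n = -i (2n-1) dt * int_0^1 H(t + x dt) P_{n-1}(x) dx, with shifted Legendre
    polynomials P_m(x) = Legendre_m(2x - 1); the integral is evaluated by quadrature.
    """
    nodes, weights = np.polynomial.legendre.leggauss(nquad)
    xs, ws = 0.5 * (nodes + 1.0), 0.5 * weights        # map Gauss-Legendre nodes from [-1, 1] to [0, 1]
    A = []
    for n in (1, 2):
        integral = sum(w * eval_legendre(n - 1, 2 * x - 1) * H_of_t(t + x * dt)
                       for x, w in zip(xs, ws))
        A.append(-1j * (2 * n - 1) * dt * integral)
    A1, A2 = A
    omega1 = A1
    omega3 = -(A1 @ A2 - A2 @ A1) / 6.0                # Omega_2 = 0; Omega_3 = -[A_1, A_2]/6
    return (1j / dt) * (omega1 + omega3)

def midpoint_trotter_step(psi, t, dt, X, Z, x_of_t, z_of_t):
    """Second-order (midpoint-rule) Trotter step U_[3](t + dt, t)|psi> of Eq. (2)."""
    tm = t + 0.5 * dt
    Ux = expm(-1j * x_of_t(tm) * X * dt / 2)
    Uz = expm(-1j * z_of_t(tm) * Z * dt)
    return Ux @ (Uz @ (Ux @ psi))

def local_energy_error(psi, t, dt, X, Z, x_of_t, z_of_t):
    """Error |E_f - E_i| entering the constraint of Eq. (3), with H_[infinity] approximated by H_[3]."""
    H_of_t = lambda s: x_of_t(s) * X + z_of_t(s) * Z
    Hk = effective_hamiltonian_k3(H_of_t, t, dt)
    psi_new = midpoint_trotter_step(psi, t, dt, X, Z, x_of_t, z_of_t)
    e_i = np.vdot(psi, Hk @ psi).real
    e_f = np.vdot(psi_new, Hk @ psi_new).real
    return abs(e_f - e_i), psi_new
```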
In addition, we also require the error in the variance to be bounded by the tolerance \(d_{\delta\mathcal{E}^{2}}\), i.e., \(|\delta\mathcal{E}^{2}_{f}(t,\delta t){-}\delta\mathcal{E}^{2}_{i}(t,\delta t )|{<}d_{\delta\mathcal{E}^{2}}\), with \[\begin{split}&\delta\mathcal{E}^{2}_{i}(t,\delta t){=}L^{-1} \langle\psi(t)|H^{2}_{[\infty]}(t,\delta t)|\psi(t)\rangle{-}L\mathcal{E}^{2} _{i},\\ &\delta\mathcal{E}^{2}_{f}(t,\delta t){=}L^{-1}\langle\psi(t+ \delta t)|H^{2}_{[\infty]}(t,\delta t)|\psi(t+\delta t)\rangle{-}L\mathcal{E}^ {2}_{f}.\end{split} \tag{4}\] According to the central limit theorem, constraining the errors in the lowest two moments of \(H_{[\infty]}\) is sufficient to ensure the approximate conservation of its higher moments [42, 31]. Therefore, the instantaneous conservation of the Hamiltonian \(H_{[\infty]}\) can be satisfied reasonably well, enabling reliable DQS of dynamics from time \(t\) to \(t{+}\delta t\). It is important to note that Trotter errors can accumulate in the time-evolved wavefunction \(|\psi(t)\rangle\), leading to deviations in the predicted expectation values of \(H_{[\infty]}\), namely, \(\mathcal{E}_{i}\) and \(\delta\mathcal{E}^{2}_{i}\), from the exact instantaneous conservation laws. This effect is also present and becomes evident for time-independent systems, where \(H_{[\infty]}\) simplifies to a static Hamiltonian \(H\). The energy constraint reduces to \(|\langle\psi(t)|H|\psi(t)\rangle{-}\langle\psi(t{+}\delta t)|H|\psi(t{+} \delta t)\rangle{|<}d_{\mathcal{E}}\), which accumulates and cannot be bounded for long simulation times. Consequently, many-body systems have a tendency to heat up (in the sense of the Eigenstate Thermalization Hypothesis [43, 44]) and eventually approach infinite temperature states where all correlations become trivial [45, 46, 47]. This Trotter-induced heating can be much more pronounced in time-dependent systems, leading to unstable DQS of time evolution over long periods. To address this challenge, we propose restrictions on the global errors, representing the accumulation of all local errors from previous steps (light blue in Fig. 1(a)): \[\begin{split}&\left|\sum_{n=1}^{m}\mathcal{E}_{f}(t_{n},\delta t_{n}) -\mathcal{E}_{i}(t_{n},\delta t_{n})\right|<d^{\prime}_{\mathcal{E}},\\ &\left|\sum_{n=1}^{m}\delta\mathcal{E}^{2}_{f}(t_{n},\delta t_{n} )-\delta\mathcal{E}^{2}_{i}(t_{n},\delta t_{n})\right|<d^{\prime}_{\delta \mathcal{E}^{2}}.\end{split} \tag{5}\] These conditions also imply the time-local constraints, e.g., \(|\mathcal{E}_{f}(t_{m},\delta t_{m}){-}\mathcal{E}_{i}(t_{m},\delta t_{m})|{<}2d^ {\prime}_{\mathcal{E}}\)[48], but the converse is not true. Therefore, information from the past time steps is used to select the current step size, such that the algorithm is capable of automatically counteracting any accumulating Trotter-induced heating effects. This global control is necessary to handle the lack of energy conservation in time-dependent systems. Due to the absolute values being placed outside the sums, Eq. (5) is also consistent with ADA-Trotter for time-independent systems [49], guaranteeing a bounded error in the conservation law throughout the entire time evolution. We enforce these constraints via a feedback loop that operates as follows: Initially, a large time step \(\delta t_{m}\) is chosen. 
We then measure \(\mathcal{E}_{i}\) and \(\delta\mathcal{E}^{2}_{i}\) for the current quantum state \(|\psi(t_{m})\rangle\) and for the selected \(\delta t_{m}\), as a prediction of the instantaneous conserved quantities. In practice, one needs to truncate \(H_{[\infty]}\) to a finite order \(k\) to reduce the measurement cost on quantum computers. We then implement the time evolution \(U_{[\lambda]}(t_{m},\delta t_{m})\) on the quantum processor, yielding a candidate state \(|\bar{\psi}(t_{m}{+}\delta t_{m})\rangle{=}U_{[\lambda]}(t_{m},\delta t_{m})|\psi(t_{m})\rangle\). For this candidate state, we measure \(\tilde{\mathcal{E}}_{f}\) and \(\delta\mathcal{E}^{2}_{f}\). In case the measurement outcome violates the conditions of Eq. (5), a new smaller step size is proposed and this procedure starts over again. We use the bisection search method to find a new suitable \(\delta t_{m}\). This can be efficiently implemented with a few trials whose number does not scale with system size. Once a suitable \(\delta t_{m}\) has been found, we obtain the state \(|\psi(t_{m}{+}\delta t_{m})\rangle\) at the next time step, and repeat the procedure. _Numerical simulation.--_ We proceed to numerically compare the local and global control schemes and demonstrate the remarkable performance of tADA-Trotter achieved by the global control. Although this algorithm is applicable to various models and initial states, for concreteness, we start from a product state \(|\psi(0)\rangle\!=\!\exp(-i\theta\sum_{j}\sigma_{j}^{x})\,|\!\downarrow\dots\downarrow\rangle\) and consider a non-integrable time-dependent quantum Ising model, with the target Hamiltonian \(H(t)\!\!=\!\!x(t)H_{x}\!+\!\!z(t)H_{z}\) with \(H_{z}\!\!=\!\!J_{z}\sum_{j}\sigma_{j}^{z}\sigma_{j+1}^{z}\!+\!\!h_{z}\sum_{j}\sigma_{j}^{z}\) and \(H_{x}\!\!=\!\!h_{x}\sum_{j}\sigma_{j}^{x}\), where \(\sigma_{j}^{x}\) and \(\sigma_{j}^{z}\) are Pauli matrices acting on site \(j\) of a chain consisting of \(L\) sites. We consider a uniform coupling \(J_{z}\), and transverse and longitudinal fields \(h_{x}\) and \(h_{z}\), respectively. Periodic boundary conditions are employed. For simplicity, we assume a static longitudinal field \(z(t)\!\!=\!\!1\) and a time-dependent transverse field \(x(t)\). Specifically, we choose \(x(t)\!\!=\!\!\cos(\omega t)\exp(-t/\tau)\!+\!1\), where the field oscillates around a constant value of unity at a fixed frequency \(\omega\), while the oscillation amplitude decays exponentially over time with a rate given by \(1/\tau\). For times \(t\!\!\gg\!\tau\), the system becomes effectively time-independent. Hence, this protocol contains a number of different timescales that will be reflected in the dynamics of the system, and provides an ideal testbed for our algorithm. We employ Eq. (2) to implement the Trotterized dynamics and truncate the instantaneous conserved Hamiltonian to \(H_{[k]}\) with \(k=5\). In Fig. 3, we depict the expectation value of \(H_{[k]}\) with the local and global control schemes in panels (a) and (b), respectively. The exact solution is plotted as the black line, which varies at early times and becomes static at later times, as expected. The predicted conserved value \(\mathcal{E}_{i}\) is depicted in blue, which at early times closely follows the exact solution. The red dots denote the expectation value \(\mathcal{E}_{f}\) after implementing the Trotterized dynamics, and its deviation from \(\mathcal{E}_{i}\) is tiny in both panels. A crucial difference occurs at later times. In Fig. 
3 (a), for \(t\!\!>\!\!5\) in units of the Ising coupling \(J_{z}\), the predicted value \(\mathcal{E}_{i}\) exhibits a noticeable drift towards zero, indicating that Trotter errors accumulate over time and the system heats up. Note that this Trotter-induced heating is not the same as the energy non-conservation that goes along with \(H(t)\) [45; 46; 47]. It happens because, statistically, a step size that increases the system's entropy is more likely to be chosen. To emphasize this point, we plot the deviation \(\mathcal{E}_{f}-\mathcal{E}_{i}\) at each time in the inset, and clearly, \(\delta t\) is chosen in a way that negative values appear more frequently. By contrast, when constraining the global errors according to Eq. (5), this deviation approximately centers around zero, indicating that heating is now better controlled. Consequently, the overall drifting behavior in \(\mathcal{E}_{i}\) is notably suppressed, and at later times, it becomes strictly prohibited as shown in Fig. 3 (b). By preserving the conservation of \(H_{[k]}\), we ensure the accurate DQS of the dynamics of local observables. This is demonstrated in Fig. 4, where we present the dynamics of the magnetizations \(M_{\alpha}\!=\!\sum_{j}\sigma_{j}^{\alpha}/L\) for \(\alpha=x,z\), in panels (a) and (b), respectively. It is evident that the global control (red) generally yields more accurate simulation results compared to the local constraints (blue) for both cases. Notably, in Fig. 4 (a), significant deviations occur in the blue data for \(t\!\!>\!\!5\), corresponding to the time when notable errors arise in the instantaneous conserved quantities. In contrast, the globally bounded red data closely follows the exact solution for an extended period. We now demonstrate that by constraining the accumulated errors, tADA-Trotter achieves superior simulation precision compared to the fixed-step Trotter when the same total simulation time is reached. For the field \(x(t)\) we select a driving frequency \(\omega\)=0.8 that is comparable to other local energy scales in the system. The characteristic decay timescale is chosen as \(\tau\)=30. With a total number of Trotter steps \(N\)=100, the achievable simulation time is approximately \(t\)\(\sim\)20, during which the significant time-dependence in the Hamiltonian is still present. In Fig. 2, \(M_{x}\!\!=\!\sum_{j}\sigma_{j}^{x}/L\) is depicted with orange circles, which closely reproduces the exact solution (black) with remarkable accuracy. Simulation errors only become visible at later times, e.g., \(t_{m}\)\(>\)11. In contrast, the fixed-step Trotter (\(\delta t_{m}\)=0.2) already introduces substantial errors in the magnetization within a short time.

Figure 4: Global control leads to more stable simulation results than local control. Magnetization in \(x\) (a) and \(z\) (b) direction. We use the same parameters as in Fig. 3. The inset depicts the same dynamics in a narrower time window with a high resolution.

The inset of Fig. 2 illustrates the Trotter step size, which fluctuates within approximately one order of magnitude, \(\delta t_{m}\)\(\in\)[0.1, 0.7], highlighting the advantage and the flexibility of tADA-Trotter. Particularly, at early times, when the quantum state undergoes rapid changes under a strong driving field, smaller step sizes are employed (\(\delta t_{m}\)\(\approx\)0.1). Conversely, when the driving \(x(t)\) has relatively smaller values around \(t_{m}\)\(\approx\)4 and 12, the step size automatically increases to \(\delta t\)\(\approx\)0.7 and 0.4, respectively.
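For concreteness, a minimal classical-emulation sketch of this setup is given below. It assumes a short chain (\(L=6\) instead of \(L=18\)), exact dense state vectors instead of hardware measurements, the leading-order midpoint approximation \(H_{[1]}\approx H(t+\delta t/2)\) in place of the \(k=5\) truncation, and only the local constraint of Eq. (3); the step-size bounds, tolerance, and trial count are illustrative placeholders rather than the values used in the figures.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and dense operators on a short chain (L = 6 here; the paper uses L = 18)
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Single-site operator `op` acting on site j of an L-site chain."""
    out = np.array([[1.0 + 0.0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else id2)
    return out

L, Jz, hx, hz, omega, tau, theta = 6, 1.0, 3.0, 0.5, 0.8, 30.0, 2.0
Hx = hx * sum(site_op(sx, j, L) for j in range(L))
Hz = (Jz * sum(site_op(sz, j, L) @ site_op(sz, (j + 1) % L, L) for j in range(L))
      + hz * sum(site_op(sz, j, L) for j in range(L)))           # periodic boundary conditions

def x_drive(t):
    return np.cos(omega * t) * np.exp(-t / tau) + 1.0            # x(t); z(t) = 1 throughout

# |psi(0)> = exp(-i theta sum_j sigma^x_j) |down ... down>
down = np.zeros(2 ** L, dtype=complex)
down[-1] = 1.0                                                   # all-down product state in the sigma^z basis
psi = expm(-1j * theta * sum(site_op(sx, j, L) for j in range(L))) @ down

def midpoint_step(state, t, dt):
    """Second-order Trotter step of Eq. (2) with z(t) = 1."""
    Ux = expm(-1j * x_drive(t + dt / 2) * Hx * dt / 2)
    Uz = expm(-1j * Hz * dt)
    return Ux @ (Uz @ (Ux @ state))

def adaptive_step(state, t, dt_max=0.7, dt_min=0.1, tol=0.05, trials=4):
    """Bisection search for the largest dt whose local energy error stays below tol."""
    dt = dt_max
    for _ in range(trials):
        Heff = x_drive(t + dt / 2) * Hx + Hz                     # leading-order effective Hamiltonian
        cand = midpoint_step(state, t, dt)
        err = abs(np.vdot(cand, Heff @ cand).real - np.vdot(state, Heff @ state).real)
        if err < tol or dt <= dt_min:
            return cand, dt
        dt = max(dt / 2.0, dt_min)
    return cand, dt

t, steps = 0.0, []
for _ in range(20):                                              # a few adaptive Trotter steps
    psi, dt = adaptive_step(psi, t)
    steps.append(dt)
    t += dt
```

Accumulating the signed per-step errors and checking them against the global tolerances of Eq. (5), together with the analogous variance check, turns this local sketch into the global control scheme discussed above.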
The accumulated errors are depicted in Fig. 5, demonstrating that they remain bounded below the specified thresholds (grey lines) for the majority of the time evolution. However, it should be noted that due to the tight tolerances at early times, the accumulated errors in the instantaneous conserved quantities may occasionally exceed the bounds. This phenomenon can also cause the dynamics to "freeze", wherein it tends to select the smallest possible step size, which in this case is set as 0.1. _Discussion.--_ We propose tADA-Trotter that enables adaptive Trotter step sizes for generic time-dependent Hamiltonians. The algorithm incorporates a feedback process to constrain errors in the lowest two moments of the instantaneous conserved quantity \(H_{[\infty]}\), allowing us to define a criterion for adapting the step size. We introduced the concept of a time-global (cumulative) error and demonstrated that restricting this error enables tADA-Trotter to counteract the accumulation of Trotter-induced heating effects and enable reliable DQS even at long times. Notably, tADA-Trotter can be applied to more complex time dependence. In the future, it is worth comparing its performance by using higher-order truncation schemes. In a specific example, we demonstrate the superior performance of tADA-Trotter compared to the fixed-step Trotter method. The intricate interplay between external driving and quantum thermalization can result in highly complex many-body dynamics [50, 51, 52]. Therefore, for future investigations, conducting a systematic benchmark of various algorithms across different models, initial states, and time-dependence would be of great value. Furthermore, the extension of instantaneous conservation laws to open quantum systems to enable adaptive Trotter step sizes represents an intriguing open question [53, 54, 55, 56]. Additionally, considering the widespread use of Trotterization in classical numerical algorithms such as the time-evolving block decimation method, the application of ADA-Trotter to enhance the efficiency and accuracy of these methods holds significant potential. _Note added.--_ During the completion of this work, we became aware of a relevant work exploring another adaptive algorithm for DQS of time evolution [36]. _Acknowledgments_ We thank T. Ikeda for enlightening discussions. This work is in part supported by the Deutsche Forschungsgemeinschaft under cluster of excellence ct.qmat (EXC 2147, project-id 390858490). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 853443). MB was supported in part by the International Centre for Theoretical Sciences (ICTS) for participating in the program - Periodically and quasi-periodically driven complex systems (code: ICTS/pdcs2023/6).
2304.06015
An Improved Heart Disease Prediction Using Stacked Ensemble Method
Heart disorder has just overtaken cancer as the world's biggest cause of mortality. Many cardiac failures, heart disease deaths, and diagnostic costs can be reduced with early identification and treatment. Medical data is collected in large quantities by the healthcare industry, but it is not well mined. The discovery of previously unknown patterns and connections in this information can support better decisions when forecasting heart disorder risk. In the proposed study, we constructed an ML-based diagnostic system for heart illness forecasting, using a heart disorder dataset. We used data preprocessing techniques such as outlier detection and removal, checking for and removing missing entries, feature normalization, and cross-validation; nine classification algorithms, namely RF, MLP, KNN, ETC, XGB, SVC, ADB, DT, and GBM; and eight performance metrics, namely classification accuracy, precision, F1 score, specificity, ROC, sensitivity, log-loss, and Matthews correlation coefficient, to evaluate the classifiers. Our method can easily differentiate between people who have cardiac disease and those who do not. Receiver operating characteristic curves and the areas under them were determined for every classifier. The classifiers, preprocessing strategies, validation methods, and performance assessment metrics for classification models are discussed in this study. The performance of the proposed scheme has been confirmed, utilizing all of its capabilities. In this work, the impact of clinical decision support systems was evaluated using a stacked ensemble approach that included these nine algorithms.
Md. Maidul Islam, Tanzina Nasrin Tania, Sharmin Akter, Kazi Hassan Shakib
2023-04-12T17:53:59Z
http://arxiv.org/abs/2304.06015v1
# An Improved Heart Disease Prediction Using Stacked Ensemble Method ###### Abstract Heart disorder has just overtaken cancer as the world's biggest cause of mortality. Several cardiac failures, heart disease mortality, and diagnostic costs can all be reduced with early identification and treatment. Medical data is collected in large quantities by the healthcare industry, but it is not well mined. The discovery of previously unknown patterns and connections in this information can help with an improved decision when it comes to forecasting heart disorder risk. In the proposed study, we constructed an ML-based diagnostic system for heart illness forecasting, using a heart disorder dataset. We used data preprocessing techniques like outlier detection and removal, checking and removing missing entries, feature normalization, cross-validation, nine classification algorithms like RF, MLP, KNN, ETC, XGB, SVC, ADB, DT, and GBM, and eight classifier measuring performance metrics like ramification accuracy, precision, F1 score, specificity, ROC, sensitivity, log-loss, and Matthews' correlation coefficient, as well as eight classification performance evaluations. Our method can easily differentiate between people who have cardiac disease and those are normal. Receiver optimistic curves and also the region under the curves were determined by every classifier. Most of the classifiers, pretreatment strategies, validation methods, and performance assessment metrics for classification models have been discussed in this study. The performance of the proposed scheme has been confirmed, utilizing all of its capabilities. In this work, the impact of clinical decision support systems was evaluated using a stacked ensemble approach that included these nine algorithms. Prediction, Heart Disease, CART, GBM, Multilayer Perception. ## 1 Introduction Heart disorder, which affects the heart and arteries, is one of the most devastating human diseases. The heart is unable to pump the required volume of blood toward other parts of the body when it suffers from cardiac problems. In the case of heart disease, the valves and heart muscles are particularly affected. Cardiac illness is also referred to as cardiovascular disease. The cardiovascular framework comprises all blood vessels, including arteries, veins, and capillaries, that constitute an intricate system of the bloodstream throughout the organ. Cardiovascular infections include cardiac illnesses, cerebrovascular infections, and artery illnesses. Heart disease may be a hazard, usually unavoidable and an imminent reason for casualty. Heart disease is currently a prominent issue with all other well-being ailments since many people are losing their lives due to heart disease. Cardiovascular disease kills 17.7 million people per year, accounting for 31% of all deaths globally, as per the World Health Organization (WHO). Heart attacks and strokes account for 85% of these cases. Heart-related disorders have also become the major cause of death in India [1]. In the United States, one person is killed every 34 seconds. [9]. Heart diseases killed 1.7 million Indians in 2016, concurring to the 2016 Worldwide Burden of Disease Report, released on September 15, 2017 [3]. According to a WHO report published in 2018, nearly 6 million people died globally in 2016 because of heart infections. [4]. Controlling heart disorders costs approximately 3% of total healthcare spending [20]. The World Health Organization's projections provided the impetus for this project. 
The WHO predicts that roughly 23.6 million people will die from heart disease by 2030. The expanding rate of heart infections has raised worldwide concern. Heart failure is tougher to diagnose because of diabetes, hypertension, hyperlipidemia, irregular ventricular rate, and other pertinent diagnosable conditions. As cardiac illness becomes increasingly common, data on the condition is getting more nonlinear, non-normal, association-structured, and complicated. As a result, forecasting heart illness is a major difficulty in medical data exploration, and clinicians find it extremely difficult to properly forecast heart disease diagnosis. Several studies have endeavored to use advanced approaches to analyze heart disease data. If the bagging is not adequately represented in the ensemble approach, it might result in excessive bias and consequently under-fitting. The boosting is also difficult to apply in real time due to the algorithm's increasing complexity. On the other hand, our proposed approach may combine the skills of several high-performing models on a classification or regression job to provide predictions that outperform any single model in the ensemble while also being simpler to build. Our suggested system hasn't received much attention; so we've attempted to build it correctly and come up with a nice outcome, and a superior prediction system. The organization of the paper is explained as follows. In Section II, we have made an effort to state related research contributions, state their major contributions and compare with our work. We also provided a table with the underlying overview of the related works and comparison analytics for readers. With Section III, we have provided an outline of the system methodology and outlined the architecture. In section IV, implementations, and experimental results are described. Section V, we speak on our limitation in section and we conclude the paper ## 2 Literature Review The study aims to look into how data mining techniques may be used to diagnose cardiac problems [15]. Practitioners and academics have previously employed pattern recognition and data mining technologies in the realm of diagnostics and healthcare for prediction purposes [13]. Various contributions have been made in recent times to determine the best preferred approach for predicting heart disorders [8]. So, the above part explores numerous analytical methodologies while providing a quick overview of the existing literature regarding heart disorders. In addition, current techniques have been evaluated in several ways, including a comprehensive comparison after this section. Mohan S. et al. [1] developed a unique approach to determining which ML approaches are being used to increase the accuracy of heart illness forecasting. The forecast model is introduced using a variety of feature combinations and well-known classification methods. They attain an enhanced performance level with an accuracy level of 88.7% using the Hybrid Random Forest with Linear Model prediction model for heart disease (HRFLM). As previously stated, the (ML) techniques used in this study include DT, NB, DL, GLM, RF, LR, GBT, and SVM. All 13 characteristics as well as all ML techniques were used to reproduce the investigation. Palaniappan S. et al. [2] applied a technology demonstrator Intelligent Heart Disease Prediction System (IHDPS), using data mining approaches such as DT, NB, and NN. The results show that each approach seems to have a different advantage in reaching the defined extraction criteria. 
Based on medical factors like sex, age, blood sugar, and blood pressure, this can forecast the probability of individuals developing heart disorders. It enables considerable knowledge to be established, such as patterns and correlations among galenic aspects connected to heart illness. The Microsoft.NET platform underpins IHDPS. The mining models are built by IHDPS using the CRISP-DM approach. Bashir S. et al. [4] in their research study discusses how data science can be used to predict cardiac disease in the medical industry. Despite the fact that several studies have been undertaken on the issue, prediction accuracy still needs to be improved. As a result, the focus of this study is on attribute selection strategies as well as algorithms, with numerous heart disease datasets being utilized for testing and improving accuracy. Attribute selection methodologies such as DT, LR, Logistic Regression SVM, NB, and RF are used with the Rapid miner, and the results indicate an increase in efficiency. Le, H.M. et al. [5] rank and weights of the Infinite Latent Feature Selection (ILFS) approach are used to weight and reorder HD characteristics in our method. A pulpous margin linear SVM is used to classify a subset of supplied qualities into discrete HD classes. The experiment makes use of the UCI Machine Learning Repository for Heart Disorders' universal dataset. Experiments revealed that it suggested a method is useful for making precise HD predictions; our tactic performed the best, with an accuracy of 90.65% as well as an AUC of 0.96 for discriminating 'No existence' HD from 'Existence' HD. Yadav, D.C. and Pal et al. [6] implemented M5P, random Tree, and Reduced Error Pruning using the Random Forest Ensemble Method were presented and investigated as tree-based classification algorithms. All prediction-based methods were used after identifying features for the cardiac patient dataset. Three feature-based techniques were employed in this paper: PC, RFE, and LR. For improved prediction, the set of variables was evaluated using various feature selection approaches. With the findings, they concluded that the attribute selection methods PC and LR, along with the random-forest-ensemble approach, deliver 99% accuracy. Kabir, P.B. and Akter, S. et al. [7] among the most fundamental and widely used ensemble learning algorithms are tree-based techniques. Tree-based models such as Random Forest (RF), and Decision Tree (DT), according to the study, provide valuable intelligence with enhanced efficiency, consistency, as well as application. Using the Feature Selection (FS) method, relevant features are discovered, and classifier output is produced using these features. FS eliminates non-essential characteristics without affecting learning outcomes. Our study aims to boost the performance. The aim of the research is really to apply FS in conjunction with tree-based approaches to increase heart disease prediction accuracy. Islam, M.T. et al, [8] in this work, PCA has been used to decrease characteristics. Aside from the final clustering, a HGA with k-means was applied. For clustering data, the k-means approach is often applied. Because this is a heuristic approach, it is possible for it to become trapped in local optima. To avoid this problem, they used the HGA for data clustering. The suggested methodology has a prediction accuracy of 94.06 percent for early cardiac disease. Rahman, M.J.U. 
et al, [10] the main purpose of this work is just to create a Robust Intelligent Heart Disease Prediction System (RIHDPS) applying several classifiers such as NB, LR, and NN. This content investigated the effectiveness of medical decision assistance systems utilizing ensemble techniques of these three algorithms. The fundamental purpose of this study is to establish a Robust Intelligent Heart Disease Prediction System (RIHDPS) by combining 3 data mining modelling techniques into an ensemble method: NB, LR, and NN. Patel, J. et al, [12] utilizing W-E-K-A, this study evaluates alternative Decision Tree classification algorithms to improve contribution in heart disorder detection. The methods being tested include the J48 approach, the LMT approach, and the RF method. Using existing datasets of heart disease patients as from the UCI repository's Cleveland database, the performance of decision tree algorithms is examined and validated. The aim of the research is to utilize data mining tools that uncover hidden patterns in cases of heart problems as well as to forecast the existence of heart disorders in individuals, ranging from no existence to likely existence. Bhatla, N. et al. [28] research aims to look at different data mining techniques that might be employed in computerized heart disorder forecasting systems. The NN with 15 features has the best accuracy (100%) so far, according to the data. DT, on either hand, looked impressive with 99.62 percent accuracy when using 15 characteristics. Furthermore, the Decision Tree has shown 99.2% efficiency when combined with the Genetic Algorithm and 6 characteristics. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Source & Datasets & FS & Attributes & Classifier \& Validation techniques & Accuracy \\ \hline \end{tabular} \end{table} Table 1: A literature evaluation of cardiac disease predictions included a comparison of several methods. ## 3 Methodology This section mentioned above proposes an advanced and efficient prediction of heart disease based on past historical training data. The ideal strategy is to analyze and test various data-mining algorithms and to implement the algorithm that gives out the highest accuracy. This research also consists of a visualization module in which the heart disease datasets are displayed in a diagrammatic representation using different data visualization techniques for user convenience and better understanding. The subsections that follow go through several materials and methodologies in detail. The research design is shown in Section A, the data collection and preprocessing are summarized in Section B, and the ML classification techniques and stacked ensemble approach are explained in Section C of this study. ### Research Design In this section, gather all of the data into a single dataset. This approach for extracting functions for cardiovascular disease prognostication may also be applied with this aspect analysis procedure. Following the identification of accessible data resources, those are additionally picked, cleansed, and then converted to the required distribution. The atypical identification survey provides valuable characteristics for predicting coronary artery disease prognosis. Cross-validation, several classification approaches, and the stacked ensemble method will be utilized to predict using pre-processed data. After completing all of these steps, the illness will be forecast favorably. Following that, we'll assess the entire performance. 
The outcome will be determined after the performance review.
### Data Collection & Preprocessing
In this study we combined three datasets: Statlog, Cleveland, and Hungary. There are 1190 records in all, with 11 characteristics and one target variable. The characteristics are chest pain, cholesterol, sex, resting blood pressure, age, resting ECG (normal (0), ST-T abnormality (1), LV hypertrophy (2)), fasting blood sugar, maximum heart rate, exercise angina, old-peak, and ST slope (normal (0), upsloping (1), flat (2), downsloping (3)); the target takes the value 0 for no disease and 1 for illness. It should be noted that zero entries may represent null or missing values, so null values must be removed during the data preparation step; in our case, however, there are no null values. After that, we carry out exploratory data analysis.

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Features & Definition & Type \\ \hline Age & Patient’s age in completed years & Numerical \\ \hline \end{tabular} \end{table} Table 2: Descriptive information on the features of the dataset.

Figure 1: Methodological framework of heart disease prediction.
### Models
Machine learning classification methods are used in this phase to distinguish cardiac patients from healthy people. The system employs the RF Classifier, MLP, KNN, ET Classifier, XGBoost, SVC, AdaBoost Classifier, CART, and GBM, among other common classification techniques. For our suggested system, we apply the stacked ensemble approach, which requires constructing a set of base models and a meta-learner algorithm. The most relevant and standard evaluation metrics for this problem area, such as sensitivity, specificity, precision, F1-score, ROC, log loss, and the Matthews correlation coefficient, are used to assess each outcome. 1. RF Classifier: The Random Forest model is a classification technique that uses a random forest as its foundation. In both regression and classification, the algorithm can handle data sets with continuous as well as categorical variables, and it performs particularly well on classification problems. Criterion: this is the function that measures the quality of a split. We used "entropy" for information gain, while "gini" stands for Gini impurity. \[Gini =1-\sum_{i=1}^{G}(p_{i})^{2}\] \[Entropy =\sum_{i=1}^{G}-p_{i}\,\log_{2}\ (p_{i})\] 2. MLP: A multi-layer perceptron (MLP) is a feedforward neural network that produces a set of outputs from a collection of inputs. An MLP consists of several layers of nodes between the input and output layers, connected as a directed graph. 3. KNN: The K-NN method is straightforward to implement and does not require a hypothesis or any other constraints. The algorithm may be used for exploration, validation, and categorization. Although K-NN is the most straightforward approach, it is hampered by duplicated and unnecessary data. 4. Extra Tree Classifier: Extremely Randomized Trees, or Extra Trees, is a machine learning ensemble technique. It is a decision tree ensemble comparable to bootstrap aggregation and random forest, among other decision tree ensemble approaches. The Extra Trees approach uses the training data to construct a large number of extremely randomized decision trees. The average of the decision tree estimates is used in regression, whereas a majority vote is used in classification. 5. 
XGBoost: The XGBoost classifier is a machine learning method for categorizing both structured and tabular data. It is a high-speed, high-performance implementation of gradient-boosted decision trees and, as such, a complex ensemble method with many moving parts that handles large, complicated datasets with ease. 6. SVC: The Support Vector Classifier (SVC) is a common supervised learning technique for both classification and regression problems. Its purpose is to find the optimal decision boundary that partitions the n-dimensional feature space so that new observations can be readily classified; the extreme points (support vectors) are used to construct this hyperplane. The underlying method is the Support Vector Machine, of which support vector classifiers are a prominent example. 7. AdaBoost Classifier: AdaBoost, short for Adaptive Boosting, is a boosting approach used in ensemble learning. Each instance's weight is reassigned at every round, with larger weights applied to instances that were misclassified; this re-weighting is what makes the boosting "adaptive". 8. CART: Decision trees are a kind of supervised machine learning in which the data are split repeatedly according to a parameter; the training data specify the input and the associated output. The tree can be described by two entities: decision nodes and leaves. 9. GBM: Gradient boosting is a family of algorithms that can be applied to a variety of problems, including classification and regression. It assembles a prediction system from a collection of weak learners, usually decision trees. 10. Stacked Ensemble: The term "ensemble" refers to the procedure of combining many models, so that instead of a single model, a group of models is used to make predictions. Two classic ensemble techniques are: * Bagging, which creates distinct training subsets by sampling the training data with replacement and determines the outcome by majority vote; Random Forest is an example. * Boosting, which turns weak learners into strong learners by building models sequentially, with the combined performance forming the final model; AdaBoost and XGBoost are examples. In this work the stacked ensemble approach is used. The stacked ensemble is a supervised ensemble classification strategy that stacks many prediction algorithms to find their optimal combination. Stacking, also called Super Learning or Stacked Regression, trains a second-level "meta-learner" on the outputs of the first-level (base) models to find the best possible combination of the base learners. In contrast to bagging and boosting, stacking aims to bring together strong, diverse learners. Our workflow consists of the following steps (a code sketch of this pipeline is given below): 1. For this system, we import all of the necessary libraries. 2. After loading our dataset, we clean and preprocess it. 3. We use the z-score to identify and eliminate outliers. 4. We divide the data into training and testing sets with an 80/20 split. 5. We develop the models using cross-validation. 6. For the stacked ensemble technique, we stack all of the models: RF, MLP, KNN, ETC, XGB, SVC, ADB, CART, and GBM. 7. We assess and compare our model to the other models. Figure 2 depicts the two levels, LEVEL 0 and LEVEL 1. 
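To make the two-level pipeline above concrete, the sketch below assembles the stacked ensemble with scikit-learn and XGBoost. It is only a minimal illustration: the choice of logistic regression as the level-1 meta-learner, the default hyperparameters, and the 5-fold stacking CV are assumptions not fixed by the text, and `X`, `y` stand for the preprocessed features and target of the merged dataset.

```python
# Minimal sketch of the two-level stacked ensemble described above (Figure 2).
# Hypothetical choices: logistic regression as the level-1 meta-learner,
# default hyperparameters, and 5-fold stacking CV -- none are fixed by the text.
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

def build_stacked_model():
    # Level-0 base learners named in the methodology.
    base_learners = [
        ("rf", RandomForestClassifier()),
        ("mlp", MLPClassifier(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("etc", ExtraTreesClassifier()),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("svc", SVC(probability=True)),
        ("adb", AdaBoostClassifier()),
        ("cart", DecisionTreeClassifier()),
        ("gbm", GradientBoostingClassifier()),
    ]
    # Level-1 meta-learner combines the base predictions (assumed choice).
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

# Usage following the workflow steps: 80/20 split, then cross-validation.
# X, y = ...  # preprocessed features and target of the merged dataset
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
# model = build_stacked_model()
# print("CV accuracy:", cross_val_score(model, X_tr, y_tr, cv=5).mean())
# model.fit(X_tr, y_tr)
# print("Hold-out accuracy:", model.score(X_te, y_te))
```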
First, we use the base learners (level 0) to make forecasts. The ensemble prediction is then generated by feeding those forecasts into the meta-learner (level 1). Figure 2: Stacked Ensemble Method Result Analysis This section presents the outcomes of changing the ten orders indicated above. PRC, Sensitivity, Specificity, F1 Score, ROC, Log Loss, and MCC are the most common evaluation metrics used in this analysis. Complexity refers to a calculation that defines the importance of a segment of the review, whereas recall refers to the number of times genuinely qualified people are recovered. The Stacked Ensemble Classifier, with an accuracy of 0.910, sensitivity of 0.934, specificity of 0.883, best f1-score of 0.916, minimum Log Loss of 3.08, and highest ROC value of 0.909, is the best performer. Of the same evaluation metrics in every region, Random Forest has the highest sensitivity level, while XGboost is second best. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline Model & Accuracy & PRC & Sensitivity & Specificity & F1 Score & ROC & Log\_Loss & MCC \\ \hline Stacked & 0.910638 & 0.898438 & 0.934959 & 0.883929 & 0.916335 & 0.909444 & 3.086488 & 0.821276 \\ Classifier & & & & & & & \\ \hline RF & 0.893617 & 0.865672 & 0.943089 & 0.839286 & 0.902724 & 0.891188 & 3.674399 & 0.789339 \\ \hline MLP & 0.821277 & 0.809160 & 0.861789 & 0.776786 & 0.834646 & 0.819287 & 6.172973 & 0.642127 \\ \hline KNN & 0.800000 & 0.787879 & 0.845528 & 0.750000 & 0.815686 & 0.797764 & 6.907851 & 0.599458 \\ \hline Extra Tree Classifier & 0.885106 & 0.869231 & 0.918699 & 0.848214 & 0.893281 & 0.883457 & 3.968343 & 0.770445 \\ \hline XGB & 0.897872 & 0.896000 & 0.910569 & 0.883929 & 0.903226 & 0.897249 & 3.527409 & 0.795248 \\ \hline SVC & 0.812766 & 0.788321 & 0.878049 & 0.741071 & 0.830769 & 0.809560 & 6.466933 & 0.627138 \\ \hline AdaBoost & 0.817021 & 0.812500 & 0.845528 & 0.785714 & 0.828685 & 0.815621 & 6.319943 & 0.633084 \\ \hline CART & 0.851064 & 0.879310 & 0.829268 & 0.875000 & 0.853556 & 0.852134 & 5.144121 & 0.703554 \\ \hline GBM & 0.829787 & 0.826772 & 0.853659 & 0.803571 & 0.840000 & 0.828615 & 5.879016 & 0.658666 \\ \hline \end{tabular} \end{table} Table 3: Result of various models with proposed model. discussed machine learning techniques. Stacked classifier model's accuracy is 91.06%, however, the F1 score is 0.9163. The accuracy of the XGB and RF algorithms, on the other hand, is 89.78% and 89.36%, respectively, with F1 scores of 0.8972 and 0.8911. The accuracies of Extra Tree Classifiers, CART, GBM, MLP, SVC, and KNN algorithms are 88.51%, 85.10%, 82.97%, 82.12%, 81.27%, and 80.00%. The confusion matrix for the implemented system is generated as shown in the diagram above. In the area of machine learning, extracted features are also referred to as artificial neurons. It is a statistical form that allows the reproduction of the results of an approach. In the case of graph partitioning, an ensemble learning approach is extremely useful. Knowledge is, specifically, the complexity of quantitative categorization. Figure 4: Confusion Matrixes of Stacked Classifier Models and ROC Curve. Figure 3: Accuracy Chart of ML Models Figure-5 depicts a visual representation of all cardiac problems being detected. Crimson indicates a heart attack, whereas verdant indicates no cardiac disease. ## 5 Conclusion & Future Recommendation Among the most significant threats to human survival is heart disease. Predicting cardiac illness has become a major concern and priority in the medical industry. 
Using the Stacked Ensemble Classifier, we have shown an improved heart disease prediction method. It incorporates a number of different prediction techniques. In this work, we examined the significance of prediction performance, precision, ROC sensitivity, Specificity, F1 Score, Log Loss, and MCC. To identify whether or not a person has a heart problem, we applied machine learning techniques. The medical data set was used in a variety of ways. As a consequence of the findings, we discovered that the enhanced stacked ensemble approach provides better accuracy than previous methods. The purpose of this research is to inquire about particular ML techniques on a form, therefore we further wanted to increase the dependability of the system's operations to provide a much adequate assertion as well as encourage certain Approaches for recognizing the appearance of CVD. The above-mentioned structure could be adapted and repurposed for new purposes. The results show that these data mining algorithms may accurately predict cardiac disease with a 91.06 percent accuracy rate. As our study is based on recorded data from the Statlog, Cleveland, and Hungary datasets, for future research possibilities, we will aim to train and test on a large medical data set using many ensemble methods in the future to see if we can enhance their performance. Our ensemble method is superior to traditional methods, as even if it is overfitting at times, it usually reduces variances, as well as minimizes modeling method bias. It also has superior Predictive performance, reduces dispersion and our approach has superior efficiency by choosing the best combination of models. Figure 5: Heart Disease Identification.
2310.04945
Balancing Specialized and General Skills in LLMs: The Impact of Modern Tuning and Data Strategy
This paper introduces a multifaceted methodology for fine-tuning and evaluating large language models (LLMs) for specialized monetization tasks. The goal is to balance general language proficiency with domain-specific skills. The methodology has three main components: 1) Carefully blending in-domain and general-purpose data during fine-tuning to achieve an optimal balance between general and specialized capabilities; 2) Designing a comprehensive evaluation framework with 45 questions tailored to assess performance on functionally relevant dimensions like reliability, consistency, and business impact; 3) Analyzing how model size and continual training influence metrics to guide efficient resource allocation during fine-tuning. The paper details the design, data collection, analytical techniques, and results validating the proposed frameworks. It aims to provide businesses and researchers with actionable insights on effectively adapting LLMs for specialized contexts. We also intend to make public the comprehensive evaluation framework, which includes the 45 tailored questions and their respective scoring guidelines, to foster transparency and collaboration in adapting LLMs for specialized tasks.
Zheng Zhang, Chen Zheng, Da Tang, Ke Sun, Yukun Ma, Yingtong Bu, Xun Zhou, Liang Zhao
2023-10-07T23:29:00Z
http://arxiv.org/abs/2310.04945v1
# Balancing Specialized and General Skills in LLMs: The Impact of Modern Tuning and Data Strategy
###### Abstract
This paper introduces a multifaceted methodology for fine-tuning and evaluating large language models (LLMs) for specialized monetization tasks. The goal is to balance general language proficiency with domain-specific skills. The methodology has three main components: 1) Carefully blending in-domain and general-purpose data during fine-tuning to achieve an optimal balance between general and specialized capabilities; 2) Designing a comprehensive evaluation framework with 45 questions tailored to assess performance on functionally relevant dimensions like reliability, consistency, and business impact; 3) Analyzing how model size and continual training influence metrics to guide efficient resource allocation during fine-tuning. The paper details the design, data collection, analytical techniques, and results validating the proposed frameworks. It aims to provide businesses and researchers with actionable insights on effectively adapting LLMs for specialized contexts. We also intend to make public the comprehensive evaluation framework, which includes the 45 tailored questions and their respective scoring guidelines, to foster transparency and collaboration in adapting LLMs for specialized tasks.
## 1 Introduction
Recent years have witnessed unprecedented advancements in Natural Language Processing (NLP), spearheaded by the evolution of Large Language Models (LLMs) [17] like Transformers [22], BERT [6], GPT [3], and their variants. These models have set new benchmarks in a multitude of tasks, from text classification and machine translation to sentiment analysis and summarization, significantly advancing the state of the art in the NLP domain. Notably, advancements in architectures and training methodologies have given rise to emergent capabilities, setting state-of-the-art models like GPT-3.5 [3], GPT-4 [14], Claude-2 [1], BARD, LlaMA [20], and LlaMA-2 [21] apart from their predecessors. For instance, in-context learning [12] and zero-shot capabilities [9; 28] enable these models to generalize across tasks for which they were not explicitly trained, as confirmed by their strong performance in complex activities such as mathematical reasoning and question answering. These innovations have not only expanded the boundary of traditional NLP applications but have also revolutionized domains like customer service automation and knowledge retrieval. However, despite these general proficiencies, the application of LLMs to specific monetization tasks within specialized domains [15; 11] presents unique challenges. In business scenarios, these models usually struggle with domain-specific queries that require tailored solutions [10]. Even though Supervised Fine-Tuning (SFT) methodologies are prevalent for adapting general-purpose LLMs to specific use cases, the balancing act between maintaining general language capabilities and achieving domain-specific effectiveness remains a complex challenge. For example, the intricacies of the monetization system necessitate robust capabilities to address user feedback (e.g., from sales) and to facilitate problem resolution and advisory consultancy. The business ticketing system serves as a primary tool to assist users with their questions and concerns. However, new employees often face a steep learning curve in understanding the nuances of monetization and of the specific businesses involved. 
The on-call support system, while valuable, becomes increasingly resource-intensive, especially when ticket backlogs accumulate. Furthermore, although LLMs have excelled across diverse benchmarks [24; 23], their evaluation in commercial applications is not straightforward. A uniform method of assessment is noticeably absent, especially given that open-source benchmarks are generally inadequate for gauging performance in specialized industrial contexts. These benchmarks are generally designed to evaluate general language capabilities rather than the requirements of domain-specific applications. As a result, key questions regarding the models' reliability, consistency, and business impact in monetization contexts remain unresolved. Lastly, fine-tuning LLMs invariably involves a careful choice of hyperparameters--a task complicated by the extensive computational resources required for exhaustive testing [29]. Especially for small and mid-sized businesses, which often lack the necessary computational infrastructure Moreover, there is an absence of comprehensive comparative studies that evaluate the performance of various open-sourced LLMs against industrial benchmarks, further complicating their application. This paper delves into the methods of fine-tuning open-source Large Language Models (LLMs) for tasks in specialized monetization domains. Our goal is to find a balance that keeping the models' broad language skills while improving their performance in specific areas. **Firstly, we examine how to balance the model's skills for both general use and specific areas.** To achieve this equilibrium, we employ a methodical blending of in-domain and general-purpose data, thereby fine-tuning the model in a manner that retains its broad linguistic capabilities while enhancing its specialized utility. **Secondly, we present a robust evaluative framework tailored to both industrial applications and general ability.** Included within this framework is a curated set of 45 questions designed to probe the model's performance across an array of functionally relevant dimensions. This serves to furnish a comprehensive, multi-faceted assessment that speaks directly to the model's reliability, consistency, and overall business impact. **Lastly, we explore the influence of key determinants such as model size and continual training on performance metrics.** This not only helps to allocate computational resources wisely but also provides a deeper understanding of how these variables interact to affect the overall efficacy of the model. To better benefit the research community, we aim to furnish both the business and academic communities with actionable insights by open-sourcing a comprehensive repository. This includes details of our data-balancing techniques, the set of 45 crafted evaluative questions, and the metrics comprising our evaluation criteria. The remainder of this paper is organized as follows: We begin by presenting a detailed literature review, tracing the evolution of Large Language Models and their application in specialized domains. Following this, we delve into the elaborating on the data combinations, and evaluation techniques employed. The subsequent section presents our findings, complete with empirical data and interpretive analysis, serving to validate our proposed fine-tuning and evaluation frameworks. We then move on to a discussion section where the implications of our findings are explored in the context of existing research and commercial applications. 
Finally, the paper wraps up with a summary of the main points and ideas for further research. ## 2 Related Works ### Adapt LLM for monetization applications Current research on adapting large language models like GPT-4 for business applications such as chatbots is exploring various techniques [21; 14]. The leading approaches involve fine-tuning the model on domain-specific datasets using transfer learning. This can quickly tailor the model to the target task, but requires scarce in-domain data and risks overfitting [15; 11]. Prompt engineering is also popular, carefully crafting prompts to provide context without fine-tuning. However, finding the right prompts often requires much trial-and-error. Knowledge grounding shows promise by incorporating external knowledge into training [7; 30], improving factual consistency without large datasets. But this requires additional engineering and the knowledge sources may be incomplete. Other emerging techniques are architecture search to automatically find optimal model designs, and multi-task learning to leverage synergies from related tasks during training [26]. Both can enhance generalization but remain less adopted currently. Overall, fine-tuning and prompt engineering are the most common techniques today, with knowledge grounding gaining traction to make models more robust. Architecture search and multi-task learning have strong potential but need more development and use cases to become mainstream adaption approaches. ### LLM evaluation Current research on evaluating large language model generation quality employs a diverse set of techniques, each with distinct tradeoffs [5]. Human evaluation through ratings and reviews provides nuanced assessments accounting for subjective aspects of quality, but is time-consuming, inconsistent, and doesn't scale [21]. Automated metrics like BLEU [16] are fast and consistent, but focus narrowly on n-gram overlap with reference texts. Adversarial evaluation [4] can reveal flaws invisible to standard tests, yet constructing effective adversarial examples remains challenging. Designing specific benchmark tasks can test particular skills relevant to generation quality, but requires developing comprehensive suites covering diverse skills [8]. Human-in-the-loop training iteratively improves models using human feedback, but is slow and introduces confounding factors. Overall, human evaluation remains the gold standard despite difficulties with scalability and subjectivity. Automated metrics are the most widely adopted for development due to speed and consistency, complemented by adversarial techniques and specifically designed tests to evaluate particular aspects of generation quality. But effectively incorporating human assessment during training remains an open challenge [14]. ### Instruction tuning Instruction tuning [13; 27] is an active area of research for improving large language models like GPT-3[3]. The goal is to provide the model with instructions that steer its behavior for improved performance on specific tasks [17; 15; 29]. Current methods for instruction tuning fall into two main categories: (1) Prompt-based tuning. This involves providing a prompt that describes the desired model behavior before giving it the actual input [13]. For example, instructing the model to "translate this into French" before inputting an English sentence. The advantage is it's simple and interpretable. But it requires carefully crafting prompts for each task. (2) Example-based tuning. 
Here the model is shown input-output examples that demonstrate the desired behavior [2]. For instance, providing English-French translation pairs. This is easy to scale across tasks but lacks interpretability. The model behavior is opaque compared to prompt tuning. In summary, prompt-based methods allow transparent instruction tuning but don't easily scale across tasks. Example-based tuning is more scalable but makes model behavior harder to understand. Current research is attempting to get the best of both approaches, with scalable yet interpretable instruction tuning. Auto-generated prompts and hybrid prompt-example methods are areas of focus. ## 3 Methodology ### Overview of Framework Our paper offers a thorough, multi-module testing system to evaluate the generation quality of LLMs on both general and in-domain business perspective, which is illustrated in Figure 1. Specifically, it consists of four core modules as described below: * **In-domain and general data combination**. When users engage with the intelligent customer service assistant, they often pose a variety of questions, some of which may be ambiguous or unclear. Relying solely on company-specific, in-domain knowledge may prove insufficient for the model to furnish accurate and satisfactory responses in such cases. The ability to offer prompt and helpful responses that go beyond domain-specific information is thus critical for enhancing user experience. During empirical tests, we observed that Language Learning Models (LLMs) exhibited a notable decline in their general capabilities when fine-tuned exclusively with domain-specific data. To mitigate this performance degradation, we employ a data combination strategy that integrates both in-domain and out-of-domain data across a range of tasks. This approach is designed to maintain the LLMs' proficiency in general interactive capabilities. * **Supervised fine-tune**. Through our investigation, we find fine-tuning is necessary to make the model obtain reasonably strong ability in answering questions that require company in domain knowledge. We employ instruction-based fine-tuning, a technique proven effective in recent LLM developments [19; 25; 28]. * **Test-inference module**. In order to furnish a thorough assessment of the quality of generated responses from both in-domain and out-of-domain perspectives, we employ an extensive evaluation protocol consisting of a carefully curated set of 45 questions. This question set encompasses a broad class of scenarios, from specialized domain-specific inquiries to more generalized queries, aiming to challenge and appraise the model's adaptive capabilities. * **Scoring module**. Evaluating Large Language Models (LLMs) for specialized monetization applications is challenging due to the limitations of current testing methods, which often focus solely on general language skills. Our paper proposes a comprehensive, multi-faceted testing system to assess both general and specialized capabilities of LLMs. The system uses a eight-category scoring framework that adapts to model improvements and training stages. This flexible and evolving approach aims to offer a more accurate and complete assessment of LLMs' capabilities. ### Data Combination Technique To develop an intelligent agent that integrates seamlessly with business applications, we employ supervised fine-tuning methods with data fusion techniques. This approach ensures balanced performance across both in-domain and out-of-domain data. 
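As a rough sketch of this data-fusion step, the snippet below shows one way a blended fine-tuning corpus could be assembled from in-domain and general-purpose pools. The 50/50 ratio, the 100k sample budget, and the record format are illustrative assumptions only; the actual mixture proportions are those reported in Figure 2.

```python
# Illustrative sketch only: the 50/50 ratio, 100k budget, and record format are
# placeholders, not the paper's actual mixture (see Figure 2 for the real ratios).
import random

def blend_corpus(in_domain, general, in_domain_ratio=0.5, total=100_000, seed=0):
    """Assemble a fine-tuning set mixing in-domain and general-purpose records.

    `in_domain` and `general` are lists of {"prompt": ..., "answer": ...} dicts.
    Sampling is done with replacement for simplicity.
    """
    rng = random.Random(seed)
    n_in = int(total * in_domain_ratio)
    mixed = rng.choices(in_domain, k=n_in) + rng.choices(general, k=total - n_in)
    rng.shuffle(mixed)
    return mixed
```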
Specifically, we performed more robust data cleaning and updated our data mixes of in-domain and out-of-domain to improve generation ability for models. Our training corpus includes a new combination of data from publicly available sources, while also incorporating data from various products and services. All sensitive information, such as usernames, email addresses, and financial details, has been meticulously removed to ensure data privacy and security. A detailed distribution of our data sources is illustrated in Figure 2. Generally, our supervised fine-tuning dataset encompasses tasks from various domains, including public question-answering, business-specific question-answering, and multilingual alignment. Figure 1: The overview figure of testing framework. (top) The supervised fine-tune module utilizes data combinations of both in-domain and general data. (middle) A unified test-inference framework to support testing both zero-shot ability of original models and fine-tuned models by the same comprehensive questions set. (bottom) The scoring system consists of multiple criterias, each focusing on different aspects of LLM performance, such as clarity, accuracy, and completeness. ### Supervised Fine-Tuning Configuration In the Supervised Fine-Tuning (SFT) phase, we employ a cosine learning rate schedule with an initial learning rate set at \(3\times 10^{-5}\). We also implement a weight decay of 0.1 and a sequence length capped at 2048 tokens. Batch sizes are tailored to the model sizes: 18 for 7B models, 12 for 13B models, and 6 for 33B models. All experimental evaluations were conducted using a distributed computing environment powered by 32 NVIDIA A100-80GB GPUs, with optimization facilitated by the DeepSpeed library [18]. The typical computational time required for a single epoch of fine-tuning varies depending on the model architecture: approximately 28 hours for 7B models, 60 hours for 13B models, and 127 hours for 33B models. **Sample Structure**: During the fine-tuning process, each training sample comprises a prompt followed by a corresponding answer. To maximize the utilization of the predefined sequence length, prompts and answers from the training set are concatenated. A special token is inserted between the prompt and answer segments to delineate them. **Training Objective**: An autoregressive training objective is utilized. The loss function is zeroed out for tokens appearing in the user prompt, ensuring that backpropagation is carried out solely on the answer tokens. **Continual Training**: We extend the fine-tuning phase to range from 1 to 5 epochs to investigate the effects of continual training on model performance. ### Test-inference module We have compiled a diverse set of 45 questions aimed at rigorously evaluating the model's capabilities. These questions encompass both general and business-specific domains relevant to the business. Here we give the categories of questions and few examples to illustrate, and a full list of questions can be found in Appendix A. 1. General questions: 1. Basic interactive questions, such as "Who are you?" 2. Mathematical and logical queries, e.g. "List all the prime numbers within 100." 3. Creative prompts like, "Compose an article which starts with flowers." 4. Multi-language tasks, for instance, "Describe the panda in Chinese." 2. Business-specific, in-domain questions: 1. Explanations of terms, for example, "What is machine moderation?" 2. Operational guidance like, "Can I add a link to my account in an ad?" 3. 
Classification tasks, for instance, classifying the comment "Didn't get it" into labels [non-delivery, description doesn't match item, data theft, body harm, unreasonable expense]. Figure 2: The data combination ratio. **iv.**: Generative tasks, for instance, rewrite "Number 1 product in the world" to the text that doesn't violate exaggerated description policy. ### Evaluation Criteria Evaluating the quality of text created by LLMs for specialized monetization application remains a huge challenge. Current testing methods fall short because they mainly check general language skills, not the specific abilities needed for special uses. So we need a better way to test both the general language skills and the specific skills needed for special uses. Our paper offers a thorough, multi-part testing system to evaluate what LLMs can do, especially for specialized money-making uses. It scores models in eight categories, with flexible scoring to give more weight to the most important areas. This detailed scoring can cover all of what LLMs can do. It is designed to change along with improvements made to the models during training. The foundation for scoring will be based on the merits of bot responses into \(8\) different categories, label priorities, static scoring and dynamic weights. As we develop, we will continue to adjust the label scoring criteria based on the bot training stages. Clarity:Messages should be easily understandable, utilizing straightforward and concise language. A clear message is precise and uses concrete terms, focusing on a single objective or point to avoid overwhelming the reader. Clarity in communication is achieved through the use of exact, appropriate, and specific words. Accuracy:This criterion pertains to how closely the response aligns with verified or accepted facts. In other words, if the information is verified, the response should match the factual record. Furthermore, following the guidance provided in the response should lead to the intended outcome. Completeness:A complete response addresses all aspects of the question or request. It should provide all necessary details to enable the receiver to make an informed decision. The response should also contain a clear call to action, guiding the receiver on the next steps to take. Conciseness:Responses should use the minimum number of words necessary to convey the intended message, without sacrificing clarity, accuracy, or any other communication requirement. Safety:No personally identifiable information is collected or disseminated. If such information is required, it should trigger an escalation process and be adequately anonymized or redacted. Concreteness:Concrete messages are unambiguous and typically backed by facts rather than generalizations. Such messages maintain a sharp focus and avoid vagueness, enhancing the clarity and reliability of the communication. Consideration:Responses should be crafted with the receiver's perspective, emotional state, background, and knowledge base in mind. Messages should focus on feasible solutions and be tailored to meet the specific needs of the receiver. Courtesy:All communications should be framed in polite language that demonstrates respect towards the receiver. The tone should be positive and constructive, reflecting a respectful and considerate approach. 
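Before moving to the experiments, the sketch below illustrates the training objective described in Section 3.3: prompt and answer are concatenated with a special separator token, and the loss is zeroed on prompt tokens so that backpropagation runs only over answer tokens. The separator id, truncation behavior, and tensor shapes are illustrative assumptions rather than the exact production code.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions with this label are excluded from the loss

def build_sft_example(prompt_ids, answer_ids, sep_id, max_len=2048):
    """Concatenate prompt, separator, and answer; mask the loss on the prompt.

    `sep_id` stands in for the special delimiter token of Section 3.3; its
    actual id depends on the tokenizer and is assumed here.
    """
    input_ids = (prompt_ids + [sep_id] + answer_ids)[:max_len]
    labels = ([IGNORE_INDEX] * (len(prompt_ids) + 1) + answer_ids)[:max_len]
    return torch.tensor(input_ids), torch.tensor(labels)

def autoregressive_loss(logits, labels):
    """Next-token cross-entropy over (batch, seq, vocab) logits.

    Prompt positions carry IGNORE_INDEX labels, so backpropagation only
    uses the answer tokens, as described in the training objective.
    """
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                           shift_labels.view(-1), ignore_index=IGNORE_INDEX)
```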
## 4 Experiments and Results ### Unified Supervised Fine-Tuning (SFT) Framework We have successfully devised a unified Supervised Fine-Tuning (SFT) framework to streamline the training process for various state-of-the-art LLMs, including Llama-1, Vicuna-v1.1, Vicuna-v1.3, and Llama-2 series models. For our empirical analysis, we selected two versions of the Vicuna-13b model (v1.1 and v1.3) along with the Llama-2-13b model. The key difference between Vicuna-v1.1 and Vicuna-v1.3 is the volume of ShareGPT data utilized for fine-tuning, where Vicuna-v1.3 incorporates twice the dataset size compared to Vicuna-v1.1 version. For the purpose of a fair comparison, the Llama-2-13b model was fine-tuned without its chat-enabled version (after employing SFT and RLHF methodologies). We give few examples in Table 1 and summarize our key observations as follow: \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Question** & **Llama-1** & **Llama-2** & **Vicuna-v1.1** & **Vicuna-v1.3** \\ \hline 1 can’t understand & I am not familiar with the terms used in this question. Can you please explain what you mean by the term? Thank you for your help. I will be more informed about this topic. & I apologize if responses aren’t helping you. Is they are anything else I can assist you with? & I apologize if I’ve caused any confusion. Please feel free to rephrase your question or statement? & I’ve caused any confusion. Please feel free to rephrase your question or request, and I will do my best to provide a clear response. \\ \hline Describe the panda in Chinese & The panda is a famous Chinese brand, known for the quality of their products. The panda is a brand that is known for the quality of their products. & & & \\ \hline Classify the following comment & Non delivery & non delivery & classify as ”data theft” & The comment "Didn’t get it!" can be classified as ”Description doesn’t match item.” & The comment can be classified as ”Description doesn’t match item.” \\ \hline What ad does the following text violate? & I am glad to know that Red Boost is a powerful new formula for boosting male sexual health. & The advertisement text may violate legally claims that the product has benefits for sexual health, weight loss, and muscle building, which may not be supported by scientific evidence. & The advertisement text violates the policy against making facials claims about products or services. \\ \hline \end{tabular} \end{table} Table 1: Example responses of four fine-tuned state-of-the-art 13B models. * **Politeness**. Vicuna-v1.3 surpassed other models in terms of generating responses with higher degrees of politeness, followed by Vicuna-v1.1, Llama-2, and Llama-1. Vicuna-v1.3 used polite phrasing such as "I apologize" and "Please feel free to rephrase", while Vicuna-v1.1 is also polite but less than v1.3, using "I apologize" and offering further assistance. This is potentially attributed to the enrichment of the Vicuna training datasets with ShareGPT data, which may contain cues for polite interaction. * **Cognitive Abilities in Mathematics and Logic**. Both Vicuna and Llama-2 models demonstrated a strong ability for mathematics and logical reasoning. * **Multilingual Proficiency**. The Vicuna-v1.3 model exhibited enhanced multilingual capabilities, outperforming other models in terms of accuracy and completeness. * **Response Characteristics in Company Operations**. 
For queries pertaining to operations, Llama-2 provided succinct and accurate responses, particularly evident in the ad policy question. Conversely, Vicuna models presented more elaborate, context-rich answers. The Llama-1 model gives short but accurate answer. * **Classification Abilities**. All models tested exhibited a comparable level of performance in classification tasks. However, an observation suggests that Llama-2 models may possess a marginally superior classification ability that accurately classified the comment as "Non delivery", showing a marginally superior classification ability. Given these findings, Vicuna-v1.3 and Llama-2 generally outperform the others, but each has unique strengths and weaknesses. Analyzing multiple model performance factors provides a comprehensive empirical basis to guide future research and optimization. ### Zero-Shot and Single-Epoch Fine-Tuned Performance and Scaling Ability Evaluation In this subsection, we focus on the zero-shot and one-epoch fine-tuned performance metrics for three Vicuna models: Vicuna-7B, Vicuna-13B, and Vicuna-33B. In this context, 'zero-shot' refers to the utilization of the model without additional Supervised Fine-Tuning (SFT). The example generated answers are shown in Appendix Table 6 due to the limitation of pages. We summarize the key observations as below: * **Limitations of Zero-Shot Models**. Our analysis revealed that zero-shot models face difficulties in adhering to task-specific instructions, resulting in the extraneous generation of content. These models appear to lack the refinement to discern an optimal termination point for their output, leading to irrelevant or excessive information in their responses. Generally, finetuning seems to enhance clarity, accuracy, and courtesy across all sizes. * **Model Size and Performance**. The larger Vicuna models (Vicuna-13B and Vicuna-33B) displayed significantly enhanced capabilities in handling extensive contextual information and logical reasoning, particularly in classification tasks. Larger models gain in completeness and consideration at the cost of conciseness. The observations derived from this performance evaluation extend our understanding of the inherent limitations and strengths of various Vicuna models, thereby serving as a cornerstone for future optimizations and research efforts. ### Impact of Continual Training In this section, we explore the influence of continual training on the Vicuna-13b-v1.3 model's performance characteristics. To achieve this, we extended the Supervised Fine-Tuning (SFT) process for this particular model across additional epochs, ranging from one to five. * **Response Brevity**. Our analysis reveals a direct correlation between the number of SFT epochs and the succinctness of the model's responses. As the epoch count increases, the model progressively produces more concise outputs. * **Enhanced Multilingual Capabilities**. Notably, an expanded multi-language proficiency is observed as we increase the number of SFT epochs. This is likely attributable to the significant presence of Chinese linguistic data in our training dataset. * **Optimal Performance Sweet Spot**. Based on empirical data, the Vicuna-13b-v1.3 model appears to reach an optimal level of overall performance when trained with 2-3 SFT epochs. Additional fine-tuning beyond this range appears to result in diminished performance metrics. Across the epochs, the model seems to experiment with the trade-offs between clarity, conciseness, and completeness. 
Early epochs prioritize clarity and conciseness, while a middle epoch (Epoch 4) leans towards completeness at the cost of becoming somewhat verbose. The latest epoch (Epoch 5) seems to strike a balance among all factors.
### Human Evaluation Benchmark Results
Based on the evaluation criteria detailed in Section 3.5, we have compiled a comprehensive benchmark of evaluation results for state-of-the-art LLMs. The scores are displayed in Table 2. When comparing models of equal size, we find that Llama-2-13b generally outperforms the Vicuna-13b variants. Among the Vicuna-13b models, the v1.3 version slightly outperforms v1.1. As expected, larger models like Vicuna-33b significantly exceed the capabilities of smaller variants such as Vicuna-13b. Specifically, the Vicuna-v1.3 models demonstrate greater strength on the courtesy and safety criteria, whereas the Llama-2 model shows stronger reasoning abilities, reflected in higher accuracy scores. Overall, these comparative findings validate our earlier analyses of the relative strengths and weaknesses across models.
## 5 Conclusion
This paper introduced a comprehensive evaluation framework for fine-tuning and evaluating large language models for specialized monetization tasks. We carefully blended in-domain and general-purpose data during fine-tuning to balance general and specialized capabilities. A robust 45-question evaluation framework was designed to assess performance on functionally relevant dimensions. Key model characteristics such as model size, continual training, and other factors were analyzed to guide efficient resource allocation. Our experiments validated that blending domain data with out-of-domain data helps preserve general proficiency. The curated evaluation framework provided a more accurate assessment of business impact than standard benchmarks. We also found that continual training and model scaling influence metrics in nuanced ways that can inform optimization. The overarching implication is that applying LLMs to commercial applications requires balanced data fine-tuning and multi-faceted evaluation attuned to real-world requirements; neither generic benchmarks nor solely in-domain data suffice. Our methodology and findings aim to provide both researchers and businesses with actionable insights on effectively adapting LLMs for specialized contexts. Future work can build on our approach in multiple directions. Personalized tuning per business vertical and iterative human-in-the-loop training are promising areas. Our data blending techniques could be enhanced by finding optimal blending ratios per model, and the evaluation framework can be expanded to additional specialized tasks. Overall, developing LLMs for monetization requires continued research into balanced tuning strategies and comprehensive performance assessment.

\begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & Vicuna-13b-v1.1 & Vicuna-13b-v1.3 & Llama-2-13b & Vicuna-33b-v1.3 \\ \hline **Clarity** & 0.785 & 0.915 & 0.904 & 0.937 \\ **Accuracy** & 0.707 & 0.644 & 0.763 & 0.718 \\ **Completeness** & 0.781 & 0.633 & 0.663 & 0.730 \\ **Conciseness** & 0.911 & 0.952 & 0.967 & 0.970 \\ **Safety** & 0.820 & 1.00 & 0.967 & 1.00 \\ **Concreteness** & 0.726 & 0.926 & 0.867 & 0.907 \\ **Consideration** & 1.00 & 0.989 & 1.00 & 1.00 \\ **Courtesy** & 0.922 & 1.00 & 1.00 & 1.00 \\ \hline **Overall Score** & 0.820 & 0.828 & 0.856 & 0.869 \\ \hline \hline \end{tabular} \end{table} Table 2: Human-assessed scores for several state-of-the-art large language models.
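As an illustration of how the eight per-criterion scores in Table 2 can be rolled up into a single number, the snippet below computes a weighted average. The uniform default weights are an assumption: the dynamic, stage-dependent weights described in Section 3.5 are not disclosed, and indeed a uniform average does not exactly reproduce the reported overall scores (e.g., 0.856 for Llama-2-13b).

```python
def overall_score(criterion_scores, weights=None):
    """Weighted average of the eight per-criterion scores (each in [0, 1]).

    `weights` defaults to uniform weighting; the dynamic, stage-dependent
    weights of Section 3.5 are not reproduced here.
    """
    if weights is None:
        weights = {name: 1.0 for name in criterion_scores}
    total = sum(weights[name] for name in criterion_scores)
    return sum(s * weights[name] for name, s in criterion_scores.items()) / total

# Example with the Llama-2-13b column of Table 2:
llama2 = {"clarity": 0.904, "accuracy": 0.763, "completeness": 0.663,
          "conciseness": 0.967, "safety": 0.967, "concreteness": 0.867,
          "consideration": 1.00, "courtesy": 1.00}
print(round(overall_score(llama2), 3))  # 0.891 under uniform weights
```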
## References * [1] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_, 2022. * [2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020. * [3] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _ArXiv_, abs/2005.14165, 2020. * [4] Elia Bruni and Raquel Fernandez. Adversarial evaluation for open-domain dialogue generation. In _Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue_, pages 284-288, Saarbrucken, Germany, August 2017. Association for Computational Linguistics. * [5] Yu-Chu Chang, Xu Wang, Jindong Wang, Yuanyi Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Weirong Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qian Yang, and Xingxu Xie. A survey on evaluation of large language models. _ArXiv_, abs/2307.03109, 2023. * [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. * [7] Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, and Xiang Ren. Scalable multi-hop relational reasoning for knowledge-aware question answering. In _EMNLP_, 2020. * [8] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Fanchao Qi, Yao Fu, Maosong Sun, and Junxian He. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. _ArXiv_, abs/2305.08322, 2023. * [9] Takeshi Kojima, Shixiang Shane Gu, Michel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. _Advances in neural information processing systems_, 35:22199-22213, 2022. * [10] Andreas Kopf, Yannic Kilcher, Dimitri von Rutte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richard Nagyfi, et al. Open-assistant conversations-democratizing large language model alignment. _arXiv preprint arXiv:2304.07327_, 2023. * [11] Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, et al. Domain specialization as the key to make large language models disruptive: A comprehensive survey. _arXiv preprint arXiv:2305.18703_, 2023. * [12] Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. Metacl: Learning to learn in context. _arXiv preprint arXiv:2110.15943_, 2021. 
* [13] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In _Annual Meeting of the Association for Computational Linguistics_, 2021. * [14] OpenAI. Gpt-4 technical report. _ArXiv_, abs/2303.08774, 2023. * [15] Wenbo Pan, Qiguang Chen, Xiao Xu, Wanxiang Che, and Libo Qin. A preliminary evaluation of chatgpt for zero-shot dialogue understanding. _arXiv preprint arXiv:2304.04256_, 2023. * [16] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In _Annual Meeting of the Association for Computational Linguistics_, 2002. * [17] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained models for natural language processing: A survey. _Science China Technological Sciences_, 63(10):1872-1897, 2020. * [18] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 3505-3506, 2020. * [19] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model, 2023. * [20] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothe Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. _ArXiv_, abs/2302.13971, 2023. * [21] Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajijwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyu Fu, Brian Fuller, Cynthia Gao, Vedaniy Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Klumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models. _ArXiv_, abs/2307.09288, 2023. * [22] Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NIPS_, 2017. * [23] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. _Advances in neural information processing systems_, 32, 2019. * [24] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. _arXiv preprint arXiv:1804.07461_, 2018. 
* [25] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. _arXiv preprint arXiv:2212.10560_, 2022. * [26] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In _Annual Meeting of the Association for Computational Linguistics_, 2022. * [27] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. 2022. * [28] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. _arXiv preprint arXiv:2109.01652_, 2021. * [29] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Z. Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jianyun Nie, and Ji rong Wen. A survey of large language models. _ArXiv_, abs/2303.18223, 2023. * [30] Chen Zheng and Parisa Kordjamshidi. Dynamic relevance graph network for knowledge-aware question answering. In _Proceedings of the 29th International Conference on Computational Linguistics_, pages 1357-1366, Gyeongju, Republic of Korea, October 2022. International Committee on Computational Linguistics. List of Questions \begin{tabular}{|l|l|} \hline **No.** & **Questions** \\ \hline 1 & Hi \\ 2 & Hello, nice to meet you \\ 3 & Who are you? \\ 4 & I can't understand you \\ 5 & You are wrong \\ 6 & 1,2,5,7,9,11 \\ 7 & List all the prime numbers within 100 \\ 8 & What is the result of 9 divided by 3? \\ 9 & What is an auction ad? \\ 10 & What is pre-audit and smart fix in ads platform? \\ 11 & What is machine moderation \\ 12 & What is human moderation in advertising \\ 13 & Will the company use machine learning models to moderate ad groups? \\ 14 & Introduce BI (Business Integrity) team \\ 15 & Explain Advertiser Experience (AdEx) \\ 16 & Generate a list of advertising metrics on platform \\ 17 & How to advertise on platform \\ 18 & How to create an ad \\ 19 & How can I create an ad group \\ 20 & Can I add a link to my account in an ad? \\ 21 & Can I edit a bid? \\ 22 & Where can I see my account's transaction? \\ 23 & Why were my ad groups rejected? \\ 24 & Why did you reject my ads? \\ 25 & How do I fix an ad that was disapproved in different locations? \\ 26 & What ad policy does the following text violate? Lonely and thirsty for a man \\ 27 & What ad policy does the following text violate? This Streamline Makes Your Cheeky \\ & Hidden Hips Look Like a Juicy Peach \\ 28 & What ad policy does the following text violate? Red Boost is a powerful new formula \\ & for boosting male sexual health, weight loss, muscle building. \\ 29 & Generate 10 suggestions to post non-political content ads on platform \\ 30 & show me the details of prohibited industry policy in the united States \\ 31 & Classify the following comment into labels [non-delivery, description doesn't match \\ & item, data theft, body harm, unreasonable expense]: "Didn't get it!" \\ 32 & Classify the following text into these ad policies [political content policy, sexual hint \\ & policy, no violated policy]. 
"How are you" \\ 33 & List some examples concerning adult products, which is prohibited \\ 34 & List some examples concerning sexual hint, which is restricted \\ 35 & List some reasons why sexual hint is not allowed on platform \\ 36 & Generate 10 reasons why sexual hint is not allowed on platform \\ 37 & "This is a 100% natural product for all females and all ages" what ad policy does the \\ & above text violate? (A) no violations (B) exaggerated (C) political content (D) absolute \\ & terms (E) none of the above \\ 38 & "This is a 100% natural product for all females and all ages" what ad policy does the \\ & above text violate? \\ 39 & Explain the reasons why the following text violates exaggerated description ad policy. \\ & "This is a 100% natural product for all females and all ages" \\ 40 & Rewrite "This is a 100% natural product for all females and all ages" to an ad without any violations \\ 41 & Rewrite "Number 1 product in the world" to the text that doesn't violate exaggerated \\ & description policy \\ 42 & Generate a list of 10 common reasons why ad groups are rejected \\ 43 & How to lose weight without effort, write more than 1000 words \\ 44 & Compose an article which starts with flowers. \\ 45 & Describe the panda in Chinese \\ \hline \end{tabular} ## Appendix B Additional results
2310.15690
Power-Enhanced Residual Network for Function Approximation and Physics-Informed Inverse Problems
In this study, we investigate how the updating of weights during the forward operation and the computation of gradients during backpropagation impact the optimization process, the training procedure, and the overall performance of the neural network, particularly multi-layer perceptrons (MLPs). This paper introduces a novel neural network structure called the Power-Enhancing residual network, inspired by highway networks and residual networks, designed to improve the network's capabilities for approximating both smooth and non-smooth functions in 2D and 3D settings. By incorporating power terms into residual elements, the architecture enhances the stability of weight updating, thereby facilitating better convergence and accuracy. The study explores network depth, width, and optimization methods, showing the architecture's adaptability and performance advantages. Consistently, the results emphasize the exceptional accuracy of the proposed Power-Enhancing residual network, particularly for non-smooth functions. Real-world examples also confirm its superiority over plain neural networks in terms of accuracy, convergence, and efficiency. Moreover, the proposed architecture is also applied to solving the inverse Burgers' equation, demonstrating superior performance. In conclusion, the Power-Enhancing residual network offers a versatile solution that significantly enhances neural network capabilities by emphasizing the importance of stable weight updates for effective training in deep neural networks. The codes implemented are available at: \url{https://github.com/CMMAi/ResNet_for_PINN}.
Amir Noorizadegan, D. L. Young, Y. C. Hon, C. S. Chen
2023-10-24T10:01:15Z
http://arxiv.org/abs/2310.15690v2
# Physics-Informed with Power-Enhanced Residual Network for Interpolation and Inverse Problems ###### Abstract This paper introduces a novel neural network structure called the Power-Enhancing residual network, designed to improve interpolation capabilities for both smooth and non-smooth functions in 2D and 3D settings. By adding power terms to residual elements, the architecture boosts the network's expressive power. The study explores network depth, width, and optimization methods, showing the architecture's adaptability and performance advantages. Consistently, the results emphasize the exceptional accuracy of the proposed Power-Enhancing residual network, particularly for non-smooth functions. Real-world examples also confirm its superiority over plain neural network in terms of accuracy, convergence, and efficiency. The study also looks at the impact of deeper network. Moreover, the proposed architecture is also applied to solving the inverse Burgers' equation, demonstrating superior performance. In conclusion, the Power-Enhancing residual network offers a versatile solution that significantly enhances neural network capabilities. The codes implemented are available at: [https://github.com/CMMAi/ResNet_for_PINN](https://github.com/CMMAi/ResNet_for_PINN). ## 1 Introduction Deep neural networks have revolutionized the field of machine learning and artificial intelligence, achieving remarkable success in various applications, including image recognition, natural language processing, and reinforcement learning. Moreover, their adaptability extends beyond these domains, as evidenced by their effective integration with physics-informed approaches [1]. One significant development in this domain was the introduction of residual networks, commonly known as ResNets [2, 3], which demonstrated unprecedented performance in constructing deep architectures and mitigating the vanishing gradient problem. ResNets leverage skip connections to create shortcut paths between layers, resulting in a smoother loss function. This permits efficient gradient flow, thus enhancing training performance across various sizes of neural networks [4]. Our research aligns closely with theirs, particularly in our exploration of skip connections' effects on loss functions. In 2016, Veit et al. [5] unveiled a new perspective on ResNet, providing a comprehensive insight. Velt's research underscored the idea that residual networks could be envisioned as an assembly of paths with varying lengths. These networks effectively employed shorter paths for training, effectively resolving the vanishing gradient problem and facilitating the training of exceptionally deep models. Jastrzebski et al.'s research [6] highlighted Residual Networks' iterative feature refinement process numerically. Their findings emphasized how residual connections guided features along negative gradients between blocks, and show that effective sharing of residual layers mitigates overfitting. In related engineering work, Lu et al. [7] leveraged recent neural network progress via a multifidelity (MFNN) strategy (MFNN: refers to a neural network architecture that combines outputs from multiple models with varying levels of fidelity or accuracy) for extracting material properties from instrumented indentation (see [7], Fig. 1(D) ). The proposed MFNN in this study incorporates a residual link that connects the low-fidelity output to the high-fidelity output at each iteration, rather than between layers. Wang et al. 
[8] proposed an improved fully-connected neural architecture. The key innovation involves integrating two transformer networks to project input variables into a high-dimensional feature space. This architecture combines multiplicative interactions and residuals, resulting in improved predictive accuracy, but at the cost of CPU time. In this paper, we propose a novel architecture called the Power-Enhancing SkipResNet, aimed at advancing the interpolation capabilities of deep neural networks for smooth and non-smooth functions in 2D and 3D domains. The key objectives of this research are as follows:

* Introduces the "Power-Enhancing SkipResNet" architecture.
* Enhances the network's expressive power for improved accuracy and convergence.
* Outperforms conventional plain neural networks.
* Conducts extensive experiments on diverse interpolation scenarios and the inverse Burgers' equation.
* Demonstrates the benefits of deeper architectures.

Through rigorous analysis and comparisons, we demonstrate the advantages of the proposed architecture in terms of accuracy and convergence speed. The remainder of this paper is organized as follows: Section 2 reviews the neural network and its application to solving interpolation problems. In Section 3, we briefly present the physics-informed neural network for solving the inverse Burgers' equation. Section 4 discusses the residual network and the proposed Power-Enhancing SkipResNet, explaining the incorporation of power terms and its potential benefits. Section 5 presents the experimental setup, evaluates the results, and discusses the findings. Finally, Section 6 concludes the paper with a summary of contributions and potential future research directions.

## 2 Neural Networks

In this section, we will explore the utilization of feedforward neural networks for solving interpolation problems, specifically focusing on constructing accurate approximations of functions based on given data points.

### 2.1 Feedforward Neural Networks

The feedforward neural network, also known as a multilayer perceptron (MLP), serves as a foundational architecture in artificial neural networks. Comprising interconnected layers of neurons, the information flow progresses unidirectionally from the input layer through hidden layers to the output layer. This process, termed "feedforward," entails transforming input data into desired output predictions. The core constituents of a feedforward neural network are its individual neurons. A neuron computes a weighted sum of its inputs, augmented by a bias term, before applying an activation function to the result. For a neuron in layer \(i\) with \(n\) inputs (data points), where the inputs are denoted as \(\mathcal{X}=[\mathrm{x}_{1},\mathrm{x}_{2},\ldots,\mathrm{x}_{n}]\), the weights as \(\mathbf{w}=[w_{1},w_{2},\ldots,w_{n}]\), and the bias as \(b_{i}\), the output \(z_{i}\) is computed as: \[z_{i}=\sum_{j=1}^{n}w_{j}\mathrm{x}_{j}+b_{i} \tag{1}\] where \(\mathrm{x}_{j}\) has a dimension of \(d\) such that \(\mathrm{x}=(x_{1},x_{2},\ldots,x_{d})\). The output is then transformed using an activation function \(h(\cdot)\): \[y_{i}=h(z_{i}) \tag{2}\] It is important to note that while hidden layers utilize activation functions to introduce non-linearity, the last layer (output layer) typically does not apply an activation function to its outputs.

### 2.2 Neural Networks for Interpolation

In the context of interpolation problems, feedforward neural networks can be leveraged to approximate functions based on a given set of data points.
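As a minimal illustration of Eqs. (1)-(2), the sketch below builds such a feedforward interpolation network, assuming a PyTorch implementation; the layer width, depth, tanh activation, and the placeholder target values are illustrative assumptions rather than settings prescribed in the text.

```python
import torch
import torch.nn as nn

class PlainMLP(nn.Module):
    """Plain feedforward network: each hidden neuron computes a weighted sum plus bias
    (Eq. 1) followed by an activation (Eq. 2); the output layer applies no activation."""
    def __init__(self, in_dim=2, width=50, depth=10, out_dim=1):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, out_dim))  # no activation on the output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Interpolation setup: approximate u at scattered 2D points with a mean-squared loss
model = PlainMLP()
x = torch.rand(500, 2)                    # 500 scattered training points in [0, 1]^2
u = torch.sin(3 * x[:, :1]) * x[:, 1:]    # placeholder target values
loss = nn.MSELoss()(model(x), u)
```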
The primary objective is to construct a neural network capable of accurately predicting function values at points not explicitly included in the provided dataset. #### 2.2.1 Training Process The training of the neural network involves adjusting its weights and biases to minimize the disparity between predicted outputs and actual data values. This optimization process is typically driven by algorithms such as gradient descent, which iteratively update network parameters to minimize a chosen loss function. #### 2.2.2 Loss Function for Interpolation In interpolation tasks, the selection of an appropriate loss function is crucial. One common choice is the mean squared error (MSE), which quantifies the discrepancies between predicted values, denoted as \(N\), and the true data values, denoted as \(u\), at each data point, represented as x\({}_{j}\). The MSE is calculated over a total of \(n\) data points using the formula: \[MSE=\frac{1}{n}\sum_{j=1}^{n}(u(\mathrm{x}_{j})-N(\mathrm{x}_{j}))^{2} \tag{3}\] This loss function guides the optimization process, steering the network toward producing accurate predictions. ## 3 PINN for Solving Inverse Burgers' Equation In this section, we explore the application of Physics-Informed Neural Networks (PINN) [1] to solve the inverse Burgers' equation in one dimension. The 1D Burgers' equation is given by: \[\frac{\partial u}{\partial t}+\lambda_{1}u\frac{\partial u}{\partial x}= \lambda_{2}\frac{\partial^{2}u}{\partial x^{2}} \tag{4}\] where \(u(\mathrm{x})=u(x,t)\) is the solution, and \(\lambda_{1}\) and \(\lambda_{2}\) are coefficients to be determined. Here, \(x\in[-1,1]\) and \(t\in[0,1]\) represent two dimensions, space and time respectively. In the context of solving the inverse Burgers' equation, we combine the power of neural networks (Fig. 1(I)) with the physical governing equation (Fig. 1(II)) to form PINN. Utilizing the universal approximation theorem, we approximate the solution \(N(x,t)\approx u(x,t)\). By automatically differentiating the network, we can compute derivatives such as \(N_{t}=\frac{\partial N}{\partial t}\), \(N_{xx}=\frac{\partial^{2}N}{\partial x^{2}}\), etc. We define the function \(g(x,t)\) representing the residual of the Burgers' equation as: \[g(x,t)=N_{t}+\lambda_{1}NN_{x}-\lambda_{2}N_{xx} \tag{5}\] The PINNs loss function is given by (Fig. 1(II)): \[MSE_{g}=\frac{1}{n_{c}}\sum_{i=1}^{n_{c}}(g(x^{i},t^{i}))^{2} \tag{6}\] where \(n_{c}\) is the number of collocation points and pairs of \((x,t)\) specify the space and time values. In this inverse problem, we know the true data, so we compute the loss with respect to the reference solution as well (Fig. 1(I)): \[MSE_{u}=\frac{1}{n}\sum_{j=1}^{n}\left(u(x^{i},t^{i})-N(x^{j},t^{j})\right)^{2} \tag{7}\] where \(n\) represents the number of data locations. The total loss function minimized during training is: \[MSE=MSE_{u}+MSE_{g} \tag{8}\] We aim to minimize \(MSE\) to obtain the neural network parameters \((\mathbf{w},b_{i})\) and the Burgers' equation parameters \(\lambda_{1}\) and \(\lambda_{2}\). ## 4 Residual Network Residual networks, commonly referred to as ResNets [2, 3], have become a prominent architecture in neural networks. They are characterized by their residual modules, denoted as \(f_{i}\), and skip connections that bypass these modules, enabling the construction of deep networks. This allows for the creation of residual blocks, which are sets of layers within the network. In contrast with Fig. 2(a), which illustrates the plain neural network, Fig. 
2(b) showcases the network architecture incorporating ResNet features. To simplify notation, the initial pre-processing and final steps are excluded from our discussion. Therefore, the definition of the output \(y_{i}\) for the \(i\)-th layer is given as follows: \[y_{i}=f_{i}(y_{i-1})+y_{i-1}. \tag{9}\] Here, \(f_{i}(x)\) encompasses a sequence of operations, including _linear transformations_ (1), _element-wise activation functions_ (2), and _normalization techniques_. In this study, we propose a power-enhanced variant of the ResNet that skips every other layer, denoted as the "Power-Enhanced SkipResNet." The modification involves altering the recursive definition in (9) as follows: \[\begin{cases}y_{i}=f_{i}(y_{i-1})+y_{i-1}^{p},&\text{for}\quad i=1,3,5,\dots \\ y_{i}=f_{i}(y_{i-1}),&\text{for}\quad i=2,4,6,\dots\end{cases} \tag{10}\] Figure 1: The neural network (interpolation stage) + physics (inverse Burger’s equation). Here, \(x\) and \(t\) represent two dimensions, each including \(n\) examples. This novel configuration, illustrated in Fig. 2(c), introduces the use of a power term \(y_{i-1}^{p}\) for specific layers, enhancing the expressive power of the network. For the purpose of comparison among Plain NN, ResNet, and SQR-SkipResNet (Figs. 2(a)-(c), respectively), we evaluate the output of the third hidden layer concerning the input \(y_{0}=\mathcal{X}\). The results for the plain neural network are as follows: \[y_{3} =f_{3}(y_{2})\] \[=f_{3}(f_{2}(y_{1}))\] \[=f_{3}(f_{2}(f_{1}(y_{0}))) \tag{11}\] Meanwhile, the corresponding ResNet formulation is as follows [5]: \[y_{3} =f_{3}(y_{2})+y_{2}\] \[=f_{3}(f_{2}(y_{1})+y_{1})+[f_{2}(y_{1})+y_{1}]\] \[=f_{3}(f_{2}(f_{1}(y_{0})+y_{0})+f_{1}(y_{0})+y_{0})+[f_{2}(f_{1}( y_{0})+y_{0})+f_{1}(y_{0})+y_{0}] \tag{12}\] Finally, the formulation of the first three hidden layers for the SQR-SkipResNet is as follows: \[y_{3} =f_{3}(y_{2})+y_{2}^{p}\] \[=f_{3}(f_{2}(y_{1}))+[f_{2}(y_{1})]^{p}\] \[=f_{3}(f_{2}(f_{1}(y_{0})+y_{0}^{p}))+[f_{2}(f_{1}(y_{0})+y_{0}^{ p})]^{p} \tag{13}\] Figure 2(d) visually represents the "expression tree" for the case with \(p=2\), providing an insightful illustration of the data flow from input to output. The graph demonstrates the existence of multiple paths that the data can traverse. Each of these paths represents a distinct configuration, determining which residual modules are entered and which ones are skipped. Figure 2: Three neural network architectures: (a) plain neural network (Plain NN), (b) residual network (ResNet), (c) power-enhanced SkipResNet, and (d) Unraveled SQR-SkipResNet (plot (c) with \(p=2\)) where \(\odot\) denotes element-wise multiplication. Our extensive numerical experiments support our approach, indicating that a power of 2 is effective for networks with fewer than 30 hidden layers. However, for deeper networks, a larger power can contribute to network stability. Nonetheless, deploying such deep networks does not substantially enhance accuracy and notably increases CPU time. In tasks like interpolation and solving PDEs, a power of 2 generally suffices, and going beyond may not justify the added complexity in terms of accuracy and efficiency ## 5 Numerical Results In this study, we employ the notations \(n\), \(n_{l}\), and \(n_{n}\) to represent the number of data points (training), layers, and neurons in each layer, respectively. In all following examples, unless otherwise mentioned, we consider \(100^{2}\) validation data points. 
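To make Eq. (10) concrete before the numerical examples, the following is a minimal sketch of the Power-Enhancing SkipResNet forward pass, assuming a PyTorch implementation with equal hidden widths so that the element-wise power term \(y_{i-1}^{p}\) can be added directly; it is an illustration of the recursion, not the released code.

```python
import torch
import torch.nn as nn

class PowerSkipResNet(nn.Module):
    """Eq. (10): y_i = f_i(y_{i-1}) + y_{i-1}^p on layers i = 1, 3, 5, ...,
    and y_i = f_i(y_{i-1}) on the even layers; p = 2 gives the SQR-SkipResNet."""
    def __init__(self, in_dim=2, width=50, depth=10, out_dim=1, p=2):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)   # pre-processing step into the hidden width
        self.hidden = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])
        self.out = nn.Linear(width, out_dim)  # final step (excluded from the recursion)
        self.act = nn.Tanh()
        self.p = p

    def forward(self, x):
        y = self.act(self.inp(x))
        for i, layer in enumerate(self.hidden, start=1):
            f = self.act(layer(y))                        # f_i(y_{i-1})
            y = f + y ** self.p if i % 2 == 1 else f      # power term every other layer
        return self.out(y)

model = PowerSkipResNet(p=2)
print(model(torch.rand(4, 2)).shape)  # torch.Size([4, 1])
```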
We also introduce three distinct types of error measurements between exact \(u\) and approximated \(N\) solutions: 1. Mean Square Error: The training errors shown in the plotted graphs, relative to the iteration number, are computed using the mean square error criterion. \[\text{Mean Square Error}=\frac{1}{n}\sum_{i=1}^{n}\left(u_{i}-N_{i}\right)^{2}\] 2. Relative L2 Norm Error: The validation errors, calculated over the test data and presented in the plotted graphs concerning the iteration number, are measured using the relative L2 norm error metric. \[\text{Relative L2 Norm Error}=\frac{\|u-N\|_{2}}{\|u\|_{2}}\] 3. Maximum Absolute Error: When visualizing errors across the entire domain, whether in 2D or 3D scenarios, the error represented on the contour error plot is referred to as the maximum absolute error. It is important to note that the contour bars are scaled according to the largest error in the plot. \[\text{Maximum Absolute Error}=\max|u-N|\] These error metrics provide valuable insights into the accuracy and convergence of the methods used in this study. In this section four methods will be investigated. 1. Plain NN: A conventional neural network without any additional modifications or residual connections (see Fig. 2(a)). 2. ResNet: A residual neural network architecture where the output of each layer is obtained by adding the residual to the layer's output (see Fig. 2(b)). 3. SkipResNet: An extension of ResNet, where the residual connection is applied every other layer, alternating between including and excluding the residual connection (see Fig. 2(c) where \(p=1\)). 4. SQR-SkipResNet: An innovative variation of the ResNet architecture, where the squared residual is added every other layer. In this approach, the output of each alternate layer is obtained by squaring the previous layer's output and adding the squared residual to it (see Fig. 2(c)-(d) where \(p=2\)). In all our experiments, we primarily employ L-BFGS-B (Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Box constraints) and occasionally, for comparison, we also use Adam (Adaptive Moment Estimation). Convergence, particularly with L-BFGS-B optimization, is identified by satisfying preset tolerance levels for gradient or function value change, or by reaching the defined maximum number of iterations, with a gradient tolerance of \(1\times 10^{-9}\), and a change in function value tolerance of \(1\times 10^{-9}\). The numerical experiments were executed on a computer equipped with an Intel(R) Core(TM) i9-9900 CPU operating at 3.10GHz with a total of 64.0 GB of RAM. **Example 1**: For the first example, three test functions are investigated and depicted in Fig. 3. The top panel of Fig. 3 displays the 3D surface plot of the test functions, while the bottom panel presents the corresponding contour plots. F1 is a smooth function, originally introduced by Franke [9], which has been extensively used for studying radial basis function (RBF) interpolation. On the other hand, F2 and F3 are non-smooth functions [10]. 
\[\text{F1}(x_{1},x_{2})=\frac{3}{4}\exp\left[\frac{-1}{4}\left((9x_{1}-2)^{2}+(9x_{2}-2)^{2}\right)\right]+\frac{3}{4}\exp\left[\frac{-1}{49}(9x_{1}+1)^{2}-\frac{1}{10}(9x_{2}+1)^{2}\right]\]
\[\qquad+\frac{1}{2}\exp\left[\frac{-1}{4}\left((9x_{1}-7)^{2}+(9x_{2}-3)^{2}\right)\right]-\frac{1}{5}\exp\left[-(9x_{1}-4)^{2}-(9x_{2}-7)^{2}\right],\]
\[\text{F2}(x_{1},x_{2})=\frac{0.0025}{(x_{1}-1.01)^{2}+(x_{2}-1.01)^{2}},\]
\[\text{F3}(x_{1},x_{2})=\frac{1}{9}\left[64-81\left(\left|x_{1}-\frac{1}{2}\right|+\left|x_{2}-\frac{1}{2}\right|\right)\right]-\frac{1}{2}.\]

First, we investigate the performance of the four neural networks: Plain NN, ResNet, SkipResNet, and SQR-SkipResNet. The entire analysis is based on a network with \(n_{l}=10\), where each layer contains 50 neurons (\(n_{n}\)). Figure 4 shows the results of interpolation using 500 training data points and \(100^{2}\) validation data points. Figure 4(a) presents the Mean Squared Error (MSE) over the training data (dashed line) and the Relative L2 Norm over the validation data (solid line). Figures 4(b)-4(c) show the maximum absolute errors for Plain NN and SQR-SkipResNet, respectively.

Figure 3: The profiles of F1, F2, and F3.

Figure 4: The profiles of training on F1 for different numbers of collocation points \(n\). Dotted-line curves denote training error, and solid-line curves denote validation error.

Our observations from these plots are as follows:

1. Plot (a) indicates that the ResNet is not accurate enough compared to the other three networks, both during training and validation. This pattern has been consistently observed in various examples, and we will no longer investigate the ResNet performance.
2. As indicated by the plot, the Plain NN necessitates approximately 2400 iterations for convergence, whereas the proposed SQR-SkipResNet achieves convergence in a significantly reduced 1400 iterations. Additionally, the latter method exhibits higher accuracy compared to the former.
3. Plot (a) also shows that SkipResNet performs somewhere between Plain NN and SQR-SkipResNet. This behavior has been observed in different examples conducted by the authors, but we do not plan to further investigate this method.
4. Contour error plots for both Plain NN and SQR-SkipResNet are presented in plots (b) and (c), respectively. These plots highlight that the maximum absolute error achieved with SQR-SkipResNet exhibits a remarkable improvement of approximately 60% compared to Plain NN.

Therefore, a higher accuracy and better convergence are observed when using SQR-SkipResNet compared to the other algorithms. Fig. 5 illustrates the outcomes obtained using a large number of data points, \(n=5000\), employed for the interpolation of F1. A better convergence can be observed in Fig. 5(a) using the proposed SQR-SkipResNet compared to Plain NN. The Plain NN yields a maximum absolute error of \(9.07\times 10^{-3}\) in 113 seconds, whereas the proposed SQR-SkipResNet approach achieves a significantly reduced error of \(1.56\times 10^{-3}\) in only 55 seconds, as shown in Fig. 5(b)-5(c), respectively. This represents an improvement of approximately 82.8% in terms of error reduction and a substantial 51.3% reduction in CPU processing time. A comparison between Fig. 4(a) and Fig. 5(a) reveals that a greater number of data values results in an improved convergence rate for the proposed SQR-SkipResNet, whereas the Plain NN exhibits a slightly higher iteration number.
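For reference, the three test functions defined at the beginning of this example can be coded directly; the NumPy sketch below mirrors the formulas, and the uniform random sampling of the unit square is an assumed way of drawing the scattered training points.

```python
import numpy as np

def F1(x1, x2):
    """Franke's smooth test function."""
    return (0.75 * np.exp(-((9 * x1 - 2) ** 2 + (9 * x2 - 2) ** 2) / 4)
            + 0.75 * np.exp(-(9 * x1 + 1) ** 2 / 49 - (9 * x2 + 1) ** 2 / 10)
            + 0.5 * np.exp(-((9 * x1 - 7) ** 2 + (9 * x2 - 3) ** 2) / 4)
            - 0.2 * np.exp(-(9 * x1 - 4) ** 2 - (9 * x2 - 7) ** 2))

def F2(x1, x2):
    """Non-smooth: near-singular peak just outside the corner (1, 1)."""
    return 0.0025 / ((x1 - 1.01) ** 2 + (x2 - 1.01) ** 2)

def F3(x1, x2):
    """Non-smooth: piecewise-linear pyramid centred at (0.5, 0.5)."""
    return (64 - 81 * (np.abs(x1 - 0.5) + np.abs(x2 - 0.5))) / 9 - 0.5

rng = np.random.default_rng(0)
X = rng.random((500, 2))          # 500 scattered training points in [0, 1]^2
u_train = F1(X[:, 0], X[:, 1])    # targets for the smooth case; use F2/F3 likewise
```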
Figure 5: Example 1: The profiles of (a) training and validation results on F1 with 5000 data points. Dotted-line curves denote training error, and solid-line curves denote validation error. The corresponding contour error plots for (b) the plain NN and (c) SQR-SkipResNet.

The performance of the SQR-SkipResNet is further investigated by interpolating the non-smooth functions F2 and F3. Figure 6 presents the interpolation results for F2 on the top panel and F3 on the bottom panel with \(n=1000\). The corresponding training and validation errors with respect to the epoch are shown in the first column. The second and third columns show the interpolated surfaces using Plain NN and SQR-SkipResNet, respectively. Clearly, a better surface interpolation is obtained using the proposed method. More details are listed in Table 1. This table shows that the accuracy using the SQR-SkipResNet is slightly better than Plain NN; however, it is worth noting that these functions are non-smooth, and slight changes in error can affect the quality of the interpolation tremendously, as shown in Fig. 6. However, to reach a better accuracy, the SQR-SkipResNet requires a larger number of iterations and consequently a higher CPU time. This can be seen as the trade-off that SQR-SkipResNet makes for interpolating non-smooth functions to obtain better accuracy, in contrast to the smaller CPU time it requires for interpolating smooth functions.

**Example 2**: In this example, we interpolate the surface of the Mt. Eden (Maungawhau) volcano in Auckland, New Zealand, shown in Fig. 7, using a dataset of \(n=5307\) data points.

Figure 7: Example 2: (a) An image showcasing the Mt. Eden or Maungawhau volcano located in Auckland, New Zealand [11]. (b) A 3D surface representation generated from a dataset containing \(n=5307\) data points. (c) A contour plot providing insights into the topography of Mt. Eden.

Figure 8: Example 2: Maximum absolute error for Mt. Eden interpolation using the L-BFGS-B optimizer with \(n=200\), \(n_{n}=50\), and \(n_{l}=5\).

Figure 9: Example 2: Mt. Eden interpolation results using the Adam optimizer with \(n=200\), \(n_{n}=50\), and \(n_{l}=5\).

In the first experiment, we utilize only 200 collocation points, 5 hidden layers with \(n_{n}=100\), and we optimize the training using L-BFGS-B. The remaining data, 5107 data points, are used for validation. The results are shown in Fig. 8(a), which illustrates the relative L2 norm error over the test data. Evidently, SQR-SkipResNet achieves higher accuracy with fewer iterations. The convergence time for SQR-SkipResNet is 68 seconds, and it requires 4600 iterations to converge. On the other hand, Plain NN requires 80 seconds and 5600 iterations to achieve convergence. Additionally, we provide more details on the interpolated surface and accuracy in Fig. 8. The second row shows the interpolated surface using Plain NN, while the third row shows the results obtained with SQR-SkipResNet. Specifically, plots 8(b) and 8(e) depict the interpolated surfaces for Plain NN and SQR-SkipResNet, respectively. Similarly, plots 8(c) and 8(f) display the contour plots for both methods. Finally, plots 8(d) and 8(g) represent the contour error plots, measured by the maximum absolute error, for Plain NN and SQR-SkipResNet, respectively. Clearly, the results using SQR-SkipResNet significantly outperform those from Plain NN. The accuracy of Plain NN, specifically in terms of the maximum absolute error, improves significantly (500%) when using the SQR-SkipResNet architecture. This underscores the superiority of SQR-SkipResNet in achieving more accurate and reliable interpolation results.
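For concreteness, a closure-based quasi-Newton training loop of the kind used in these experiments can be sketched as follows. Note that `torch.optim.LBFGS` is only a stand-in for the L-BFGS-B routine mentioned above (the PyTorch version has no box constraints), and the tolerance and history settings are illustrative.

```python
import torch
import torch.nn as nn

def train_quasi_newton(model, x, u, max_iter=5000, tol=1e-9):
    """Fit model(x) ~= u by minimizing the mean squared error with L-BFGS."""
    opt = torch.optim.LBFGS(model.parameters(), max_iter=max_iter,
                            tolerance_grad=tol, tolerance_change=tol,
                            history_size=50, line_search_fn="strong_wolfe")
    loss_fn = nn.MSELoss()

    def closure():
        opt.zero_grad()
        loss = loss_fn(model(x), u)
        loss.backward()
        return loss

    opt.step(closure)             # L-BFGS re-evaluates the closure internally
    return loss_fn(model(x), u).item()

# e.g. final_mse = train_quasi_newton(PowerSkipResNet(), x_train, u_train)
```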
In our second experiment, we repeat the previous example, but this time we use the Adam optimizer with a learning rate of 1.0E-3 and 10k iterations. The organization of the plots follows the previous example. Plot 9(a) shows that SQR-SkipResNet performs much more accurately from the beginning of the iterations, with much less fluctuation compared with Plain NN (compare plots 9(b) and 9(e), respectively, and the corresponding contour plots in 9(c) and 9(f)). We also see that the surface interpolated using the SQR-SkipResNet (plot 9(g)) is clearly better than that of Plain NN (plot 9(d)). The accuracy with respect to the maximum absolute error is about 462% better for SQR-SkipResNet than for Plain NN. A comparison between these two optimizers, L-BFGS-B (Fig. 8) and Adam (Fig. 9), shows a better performance using Adam for both Plain NN and the proposed SQR-SkipResNet. Therefore, we further investigate the impact of the number of data points \(n\), neurons \(n_{n}\), and layers \(n_{l}\), as listed in Table 2. In this table, ✗ denotes cases where training failed. When training fails, the interpolated surface remains partly flat and partly non-smooth. We have the following observations:

* As \(n\) increases, smaller errors are obtained.
* With a fixed number of neurons \(n_{n}\), the errors are smaller when the number of layers is \(n_{l}=5\) compared to \(n_{l}=10\).
* With a fixed number of layers \(n_{l}\), the errors are smaller when the number of neurons is \(n_{n}=50\) compared to \(n_{n}=100\).
* Plain NN failed to train in 5 cases, while the proposed method exhibited successful performance.

Finally, we see that in all cases, SQR-SkipResNet led to better accuracy compared to Plain NN.

\begin{table} \begin{tabular}{c|c|c|c c} \hline \(n\) & \(n_{n}\) & \(n_{l}\) & Plain NN & SQR-SkipResNet \\ \hline \hline \multirow{4}{*}{200} & 50 & 5 & ✗ & 12.9 \\ & & 10 & ✗ & 32.6 \\ \cline{2-5} & 100 & 5 & 114 & 19.6 \\ & & 10 & ✗ & 25.5 \\ \hline \multirow{4}{*}{1000} & 50 & 5 & 21.6 & 4.77 \\ & & 10 & ✗ & 14.0 \\ \cline{2-5} & 100 & 5 & 7.36 & 6.28 \\ & & 10 & ✗ & 7.78 \\ \hline \end{tabular} \end{table} Table 2: Example 2: Maximum absolute errors (\(m\)) for Mt. Eden interpolation using the Adam optimizer for various numbers of training data points \(n\), neurons \(n_{n}\), and layers \(n_{l}\).

**Example 3**: In the concluding example regarding the interpolation problems, we analyze the effectiveness of the proposed neural network in a 3D example, specifically using the Stanford bunny model, as depicted in Fig. 10(a). The entire bunny model has been scaled by a factor of 10. A distribution of points over the bunny's surface is illustrated in Fig. 10(b), comprising a total of 8171 data points. The validation error is performed using the following test function (refer to [12], F4): \[\mathrm{F4}(x_{1},x_{2},x_{3})=\frac{1}{3}\exp\left[-\frac{81}{16}\left((x_{1}-0.5)^{2}+(x_{2}-0.5)^{2}+(x_{3}-0.5)^{2}\right)\right]\]

Figure 10: Example 3: The Stanford Bunny.
The recorded CPU times amount to 35 seconds for Plain NN and 15 seconds for SQR-SkipResNet. Plots (b) and (c) offer insight into the maximum absolute error, highlighting an accuracy improvement of approximately 70% when implementing the proposed network architecture. Moreover, the lower panel of the figure reveals that the efficacy of the SQR-SkipResNet method persists even when utilizing the Adam optimizer. Plot (a) illustrates a more rapid convergence rate for the proposed method when evaluated against test data. The plots (b) and (c) portraying the maximum absolute error clearly exhibit significantly improved accuracy achieved through the proposed approach. This consistent superiority serves to highlight the distinct advantages of the SQR-SkipResNet approach over its alternatives. In comparing the L-BFGS-B and Adam optimizers, it becomes evident that the former displays enhanced performance in both accuracy and CPU time, accomplishing the desired accuracy level more efficiently.

Figure 11: Example 3: Error profile comparison for the Stanford Bunny model using L-BFGS-B (top panel) and Adam optimizers (bottom panel). Training errors are indicated by the dotted line, and validation errors are represented by the solid line.

_One might wonder about the advantages of employing deep neural networks and their computational implications._ To illustrate this aspect, we emphasize the significance of network depth in neural networks, as shown in Fig. 12, specifically focusing on F4 with \(n=500\) data points and employing the L-BFGS-B optimizer. The results presented here encompass scenarios with 5, 10, and 20 hidden layers, each consisting of 50 neurons. Examining plot (a), which illustrates the validation error using the Plain NN, we note that increasing the number of hidden layers from 5 to 10 results in a decreased convergence rate. Interestingly, increasing the number of layers to 20, denoted by \(n_{l}=20\), leads to the most favorable convergence rate when compared to the cases of \(n_{l}=5\) and \(n_{l}=10\). Regarding accuracy, variations in the number of layers yield only marginal changes in accuracy. However, the network with 20 hidden layers displays the highest error. Conversely, in the case of SQR-SkipResNet, a deeper network correlates with improved convergence rate and enhanced accuracy. This suggests that deeper hidden layers can identify features when embedded within an appropriate neural network architecture for this particular example. This stands in contrast to our findings in the second example (Table 2), which highlighted the problem-dependent nature of selecting an optimal number of layers. In this context, the recorded CPU times for models with 5 and 20 hidden layers amount to 19 and 16 seconds, respectively. This observation suggests that deeper networks may not necessarily result in longer CPU times; rather, they can potentially expedite training due to improved convergence rates, as evident in this case.

**Example 4**: In our final example, we delve into the performance evaluation of the proposed SQR-SkipResNet for solving the inverse problem, specifically focusing on the Burgers' equation. The ground truth coefficients are \(\lambda_{1}=1\) and \(\lambda_{2}=\nu=\frac{1}{100\pi}=0.003183\), while the initial estimates are \(\lambda_{1}=2.0\) and \(\lambda_{2}=0.2\). The outcomes of the investigation are presented in Figure 13, which showcases the results obtained during training and validation for \(n=500\) using the L-BFGS-B optimizer.
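Example 4 minimizes the physics-informed loss of Section 3 with the same two architectures. As a minimal sketch (assuming a PyTorch autograd implementation, with \(\lambda_{1}\) and \(\lambda_{2}\) kept as trainable parameters initialized to the values above), the Burgers' residual of Eq. (5) and the combined loss of Eq. (8) can be assembled as follows; in practice the two parameters are passed to the optimizer together with the network weights.

```python
import torch
import torch.nn as nn

lam1 = nn.Parameter(torch.tensor(2.0))   # initial guess for lambda_1
lam2 = nn.Parameter(torch.tensor(0.2))   # initial guess for lambda_2

def pinn_loss(model, x_u, t_u, u_ref, x_c, t_c):
    """MSE_u + MSE_g for the inverse Burgers' problem (Eqs. 5-8)."""
    # data misfit on points where the reference solution is known
    mse_u = torch.mean((model(torch.stack([x_u, t_u], dim=1)) - u_ref.unsqueeze(1)) ** 2)

    # PDE residual g = N_t + lambda_1 * N * N_x - lambda_2 * N_xx on collocation points
    x = x_c.clone().requires_grad_(True)
    t = t_c.clone().requires_grad_(True)
    N = model(torch.stack([x, t], dim=1))
    ones = torch.ones_like(N)
    N_x = torch.autograd.grad(N, x, grad_outputs=ones, create_graph=True)[0]
    N_t = torch.autograd.grad(N, t, grad_outputs=ones, create_graph=True)[0]
    N_xx = torch.autograd.grad(N_x, x, grad_outputs=torch.ones_like(N_x),
                               create_graph=True)[0]
    g = N_t + lam1 * N.squeeze(1) * N_x - lam2 * N_xx
    return mse_u + torch.mean(g ** 2)

# optimizer = torch.optim.LBFGS(list(model.parameters()) + [lam1, lam2])
```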
Further analysis is conducted for different network architectures. Figure 13 demonstrates the outcomes for the configuration \((n_{l},n_{n})=(10,50)\), revealing improved accuracy for both collocation and validation data when employing the proposed method. The predicted values of \(\lambda_{1}\) using SQR-SkipResNet and Plain NN show errors of \(0.25\%\) and \(0.35\%\), respectively, when compared with the exact results. Additionally, the percentage errors for predicting \(\lambda_{2}\) are \(1.55\%\) and \(3.29\%\) for SQR-SkipResNet and Plain NN, respectively. Extending this analysis, Fig. 13(b) showcases the results for \((n_{l},n_{n})=(20,50)\). It is evident that a deeper network architecture leads to enhanced accuracy when utilizing the proposed method. Notably, as the number of hidden layers increases, Plain NN demonstrates larger errors. This effect is more pronounced in Fig. 13(c), which presents the results for a large number of hidden layers (\(n_{l}=50\)). Consequently, we can conclude that the proposed neural network architecture not only improves accuracy but also exhibits greater stability concerning varying numbers of hidden layers. Comparing the two plots, we observe that the accuracy difference between Plain NN and SQR-SkipResNet becomes more pronounced as the network size increases. This emphasizes the crucial role of architecture selection in achieving stable results.

Figure 12: Example 3: Profiles of the validation errors for interpolating the Stanford bunny for different numbers of layers using (a) Plain NN and (b) SQR-SkipResNet.

Figure 13: Example 4: Profiles of training (dotted line) and validation error (solid line) for different numbers of layers.

## 6 Conclusion

Throughout this study, we conducted a series of experiments to assess how different neural network setups, including Plain NN and SQR-SkipResNet, perform when it comes to interpolating both smooth and complex functions. Our findings consistently showed that SQR-SkipResNet outperforms other architectures in terms of accuracy. This was especially evident when dealing with non-smooth functions, where SQR-SkipResNet displayed improved accuracy, although it might take slightly more time to converge. We also applied our approach to real-world examples, like interpolating the shape of a volcano and the Stanford bunny. In both cases, SQR-SkipResNet exhibited better accuracy, convergence, and computational time compared to Plain NN. Furthermore, while opting for a deeper network might at times lead to reduced accuracy for both Plain NN and SQR-SkipResNet, we observed that this outcome is influenced by the specific problem. For instance, when dealing with the complicated geometry of the Stanford Bunny and its smooth function, we noticed that deeper networks yielded enhanced accuracy, quicker convergence, and improved CPU efficiency. Regardless of whether deeper networks are suitable, the proposed method demonstrated superior performance. As the effectiveness of network depth varies based on the problem, our approach offers a more favorable architecture choice for networks of different depths. Additionally, when applied to solve the inverse Burgers' equation using a physics-informed neural network, our proposed architecture showcased significant accuracy and stability improvements across different numbers of hidden layers, unlike Plain NN.
Prospective studies might delve into further optimizations, extensions, and applications of the SQR-SkipResNet framework across diverse domains, particularly for addressing a broad range of inverse problems coupled with PINN methodologies. ## Acknowledgments The authors gratefully acknowledge the financial support of the Ministry of Science and Technology (MOST) of Taiwan under grant numbers 111-2811-E002-062, 109-2221-E002-006-MY3, and 111-2221-E-002-054-MY3.
2307.11039
Indicatori comuni del PNRR e framework SDGs: una proposta di indicatore composito
The main component of the NextGeneration EU (NGEU) program is the Recovery and Resilience Facility (RRF), spanning an implementation period between 2021 and 2026. The RRF also includes a monitoring system: every six months, each country is required to send an update on the progress of the plan against 14 common indicators, measured on specific quantitative scales. The aim of this paper is to present the first empirical evidence on this system, while, at the same time, emphasizing the potential of its integration with the sustainable development framework (SDGs). We propose to develop a first linkage between the 14 common indicators and the SDGs which allows us to produce a composite index (SDGs-RRF) for France, Germany, Italy, and Spain for the period 2014-2021. Over this time, widespread improvements in the composite index across the four countries led to a partial reduction of the divergence. The proposed approach represents a first step towards a wider use of the SDGs for the assessment of the RRF, in line with their use in the European Semester documents prepared by the European Commission.
Fabio Bacchini, Lorenzo Di Biagio, Giampiero M. Gallo, Vincenzo Spinelli
2023-07-20T17:21:14Z
http://arxiv.org/abs/2307.11039v1
# Indicatori comuni del PNRR e _framework_ SDGs: ###### Abstract The main component of the NextGeneration EU (NGEU) program is the Recovery and Resilience Facility (RRF), spanning an implementation period between 2021 and 2026. The RRF also includes a monitoring system: every six months, each country is required to send an update on the progress of the plan against 14 common indicators, measured on specific quantitative scales. The aim of this paper is to present the first empirical evidence on this system, while, at the same time, emphasizing the potential of its integration with the sustainable development framework (SDGs). Our proposal is to develop a first linkage between the 14 common indicators and the SDGs which allows us to produce a composite index (SDGs-RRF) for France, Germany, Italy and Spain for the period 2014-2021. Over this time span, widespread improvements in the composite index across the four countries led to a partial reduction of the divergence. The proposed approach represents a first step towards a wider use of the SDGs for the assessment of the RRF, in line with their use in the European Semester documents prepared by the European Commission. In this respect, Italy's experience is valuable, given the inclusion of well-being and sustainability indicators in public finance assessments and the availability of the NRRP-SDGs dashboard prepared by Istat and the State General Accounting Department (RGS). _Key words:_ NRRP, policy evaluation, well-being and sustainability, composite indices. _JEL classification codes:_ C43, Q01, I38, C54
2307.09676
Domain Adaptation based Object Detection for Autonomous Driving in Foggy and Rainy Weather
Typically, object detection methods for autonomous driving that rely on supervised learning make the assumption of a consistent feature distribution between the training and testing data; however, this assumption may fail under different weather conditions. Due to the domain gap, a detection model trained under clear weather may not perform well in foggy and rainy conditions. Overcoming detection bottlenecks in foggy and rainy weather is a real challenge for autonomous vehicles deployed in the wild. To bridge the domain gap and improve the performance of object detection in foggy and rainy weather, this paper presents a novel framework for domain-adaptive object detection. The adaptations at both the image level and object level are intended to minimize the differences in image style and object appearance between domains. Furthermore, in order to improve the model's performance on challenging examples, we introduce a novel adversarial gradient reversal layer that conducts adversarial mining on difficult instances in addition to domain adaptation. Additionally, we suggest generating an auxiliary domain through data augmentation to enforce a new domain-level metric regularization. Experimental findings on public benchmarks exhibit a substantial enhancement in object detection specifically for foggy and rainy driving scenarios.
Jinlong Li, Runsheng Xu, Xinyu Liu, Jin Ma, Baolu Li, Qin Zou, Jiaqi Ma, Hongkai Yu
2023-07-18T23:06:47Z
http://arxiv.org/abs/2307.09676v4
# Domain Adaptation based Enhanced Detection for Autonomous Driving in Foggy and Rainy Weather ###### Abstract Typically, object detection methods for autonomous driving that rely on supervised learning make the assumption of a consistent feature distribution between the training and testing data, however such assumption may fail in different weather conditions. Due to the domain gap, a detection model trained under clear weather may not perform well in foggy and rainy conditions. Overcoming detection bottlenecks in foggy and rainy weather is a real challenge for autonomous vehicles deployed in the wild. To bridge the domain gap and improve the performance of object detection foggy and rainy weather, this paper presents a novel framework for domain-adaptive object detection. The adaptations at both the image-level and object-level are intended to minimize the differences in image style and object appearance between domains. Furthermore, in order to improve the model's performance on challenging examples, we introduce a novel adversarial gradient reversal layer that conducts adversarial mining on difficult instances in addition to domain adaptation. Additionally, we suggest generating an auxiliary domain through data augmentation to enforce a new domain-level metric regularization. Experimental findings on public V2V benchmark exhibit a substantial enhancement in object detection specifically for foggy and rainy driving scenarios. intelligent vehicles, deep learning, object detection, domain adaptation ## I Introduction The past decade has witnessed the significant breakthroughs on autonomous driving with artificial intelligence methods [2, 3], leading to numerous applications in transportation, including improving traffic safety [4, 5, 6], reducing traffic congestion [7, 8], minimizing air pollution [9, 10], and enhancing traffic efficiency [11, 12, 13]. Object detection is a critical component of autonomous driving, which relies on computer vision and artificial intelligence techniques to understand driving scenarios [2, 14]. However, the foggy and rainy weather conditions make the understanding of camera images particularly difficult, which poses challenges to the camera based object detection system installed on the intelligent vehicles [15, 16, 17, 18]. Thanks to the rapid advancements in deep learning, numerous object detection deep learning-based methods have achieved remarkable success in intelligent transportation systems. However, the impressive performance of these popular methods heavily relies on large-scale annotated data for supervised learning. Moreover, these methods make the assumption of consistent feature distributions between the training and testing data. In reality, this assumption may not hold true, especially in diverse weather conditions [23]. For example, as depicted in Fig. 1, CNN models such as YOLOv5 [19], Faster R-CNN [20], and CenterNet [21], trained on clear-weather data (source domain), exhibit accurate object detection performance under clear weather conditions (Fig. 1b). However, their performance significantly degrades under foggy weather conditions (Fig. 1c). This degradation can be attributed to the presence of a feature domain gap between different weather conditions, as illustrated in Fig. 1a. The model trained on the source domain is not familiar with the feature distribution in the target domain. 
Fig. 1: Illustration of the weather domain gap (foggy and rainy) for autonomous driving and the detection performance drop because of the domain gap. Three deep learning models (YOLOv5 [19], Faster R-CNN [20], and CenterNet [21]) are all trained with the clear weather data of Cityscapes [22].

Consequently, this paper aims to enhance object detection specifically in foggy and rainy weather conditions through domain adaptation-based transfer learning. The objective of this paper is to reduce the domain gap between various weather conditions for enhanced object detection. To handle the domain shift problem (_e.g._ Clear\(\rightarrow\)Foggy and Clear\(\rightarrow\)Rainy), in this paper, we present a new domain adaptation framework that aims to enhance the robustness of object detection in foggy and rainy weather conditions. Our proposed framework follows an unsupervised setting, similar to previous works [24, 25, 26]. In this setting, we have well-labeled clear-weather images as the source domain, while the foggy and rainy weather images, which serve as the target domains, lack any annotations. This unsupervised setting is adopted because labeling (manually annotating) adverse-weather images is time-consuming and costly. Inspired by [24, 27], the proposed method aims to reduce the domain feature discrepancies in both image style and object appearance. To achieve domain-invariant features, we incorporate both image-level and object-level domain classifiers as components to facilitate domain adaptation in our CNN architecture. These classifiers are responsible for distinguishing between different domains. By employing an adversarial approach, our detection model learns to generate features that are invariant to domain variations, thereby confusing the domain classifiers. This adversarial design encourages the network to produce features that are agnostic to the specific weather conditions, leading to improved object detection performance in foggy and rainy weather scenarios. Furthermore, we propose a novel methodology for domain adaptation (DA). Existing domain adaptation methods [24, 26, 27, 28, 29] might ignore: 1) the different difficulty levels of various training samples, and 2) the domain-level feature metric distance to a third related domain, by only involving the source domain and target domain. This paper investigates the incorporation of hard example mining and an additional related domain to further strengthen the model's ability to learn robust and transferable representations. We propose a novel Adversarial Gradient Reversal Layer (AdvGRL) and introduce an auxiliary domain through data augmentation. The AdvGRL is designed to perform adversarial mining on challenging examples, thereby improving the model's ability to learn in challenging scenarios. Additionally, the auxiliary domain is leveraged to enforce a new domain-level metric regularization during the transfer learning process. In summary, the contributions of this paper can be summarized as follows:

* This paper proposes a novel unsupervised domain adaptation method to enhance object detection for autonomous vehicles under foggy and rainy conditions, including image-level and object-level adaptations.
* This paper proposes to perform adversarial mining for hard examples during domain adaptation to further improve the model's transfer learning capabilities on challenging samples, which is accomplished by our proposed AdvGRL.
* This paper proposes a new domain-level metric regularization to improve transfer learning, _i.e._, the regularization constraint between source domain, added auxiliary domain, and target domain. * This paper explores the intensive transfer learning experiments of clear\(\rightarrow\)foggy, clear\(\rightarrow\)rainy, cross-camera adaptation, and also carefully studies the different-intensity (small, medium, large) fog and rain adaptations. ## II Related Work ### _Detection for intelligent vehicles_ The field of autonomous driving has made remarkable progress thanks to recent deep learning advancements [30, 31, 32]. Object detection, in particular, has emerged as one of the most actively researched areas in this field [33, 34]. In general, current object detection methods can be classified into two main categories: two-stage object detection methods and single-stage object detection methods. Two-stage methods typically consist of two main stages: region proposal generation and object classification/localization. The first process is region proposal, where potential object regions are identified within an image. The second process is object classification and localization refinement. In the beginning, R-CNN [35] introduced the concept of using selective search for generating region proposals and employing separate CNNs for each object prediction. Fast R-CNN [20] enhanced R-CNN by extracting object features directly from a shared CNN feature map. Faster R-CNN [20] enhanced the framework by introducing the Region Proposal Network (RPN) as a replacement for the selective search stage. This enhancement enabled more efficient and accurate region proposal generation. Single-stage detectors typically use a set of predefined anchor boxes or default boxes at different scales and aspect ratios to densely cover the image. Single-stage methods are typically faster in terms of inference speed compared to two-stage algorithms, but they may sacrifice some accuracy. Significant advancements in this category include the SSD series [36], YOLO series [37], and RetinaNet [38]. Although these methods have achieved significant success in visual scenes with clear weather conditions, their direct application in autonomous driving scenarios is often limited due to the challenges posed by challenging real-world weather conditions. ### _Detection for intelligent vehicles under foggy and rainy weather_ In recent years, considerable research has been dedicated to addressing the challenges posed by various weather conditions encountered in autonomous driving scenarios. Researchers have generated various datasets [23, 39] and proposed numerous methods [40, 41, 42, 43, 44, 45] to improve object detection under adverse weather conditions. One notable example is the Foggy Cityscape dataset, which is a synthetic dataset created by applying fog simulation to the Cityscape dataset [23]. In the context of object detection research in rainy weather, several synthesized rainy datasets have been proposed [44, 45, 46]. [41] devised a fog simulation technique to augment existing real lidar datasets, thereby enhancing their quality and realism. The simulated foggy data offers valuable opportunities to enhance object detection methods that are specifically tailored for foggy weather conditions. For leveraging information from multiple sensors, [43] designed a network to integrate data from different sensors _e.g._, LiDAR, camera, and radar. [42] proposed a method that exploits both LiDAR and radar signals to obtain object proposals. 
The features extracted from the regions of interest in both sensors are fused together to improve the performance of object detection. However, these mentioned methods often rely on input data from different types of sensors other than the camera alone, which may not be applicable to all autonomous driving vehicles. Therefore, the objective of this work is to develop a DA network by utilizing only camera-sensor data as input. ### _Object Detection via Domain Adaptation_ Domain adaptation is effective in reducing the distribution discrepancy between different domains, enabling models trained on a labeled source domain to be applicable to an unlabeled target domain. There has been a growing interest in addressing domain adaptation for object detection [24, 27, 47, 48, 49, 50, 51, 52] in recent years. Several studies [24, 27, 48, 52, 53] have explored the alignment of features from different domains to achieve DA object detectors. A DA Faster R-CNN framework [24] was proposed to reduces the domain gap at both the image level and instance level. He et al. [53] proposed a MAF network that aligns domain features and proposal features hierarchically to minimize domain distribution disparity. In addition to feature alignment, image style transfer approaches [54, 55, 33, 47] are utilized to address the challenge of DA. An image translation module [33] was utilized to convert images from the source domain to the target domain. They then trained the object detector using adversarial training on the target domain. [54] adopted a progressive image translation strategy and introduced a weighted task loss during adversarial training to address image quality differences. Several previous methods [56, 57, 58, 59] have also proposed complex architectures for domain adaptation in object detection. Feature Pyramid Networks (FPN) was utilized to incorporate pixel-level and category-level adaptation for object detection [56]. In order to incorporate the uncertainty of unlabeled target data, [58] introduced an uncertainty-guided self-training mechanism, which leverages a Probabilistic Teacher and Focal Loss. Different with these methods, our approach does not introduce additional learnable parameters to the Faster R-CNN. Instead, we utilize an AdvGRL and a Domain-level Metric Regularization based on triplet loss. A key difference between our method and previous domain adaptation approaches lies in the treatment of training samples. While existing methods often assume that training samples are at the same challenging level, our approach introduces the AdvGRL for adversarial hard example mining, specifically targeting the improvement of transfer learning performance. Additionally, to mitigate overfitting and improve domain adaptation, an auxiliary domain is generated and incorporated to domain-level metric regularization. ## III Methodology This section introduces the overall network architecture, each detailed component, loss functions of our proposed method. ### _Network Architecture_ As shown in Fig. 2, our proposed network follows the pipeline of Faster R-CNN. In the first step, we involve a CNN backbone to extract the image-level features from input images. These features are then fed into the Region Proposal Network to produce region proposals. The next stage involves the ROI pooling, both the image-level features and the object proposals are as input to obtain object-level features. Finally, we apply a detection head for the object-level features to make the final outputs. 
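A structural sketch of this pipeline, with the two feature taps that the adaptation modules described next will consume, might look as follows; every sub-module here is a hypothetical placeholder rather than a specific torchvision or paper implementation.

```python
import torch.nn as nn

class DAFasterRCNNSkeleton(nn.Module):
    """Backbone -> RPN -> ROI pooling -> detection head, exposing the image-level
    feature map and the object-level ROI features for the domain adaptation losses."""
    def __init__(self, backbone, rpn, roi_pool, det_head, img_da=None, obj_da=None):
        super().__init__()
        self.backbone, self.rpn, self.roi_pool = backbone, rpn, roi_pool
        self.det_head, self.img_da, self.obj_da = det_head, img_da, obj_da

    def forward(self, images):
        feat = self.backbone(images)               # image-level features
        proposals = self.rpn(feat)                 # region proposals
        obj_feat = self.roi_pool(feat, proposals)  # object-level features
        detections = self.det_head(obj_feat)       # classification + box regression
        img_dom = self.img_da(feat) if self.img_da is not None else None
        obj_dom = self.obj_da(obj_feat) if self.obj_da is not None else None
        return detections, img_dom, obj_dom        # domain logits used only in training
```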
To enhance the framework of Faster R-CNN for domain adaptation, we incorporate two additional domain adaptation modules: image-level and object-level modules. Both of them utilize a novel AdvGRL in conjunction with the domain classifier. By combining these modules, we are able to extract domain-invariant features and effectively perform adversarial hard example mining. Additionally, an auxiliary domain is introduced to enforce a new domain-level metric regularization. During training, source, target, and auxiliary domains, are simultaneously utilized. ### _Image-level based Adaptation_ The image-level domain representation is derived from the feature extraction process of the backbone network, encompassing valuable global information such as style, scale, and illumination. These factors have the potential to greatly influence the performance of the object detection task [24]. To address this, we incorporate a domain classifier, which aims to classify the domains of the extracted image-level features and promote global alignment at the image level. The domain classifier is implemented as two simple convolutional layers. It takes the image-level features as input and produces a prediction to identify the feature domain. A BCE loss is employed for the domain classifier as follows: \[L_{img}=-\sum_{i=1}^{N}[G_{i}\text{log}P_{i}+(1-G_{i})\text{log}(1-P_{i})], \tag{1}\] where \(i\in\{1,...,N\}\) represents the \(N\) training images, the ground truth domain label of the \(i\)-th training image is denoted as \(G_{i}\in\{1,0\}\), where \(G_{i}\) takes a value of 1 or 0 to represent the source and target domains, respectively. The prediction of the domain classifier for the \(i\)-th training image is denoted as \(P_{i}\). ### _Object-level based Adaptation_ Besides the global differences at the image level, objects within different domains may exhibit variations in terms of appearance, size, color, and other characteristics. To address this, Each region proposal generated by the ROI Pooling layer is considered as a potential object of interest. After obtaining the object-level domain representation via ROI pooling, we introduce an object-level domain classifier to discern the origin of the local features, which is implemented by three fully-connected layers. The objective of the object-level domain classifier is to align the distribution of object-level features across different domains. Similar to the image-level domain classifier, we utilize the BCE loss to train our object-level domain classifier: \[L_{obj}=-\sum_{i=1}^{N}\sum_{j=1}^{M}[G_{i,j}\text{log}P_{i,j}+(1-G_{i,j}) \text{log}(1-P_{i,j})], \tag{2}\] where \(j\in\{1,...,M\}\) is the \(j\)-th predicted object in the \(i\)-th image, \(P_{i,j}\) is the prediction of the object-level domain classifier for the \(j\)-th region proposal in the \(i\)-th image, The corresponding binary ground-truth label for the source and target domains is denoted as \(G_{i,j}\). ### _Adversarial Gradient Reversal Layer_ In this section, we will begin by providing a brief overview of the original Gradient Reversal Layer (GRL), which serves as the foundation for our proposed Adversarial Gradient Reversal Layer (AdvGRL). The original GRL was initially developed for unsupervised domain adaptation in image classification tasks [60]. During forward propagation, the GRL leaves the input unchanged. However, during back-propagation, the gradient is reversed by multiplying it by a negative scalar before propagating it to the preceding layers of the base network. 
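For reference, the two domain classifiers just described (Eqs. 1-2) might be implemented as sketched below before returning to the gradient reversal mechanism; the channel and feature dimensions are assumptions, and `BCEWithLogitsLoss` simply folds the sigmoid into the binary cross-entropy of Eqs. (1)-(2).

```python
import torch
import torch.nn as nn

class ImageLevelDomainClassifier(nn.Module):
    """Two convolutional layers predicting the domain of the image-level feature map."""
    def __init__(self, in_ch=1024, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, hidden, kernel_size=1), nn.ReLU(),
                                 nn.Conv2d(hidden, 1, kernel_size=1))

    def forward(self, feat):
        return self.net(feat)          # per-location domain logit

class ObjectLevelDomainClassifier(nn.Module):
    """Three fully connected layers predicting the domain of each ROI feature vector."""
    def __init__(self, in_dim=2048, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, roi_feat):
        return self.net(roi_feat)      # one domain logit per region proposal

bce = nn.BCEWithLogitsLoss()
# source images carry label 1, target images label 0, e.g.:
# loss_img = bce(img_clf(feat), torch.ones_like(img_clf(feat)))
```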
This reversal of gradients serves as a mechanism to confuse the domain classifier. In this way, by reversing the gradient during back-propagation, the GRL encourages the base network to learn domain-invariant features, enabling DA. The forward propagation of GRL is defined as: \[R_{\lambda}(\mathbf{v})=\mathbf{v}, \tag{3}\] where \(\mathbf{v}\) represents an input feature vector, and \(R_{\lambda}\) represents the forward function performed by GRL. Back-propagation of GRL is defined as: \[\frac{dR_{\lambda}}{d\mathbf{v}}=-\lambda\mathbf{I}, \tag{4}\] where \(\mathbf{I}\) is an identity matrix and \(-\lambda\) is a negative scalar. In the original GRL, a constant or varying value of \(-\lambda\) is utilized, which is determined by the training iterations, as described in [60]. However, this approach overlooks the fact that different training samples may exhibit varying levels of challenge during transfer learning. To address this limitation, this paper introduces a novel AdvGRL that incorporates adversarial hard example mining. This is achieved by replacing the parameter \(\lambda\) with a new parameter \(\lambda_{adv}\) in Eq. (4) of GRL, resulting in the proposed AdvGRL. Notably, the value of \(\lambda_{adv}\) is determined as follows: \[\lambda_{adv}=\begin{cases}\min(\frac{\lambda_{0}}{L_{c}},\beta),&L_{c}<\alpha\\ \lambda_{0},&\mathrm{otherwise},\end{cases} \tag{5}\] where \(L_{c}\) represents the loss of the domain classifier, \(\alpha\) is a hardness threshold used to determine the difficulty level of the training sample, and \(\beta\) is an overflow threshold implemented to prevent the generation of excessive gradients during back-propagation. In our experiments, we set \(\lambda_{0}=1\) as a fixed parameter. Specifically, when the domain classifier's loss \(L_{c}\) is smaller, it indicates that the training sample's domain can be more easily identified by the classifier. In this case, the features associated with this sample are not the desired domain-invariant features, making it a more difficult example for domain adaptation. The relationship between \(\lambda_{adv}\) and \(L_{c}\) is visualized in Fig. 3. Fig. 2 illustrates the utilization of the AdvGRL in both the image-level and object-level DA modules. ### _Domain-level Metric Learning based Regularization_ A common transfer learning approach in many existing DA methods is to prioritize the transfer of features from a source domain \(S\) to a target domain \(T\). As a result, they often overlook the potential advantages that a third related domain can offer. To explore these advantages, we introduce an auxiliary domain for domain-level metric regularization that complements the source domain \(S\). We leverage advanced data augmentation techniques to create this auxiliary domain \(A\), which is particularly useful in autonomous driving scenarios where training data needs to be synthesized for different weather conditions based on existing clear-weather data. As a result, in our proposed architecture (as shown in Fig. 2), the source, auxiliary, and target domain images are regarded as aligned images, ensuring the enforcement of domain-level metric constraints across these three distinct domains. The global image-level features of the \(i\)-th training image for the source (\(S\)), auxiliary (\(A\)), and target (\(T\)) domains are defined as \(F_{i}^{S}\), \(F_{i}^{A}\), and \(F_{i}^{T}\), respectively. 
Our goal is to reduce the domain gap between \(S\) and \(T\) and ensure that the feature metric distance between \(F_{i}^{S}\) and \(F_{i}^{T}\) is closer compared to the distance between \(F_{i}^{S}\) and \(F_{i}^{A}\). This can be expressed as: \[d(F_{i}^{S},F_{i}^{T})<d(F_{i}^{S},F_{i}^{A}), \tag{6}\] where the metric distance between the corresponding features is denoted as \(d(,.)\). To implement this constraint, we can use a triplet structure where \(F_{i}^{S}\), \(F_{i}^{T}\), and \(F_{i}^{A}\) are treated as the anchor, positive, and negative, respectively. Therefore, the image-level constraint in Eq. (6) can be equivalently expressed as minimizing the image-level triplet loss: \[L_{img}^{R}=\max(d(F_{i}^{S},F_{i}^{T})-d(F_{i}^{S},F_{i}^{A})+\delta,0), \tag{7}\] where the margin constraint is denoted as \(\delta\), and in our experiments, \(\delta\) is set as \(1.0\). Equivalently, the \(i\)-th training image's \(j\)-th object-level features of \(S\), \(A\), and \(T\) are defined as \(f_{i,j}^{S}\), \(f_{i,j}^{A}\), and \(f_{i,j}^{T}\) respectively. To apply our proposed metric regularization to the object-level features, we further minimize the object-level triplet loss: \[L_{obj}^{R}=\max(d(f_{i,j}^{S},f_{i,j}^{T})-d(f_{i,j}^{S},f_{i,j}^{A})+\delta, 0). \tag{8}\] ### _Training Loss_ The overall training loss of the proposed network consists of several individual components. It can be expressed as follows: \[L=\gamma*(L_{img}+L_{obj}+L_{img}^{R}+L_{obj}^{R})+L_{cls}+L_{reg}, \tag{9}\] where \(L_{cls}\) and \(L_{reg}\) represent the loss of classification and the loss of regression respectively. A weight parameter \(\gamma\) is introduced to balance the Faster R-CNN loss and the domain adaptation loss, which is set as \(0.1\). During the training phase, the network can be trained in an end-to-end manner utilizing a standard SGD algorithm. During the testing phase, the object detection can be performed using the Faster R-CNN with the trained adapted weights. ### _General Domain Adaptive Detection with Proposed Method_ Our proposed method is designed to be versatile and adaptable to various domain adaptive object detection scenarios. Specifically, when dealing with scenarios where the target domain images are generated from the source domain with pixel-to-pixel correspondence, such as the Clear Cityscapes\(\longrightarrow\)Foggy Cityscapes, our method can be directly applied without any modifications. To utilize our method with unaligned datasets in real-world scenarios, where the target and source domains lack strict correspondence, such as the Cityscapes\(\longrightarrow\)KITTI, we can remove the \(L_{obj}^{R}\) loss, which eliminates the requirement for object alignment during training. This allows our method to be applied directly without the need for object-level alignment. ## IV Experiments ### _Benchmark_ **Cityscapes**[22]: It is a widely used computer vision dataset that focuses on urban street scenes. There are 2,975 training set and 500 validation set from 27 different cities. The dataset includes annotations for \(8\) different categories. All images in the Cityscapes dataset are 3-channel \(1024\times 2048\) images. **Foggy Cityscapes**[23]: It is a public benchmark dataset created by simulating different intensity levels of fog on the original Cityscapes images. This dataset uses a depth map and a physical model [23] to generate three levels of simulated fog. 
**Rainy Cityscapes**: We synthesize a rainy-weather dataset named as Rainy Cityscapes in this paper from the original Cityscapes dataset. Specifically, the training set of 3,475 images and the validation set of 500 images from Cityscapes are used to create the Rainy Cityscapes dataset by utilizing a novel data augmentation method called RainMix [61, 62]. To generate rainy Cityscapes images, we utilize a combination of techniques. First, we randomly sample a rain map from a publicly dataset of real rain streaks [63]. Next, we apply random transformations to the rain map using the RainMix technique. These transformations include rotation, zooming, translation, and shearing, which are randomly sampled and combined. Lastly, the rain maps after transformation are merged with the original source domain images, resulting in the generation of rainy Cityscapes images. An example illustrating this process can be seen in Figure 4. **Intensity levels of fog/rain**: For the Foggy Cityscapes and Rainy Cityscapes datasets, their number of images, resolution, and annotations are identical to those of Clear Cityscapes dataset. Based on physical model of [23], the different intensity levels of fog could be synthesized on the Foggy Cityscapes dataset. After obtaining the rain maps by RainMix [61], the intensity of rain maps could be further processed with different erosion levels. In these two ways, the different fog and rain levels (small, medium, large) can be synthesized, as shown in Fig. 5. Following the setting [24, 27, 52], the images with the highest intensity level of fog/rain are selected as the target domain for model training. The models trained with highest intensity level will be then used to test the performance on the validation sets of different fog/rain intensity levels (small, medium, large). ### _Experimental Setting_ **Dataset setting**: We conducted two main experiments in this paper: 1) Clear to Foggy Adaptation, denoted as Clear Cityscapes\(\rightarrow\)Foggy Cityscapes, the labeled training set of Clear Cityscapes [22] and the unlabeled training set of Foggy Cityscapes [23] are used as the source and target domains during training, respectively. Subsequently, the trained model was evaluated Foggy Cityscapes validation set to report the performance. Rainy Cityscapes training set is used as the Auxiliary Domain \(A\) in this Clear to Foggy Adaptation experiment. 2) Clear to Rainy Adaptation, denoted as the Clear Cityscapes\(\rightarrow\)Rainy Cityscapes, where the labeled training set of Clear Cityscapes [22] and the unlabeled training set of Rainy Cityscapes are used as the source and target domains during training, respectively. Then the trained model was evaluated on Rainy Cityscapes validation set to report the performance. Foggy Cityscapes training set is used as the Auxiliary Domain \(A\) in this Clear to Rainy Adaptation experiment. Additionally, we analyzed the transfer learning performance on different intensity levels of fog and rain (small, medium, and large). **Training setting**: We utilize ResNet-50 as the backbone for the Faster R-CNN [20]. Following in [20, 24], during training, We utilize back-propagation and stochastic gradient descent (SGD) to optimize all the deep learning methods in our approach. The initial learning rate of \(0.01\) for \(50,000\) iterations is used in all model training. Afterward, the learning rate is reduced to \(0.001\) and training continues for an additional \(20,000\) iterations. 
Weight decay is set as \(0.0005\) and momentum is set as \(0.9\) for all experiments. Each training batch consists of three images from source, target, and auxiliary domains respectively. For comparison purposes, we set the \(\lambda\) value in the original GRL (Equation (4)) to \(1\). In the AdvGRL (Equation (5)), the hardness threshold \(\alpha\) is set to \(0.63\), which is computed by averaging the parameters in Equation (1) with setting (\(P_{i}=0.7,G_{i}=1\) and \(P_{i}=0.3,G_{i}=0\)). **Evaluation metrics**: We calculate the Average Precision for each category and the mean Average Precision across all categories using an Intersection over Union threshold of \(0.5\). ### _Adaptation from Clear to Foggy_ Table I presents the results of our experiments on weather adaptation from clear to foggy. In comparison to other DA methods, our proposed DA method achieves the highest performance on Foggy Cityscape, with a mAP of \(42.3\%\), which outperforms the second-best method SCAN [69] by a margin of \(0.2\%\) in terms of mAP improvement. The proposed method effectively reduces the domain gap across various categories, _e.g._, bus got \(51.2\%\) and bicycle got \(39.1\%\) as the second best performance, and train got \(48.7\%\) as the best performance in AP, which is the highlight in Table I. While UMT got \(56.6\%\) in bus and \(34.1\%\) in truck, SAPN got \(40.7\%\) in bicycle, MeGA-CDA got \(49.0\%\) in rider, our proposed DA method exhibits similar performance across them with only minor differences. However, our proposed DA method achieves the highest overall mAP detection performance on Foggy Cityscapes among the recent DA methods. ### _Adaptation from Clear to Rainy_ In the Clear to Rainy adaptation, the only difference during training is the exchange of domains, where the unlabelled Rainy Cityscapes training set serves as the target domain, while the Foggy Cityscapes training set is used as the auxiliary domain. Table II presents the results of domain adaptation from clear to rainy weather. Due to the page limit, we choose the methods with the public available source code which perform very well in the Clear to Foggy Adaptation experiment as the comparison methods in this Clear to Rainy Adaptation experiment, _i.e._, DA-Faster [24], MS-DAYOLO [73]. Similar to the Clear to Foggy Adaptation, our proposed domain adaptation method got the best overall mAP (45.07%) detection performance on Rainy Cityscapes compared to the comparison methods. ### _Ablation Study of Components_ We conduct an analysis of the individual proposed components of our DA object detection method. The experiments are conducted on the Cityscapes\(\rightarrow\)Foggy Cityscapes Fig. 4: Illustration of synthesizing Rainy Cityscapes from the Cityscapes data: (a) the original image from Cityscapes [22], (b) rain map generated by RainMix [61], (c) synthesized rainy image. Fig. 5: Sample visualization results for the Foggy Cityscapes and Rainy Cityscapes validation sets with different intensity levels. and Cityscapes\(\rightarrow\)Rainy Cityscapes tasks, using the ResNet-50 backbone. The results of the ablation study are presented in Table III. In the first row of the table, image-level and object-level adaptation modules are labels as 'img' and 'obj', respectively. 'AdvGRL' and 'Reg' indicate the proposed Adversarial GRL and domain-level metric regularization, respectively. The 'img+obj+GRL' configuration represents the _Baseline_ model used in our experiments. 
We also evaluate two additional configurations: 'img+obj+AdvGRL' and 'img+obj+AdvGRL+Reg'. Additionally, we include the 'Source only' configuration, which refers to the Faster R-CNN model trained solely on labeled source domain images without any DA methods. The ablation study presented in Table III provides clear evidence of the positive impact of each proposed component in the DA method for both foggy and rainy weather scenarios. Furthermore, we provide qualitative visualization of the object detection results in Fig. 6. ### _Adaptation of Different-intensity Fog and Rain_ Moreover, both simulated Foggy and Rainy Cityscapes datasets contain three levels of intensity, namely Small, Medium, and Large as depicted in Fig. 5. Following the previous works [24, 27, 52], we only utilize the Large intensity level as the target domain during training for both fog and rain. After training, the trained models on the validation set of Rainy Cityscapes and Foggy Cityscapes with images of different intensity levels are evaluated. For the three intensity levels of fog and rain, as shown in Table IV, the 'Baseline' model after domain adaptation could get better detection performance compared to the 'Source only' without DA, while the Proposed Method could continue to further improve the performance compared to the 'Baseline' method. Alternatively, our proposed DA method could significantly mitigate the impact of fog and rain under Small, Medium, Large intensity levels. ### _Adaptation of Cross Cameras_ We conducted an experiment specifically targeting the real-world cross-camera adaptation for different autonomous driving datasets with varying camera settings. We applied our DA method for cross-camera adaptation _i.e._, Cityscapes dataset (source) \(\rightarrow\) KITTI dataset (target). To accommodate the unaligned nature of the datasets, we simply removed the \(L^{R}_{obj}\) term (Eq. 8) during the adaptation process. Following the previous work [24], we used the KITTI training set, consisting of 7,481 images as the target domain. Specifically, we evaluated the AP of the Car category on the target domain. Table V demonstrated the outstanding performance of our proposed DA method compared to recent comparison methods. ### _Feature Distribution Visualization via Adaptation_ To investigate the capability of our proposed DA method to overcome the domain shift (clear weather \(\rightarrow\) rainy/foggy weather), we visualize these domain feature distributions by utilizing t-SNE [74] before and after the domain adaptation in foggy and rainy weather. Fig. 7 obviously presents that our proposed DA method could align the feature distributions to bridge the domain gap (clear weather \(\rightarrow\) rainy/foggy weather). ### _Experiments on Different Parameters_ We analyze the detection performance on different hyper-parameters in Section III, _.i.e_, Eq.9 and Eq.5 for the Cityscapes\(\rightarrow\)Foggy Cityscapes case, and several hyper-parameters were investigated. 
First of all, for the loss balance weight \(\gamma\) in Eq. 9, the detection performance \(mAP_{\gamma}\) is \(mAP_{0.1}=42.34\), \(mAP_{0.01}=41.30\), and \(mAP_{0.001}=41.19\). Then, in the AdvGRL (Eq. 5), the \((\alpha,\beta)\) pair, where \(\beta\) represents the overflow threshold and \(\alpha\) represents the hardness threshold, is set as (a) \((0.63,30)\), (b) \((0.63,10)\), (c) \((0.54,30)\), and (d) \((0.54,10)\), where \(\alpha=0.54\) is obtained by averaging the values of Eq. 1 when \(P_{i}=0.9,G_{i}=1\) and \(P_{i}=0.1,G_{i}=0\). The corresponding detection mAP(s) are (a) 42.34, (b) 38.83, (c) 39.38, and (d) 40.47, respectively. ### _Visualization of Hard Examples_ By utilizing \(\lambda_{adv}\) of the proposed AdvGRL, we can identify hard examples during the domain adaptation process. Fig. 8 illustrates some of these hard examples. We compute the \(L_{1}\) distance between the features \(F_{i}^{S}\) and \(F_{i}^{T}\) obtained from the backbone of Fig. 2. This distance is used as an approximation of the example's hardness (\(ah\)), where a smaller \(ah\) indicates a harder example for transfer learning. Intuitively, when the fog covers a larger number of objects, as illustrated by the bounding-box regions in Fig. 8, the task becomes more challenging. 
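The following sketch shows one way the AdvGRL of Eq. (5), together with the feature-distance hardness estimate \(ah\) used above, could be realized in PyTorch. The thresholds \(\lambda_{0}=1\), \(\alpha=0.63\), and \(\beta=30\) follow the text; feeding the previous iteration's domain-classifier loss into \(\lambda_{adv}\), the small constant guarding the division, and the Euclidean distance in the triplet regularization are implementation assumptions.

```python
import torch

class AdvGRLFunction(torch.autograd.Function):
    """Identity in the forward pass (Eq. 3); reversed, scaled gradient in the backward pass (Eq. 4)."""

    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_adv * grad_output, None

def adv_lambda(domain_loss, lambda0=1.0, alpha=0.63, beta=30.0):
    """lambda_adv = min(lambda0 / L_c, beta) if L_c < alpha else lambda0 (Eq. 5)."""
    l_c = float(domain_loss.detach())
    return min(lambda0 / max(l_c, 1e-8), beta) if l_c < alpha else lambda0

def approx_hardness(feat_src, feat_tgt):
    """L1 distance between source and target image-level features; smaller ah => harder example."""
    return (feat_src - feat_tgt).abs().sum().item()

# Domain-level metric regularization (Eqs. 7-8): anchor = source, positive = target,
# negative = auxiliary features, with margin delta = 1.0.
triplet_reg = torch.nn.TripletMarginLoss(margin=1.0)
```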
### _Experiments on Pre-trained Models and Domain Randomization_ **Pre-trained Models:** In the experiment of Cityscapes\(\rightarrow\)Foggy Cityscapes, our proposed DA method utilizes a pre-trained Faster R-CNN as an initialization and achieves a detection mean Average Precision (mAP) of 41.3, compared to a mAP of 42.3 achieved when our method is initialized without the pre-trained deep learning model. **Domain Randomization:** In the Cityscapes\(\rightarrow\)Foggy Cityscapes experiment, we explore two approaches for domain randomization to reduce the domain shift between the source and target domains. 1) The first approach involves regular data augmentation techniques such as color change, blurring, and salt-and-pepper noise to construct the auxiliary domain. When our method is trained using this auxiliary domain, the detection mAP achieved is 38.7, compared to our method's performance of 42.3 when using the auxiliary domain dataset, _i.e._, the rain-synthesized Cityscapes dataset. 2) The second approach utilizes CycleGAN [79] to facilitate the transfer of image style between the Cityscapes training set and the Foggy Cityscapes training set. We trained a Faster R-CNN with these generated images, which achieved 32.8 mAP. These findings emphasize the limitations of commonly employed domain randomization techniques in effectively addressing the DA challenge. Fig. 8: Visualization of hard examples mined by AdvGRL. Two mined hard examples and one easy example are shown from left to right. Fig. 7: Feature distribution visualization by t-SNE [74] before and after domain adaptation. Clear to Foggy Adaptation: (a) original distribution before adaptation, (b) aligned distribution after the proposed adaptation. Clear to Rainy Adaptation: (c) original distribution before adaptation, (d) aligned distribution after the proposed adaptation. It is recommended to view this figure in color. ## V Conclusions In this paper, a novel domain adaptive object detection framework is presented, which is specifically designed for intelligent vehicle perception in foggy and rainy weather conditions. The framework incorporates both image-level and object-level adaptations to address the domain shift in global image style and local object appearance. An adversarial GRL is introduced for adversarial mining of hard examples during domain adaptation. Additionally, a domain-level metric regularization is proposed to enforce the feature metric distance between the source, target, and auxiliary domains. The proposed method is evaluated through transfer learning experiments from Cityscapes to Foggy Cityscapes, Rainy Cityscapes, and KITTI. The experimental results demonstrate the effectiveness of the proposed DA method in improving object detection performance. This research contributes significantly to enhancing intelligent vehicle perception in challenging foggy and rainy weather scenarios.
2305.08311
Dissipation induced Liouville-Majorana modes in open quantum system
In open systems, topological edge states quickly lose coherence and cannot be used in topological quantum computation and quantum memory. Here we show that for dissipative quantum spin (or fermionic) systems, topologically non-Hermitian Liouville-Majorana edge modes (LMEMs) can survive in the extended Liouville-Fock space, which is beyond the scope of topological modes defined in usual Hermitian system. By vectorizing the Lindblad equation of the system using the third quantization, we prove that it reduces to a series of non-Hermitian Kitaev chains in the extended Liouville-Fock space, and topologically LMEMs are protected due to its internal symmetry. Furthermore, we provide an explicit method for detecting these modes and prove that the purity of the density matrix characterizes the long-range correlation of LMEMs. The work opens new avenues of searching for novel stable topological states in open systems induced by quantum jumps.
Xing-Shuo Xu, Xiang-Fa Zhou, Guang-Can Guo, Zheng-Wei Zhou
2023-05-15T02:38:57Z
http://arxiv.org/abs/2305.08311v1
# Dissipation induced Liouville-Majorana modes in open quantum system ###### Abstract In open systems, topological edge states quickly lose coherence and cannot be used in topological quantum computation and quantum memory. Here we show that for dissipative quantum spin (or fermionic) systems, topologically non-Hermitian Liouville-Majorana edge modes (LMEMs) can survive in the extended Liouville-Fock space, which is beyond the scope of topological modes defined in usual Hermitian system. By vectorizing the Lindblad equation of the system using the third quantization, we prove that it reduces to a series of non-Hermitian Kitaev chains in the extended Liouville-Fock space, and topologically LMEMs are protected due to its internal symmetry. Furthermore, we provide an explicit method for detecting these modes and prove that the purity of the density matrix characterizes the long-range correlation of LMEMs. The work opens new avenues of searching for novel stable topological states in open systems induced by quantum jumps. _Introduction._ The realization and manipulation of topological quantum states in various systems have received sustained attention in many different fields of physics[1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Since topological phases possess nonlocal orders robust to local perturbations, this intrinsic stability makes them ideal platforms for topological quantum computation and quantum memory. Meanwhile, the system's novelty also enables the construction of various quantum devices that traditional materials can not cover[11; 12; 13]. On the other hand, topological phases are inevitably coupled to their surroundings in natural systems. The resulting quantum dissipation can destroy these phases and spoil the signals induced by their topological features[14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Therefore, searching for novel robust topological effects, even in dissipation, becomes essential to implement various topological phases of matter and quantum computing tasks within current systems [25; 26; 27; 28]. Topological physics in non-Hermitian dissipative systems has also been widely discussed recently[29; 30; 31; 32; 33; 34; 35; 36]. However, in most discussions, dissipation is characterized only by introducing an effective non-Hermitian Hamiltonian. The influence and back action of detections and quantum jumps on the system's dynamics are only less considered. For a dissipative system under the Markovian approximation, the general dynamics are governed by Lindblad equations[37; 38; 39; 40; 41; 42; 43; 44; 45; 46], where both the dissipators and the influence of quantum jumps are explicitly considered. Although topological Majorana modes can be stationary states of the system by carefully designing the dissipative Lindblad operators, in general cases, Majorana modes are unstable in the presence of dissipations[47; 48; 26; 45]. It is thus natural to ask: what topological properties will be stable in dissipative systems? Answering the question is a highly non-trivial task, as currently, solving the master equation for dissipative many-body systems is still a challenging task analytically and numerically[49; 50; 51]. Therefore, finding exactly solvable dissipative models with stable topological characteristics becomes a key ingredient in understanding non-trivial topological effects induced by dissipations, which is also less considered in current studies. 
In this work, we provide an analytically solvable model described by the Lindblad equation with site-dependent couplings and dissipations. Formally, this is achieved by vectorizing the density matrix, and mapping the Lindblad equation into a Schrödinger-like equation in the extended Liouville-Fock space with an effective non-Hermitian Hamiltonian [40; 41; 42; 48]. Therefore, topological properties discussed for non-Hermitian Hamiltonians can also be transplanted to open quantum systems described by Lindblad equations. The main results can be summarized as follows. 1. We prove the model maps to a series of non-Hermitian Kitaev chains in the extended Liouville-Fock space. Moreover, for open boundaries, the system supports topological Liouville-Majorana edge modes (LMEM) beyond the scope of the usual Hermitian Majorana modes discussed in a closed system. 2. The proposed LMEMs are robust to symmetry-preserving disturbances and can be verified by fixed ratios of physical observables under time evolution. The correlations in LMEMs can also be distilled by quadratic forms of physical observables [52; 53; 54; 55; 56; 57]. 3. Our work also highlights the importance of quantum jumps for implementing novel topological states in dissipative systems. _The model and non-Hermitian Liouvillian._ We start by considering the Lindblad equation of the spin system subject to local dissipations \[i\dot{\rho}=[H,\rho]+i\sum_{j=1}^{N}(L_{j}\rho L_{j}^{\dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}), \tag{1}\] where the Hamiltonian and Lindblad operators read \[H=\sum_{j}^{N-1}J_{j}\sigma_{j}^{x}\sigma_{j+1}^{x},\ \ L_{j}=\sqrt{\gamma_{j}}\sigma_{j}^{z}. \tag{2}\] Here \(J_{j}\) is the coupling strength between nearest-neighboring spins, and \(\gamma_{j}\) is the local dephasing rate. We note that in the current system, all nontrivial dissipative dynamics is attributed to the presence of the quantum jump terms \(L_{j}\rho L_{j}^{\dagger}\), as the relevant non-Hermitian Hamiltonian contains only homogeneous dissipations due to \(L_{j}^{\dagger}L_{j}=L_{j}L_{j}^{\dagger}=\gamma_{j}I_{j}\). Without dissipation, the model can be solved by introducing the celebrated Jordan-Wigner transformation as \(\sigma_{j}^{x}=\prod_{k<j}(-iw_{2k-1}w_{2k})w_{2j-1},\sigma_{j}^{y}=\prod_{k<j}(-iw_{2k-1}w_{2k})w_{2j}\). Here \(w_{j}\) is the usual single-site Majorana fermion (MF) and satisfies \(\{w_{i},w_{j}\}=2\delta_{ij}\). The Hamiltonian can be written as \(H=\sum_{j}J_{j}iw_{2j}w_{2j+1}\), where the two isolated edge MFs \(w_{1}\) and \(w_{2N}\) are decoupled from \(H\) as \([H,w_{1}]=[H,w_{2N}]=0\), and can be combined to form a Dirac fermion. Since \(w_{1}\) and \(w_{2N}\) are spatially separated, this fermionic excitation is nonlocal and robust to local perturbations, which can then be used as an ideal platform to encode a qubit for topological quantum computation. Throughout the article, we alternately use the spin representation and the Majorana fermion representation to discuss the problem. We also remind readers that although the specific physical content under these two representations differs greatly (topological edge states can only be discussed in the fermion representation, while the spin representation has no corresponding topological states), mathematically, they can be transformed into each other through Jordan-Wigner transformations. When the onsite dissipation (\(L_{j}=-iw_{2j-1}w_{2j}\)) is introduced, the aforementioned edge modes are no longer stable. 
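As a numerical cross-check of Eqs. (1)-(2) (and of the spectral properties exploited below), the following sketch builds the Lindbladian of a small chain by brute-force vectorization of the density matrix. This row-major superoperator construction is only an illustration and is distinct from the third-quantization mapping used in the text; in this convention \(\dot{\rho}=\mathcal{L}\rho\), so the stability condition \(\mathrm{Im}(\lambda)\leq 0\) of the main text appears as \(\mathrm{Re}(\lambda)\leq 0\). The values \(N=3\), \(J=1\), \(\gamma=0.5\) are arbitrary.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, N):
    """Embed a single-site operator at site j (0-based) of an N-spin chain."""
    ops = [I2] * N
    ops[j] = op
    return reduce(np.kron, ops)

def lindbladian(N=3, J=1.0, gamma=0.5):
    """Matrix of the Lindblad generator acting on row-major vectorized density matrices."""
    d = 2 ** N
    H = sum(J * site_op(sx, j, N) @ site_op(sx, j + 1, N) for j in range(N - 1))
    Ls = [np.sqrt(gamma) * site_op(sz, j, N) for j in range(N)]
    Id = np.eye(d, dtype=complex)
    L_super = -1j * (np.kron(H, Id) - np.kron(Id, H.T))          # -i[H, rho]
    for L in Ls:
        LdL = L.conj().T @ L
        L_super += np.kron(L, L.conj()) \
                 - 0.5 * (np.kron(LdL, Id) + np.kron(Id, LdL.T))  # dissipator
    return L_super

spectrum = np.linalg.eigvals(lindbladian())
print("largest real part (should be ~0):", spectrum.real.max())
print("number of (near-)zero eigenvalues:", int(np.sum(np.abs(spectrum) < 1e-9)))
```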
Since the density matrix \(\rho\) can be written as the combinations of \(4^{N}\) Majorana operators \(w^{\{a\}}:=w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\) with \(a_{j}=(0,1)\), in order to find the solution of the model in this case, we employ the third quantization formalism proposed by Prosen [40; 41; 42; 48], and vectorize the density matrix \(\rho\rightarrow|\rho\rangle\rangle\) by introducing \(|w^{\{a\}}\rangle\rangle\) as the basis vectors of the extended Liouville-Fock space. The master equation can then be recast into (see Appendix A for details) a Schrödinger-like equation \(i|\dot{\rho}\rangle\rangle=\mathcal{L}|\rho\rangle\rangle\), with the corresponding non-Hermitian Liouvillian \[\mathcal{L} = -2i\sum_{j=1}^{N-1}J_{j}[c_{2j}^{\dagger}c_{2j+1}+c_{2j}c_{2j+1}^{\dagger}] \tag{3}\] \[+i\sum_{j=1}^{N}\gamma_{j}[(2n_{2j-1}-1)(2n_{2j}-1)-1].\] The above equation represents a dissipative spinless Hubbard model in the extended Liouville-Fock space with interlaced hoppings and interactions. Compared with the Hermitian case, the size of the lattice has been doubled. Here \(c_{j}\) and \(c_{j}^{\dagger}\) are re-defined fermion operators in Liouville-Fock space, and satisfy the relations \(\{c_{i},c_{j}^{\dagger}\}=\delta_{ij}\) and \(\{c_{i},c_{j}\}=\{c_{i}^{\dagger},c_{j}^{\dagger}\}=0\). \(n_{j}=c_{j}^{\dagger}c_{j}\) is the number operator of the fermion on lattice site \(j\). The explicit action of \(c_{i}\) and \(c_{i}^{\dagger}\) on the density matrix reads \[(c_{2i-1}+c_{2i-1}^{\dagger})|\rho\rangle\rangle\rightarrow\prod_{j<i}\sigma_{j}^{z}\sigma_{i}^{x}\rho, \tag{4}\] \[(c_{2i}+c_{2i}^{\dagger})|\rho\rangle\rangle\rightarrow\prod_{j<i}\sigma_{j}^{z}\sigma_{i}^{y}\rho. \tag{5}\] The presence of local dissipations leads to imaginary nearest-neighboring interactions \(i\gamma_{j}\) between the nearest lattice pairs \((2j-1,2j)\). Without loss of generality, we assume \(J_{j}=J\) and \(\gamma_{j}=\gamma(\forall j)\) in the following. The non-Hermitian Liouvillian \(\mathcal{L}\) has internal symmetry, which allows us to simplify the model significantly. It is easy to check that for each \(P_{j}=(2n_{2j}-1)(2n_{2j+1}-1)\) with \(j=(1,2,\cdots,N-1)\), we have \([P_{j},\mathcal{L}]=0\) and \([P_{j},P_{k}]=0\). Therefore the right eigenvectors of \(\mathcal{L}\) can be chosen as the common eigenvectors of all \(P_{j}\). Since \(P_{j}^{2}=I\), the corresponding eigenvalues \(p_{j}\) can only be \(+1\) or \(-1\). The whole Liouville-Fock space can then be divided into different subspaces labeled by the list \(\{p\}=\{p_{1},p_{2},\cdots,p_{N-1}\}\) with \((N-1)\)-entries. Since there are \(2^{N-1}\) different lists, the dimension of each subspace reads \(4^{N}/2^{N-1}=2\times 2^{N}\). Therefore, solving the eigensystem of \(\mathcal{L}\) is reduced to finding all the eigenvectors of \(\mathcal{L}\) within each subspace. Figure 1: Diagrammatic representation of Majorana fermion (MF) and Liouville-Majorana fermion (LMF) based on the third quantization. In \((a)\rightarrow(b)\), the Liouvillian of the system is obtained in the extended Liouville-Fock space, where two isolated Hermitian MFs correspond to four isolated LMFs. Since two of these LMFs couple to the bulk modes due to dissipations with \(\gamma\neq 0\), there are only two isolated LMFs \(\kappa_{1},\kappa_{4N}\) in the system, as shown in \((b)\rightarrow(c)\). 
An LMF can be viewed as a _half-MF_ after mapping back to the original Hilbert space (\((c)\rightarrow(d)\)). _Effective non-Hermitian spin or Kitaev chains in Liouville-Fock space._ To illustrate the hidden topological features of the system, we employ two cascaded Jordan-Wigner transformations again (see Appendix A for details), and rewrite the Liouvillian \(\mathcal{L}\) of the system as \[\mathcal{L}=\sum_{j}^{N-1}iJ(P_{j}-1)\kappa_{4j-1}\kappa_{4j+2}+i\gamma\sum_{j}^{N}(i\kappa_{4j-2}\kappa_{4j-1}-1). \tag{6}\] Here \(\{\kappa_{k}\}\) represents another newly defined set of \(4N\) Liouville-Majorana fermions (LMFs) in Liouville-Fock space with \(k=1,\cdots,4N\). The specific dependencies of \(\kappa_{k}\) on \(c_{j}\) are tedious and will not be listed here (see Appendixes A and B for details). Therefore, within each subblock defined by \(P_{j}=i\kappa_{4j}\kappa_{4j+1}\), \(\mathcal{L}\) takes the form of an effective non-Hermitian Kitaev chain with site-dependent couplings \(J(p_{j}-1)\) (\(2J\) or \(0\)) and dissipation rate \(i\gamma\). Equation (6) represents one of the main results of the current work. Although diagonalizing \(\mathcal{L}_{p}\) analytically for given \(\{p\}\) is still difficult, the effective coupling \(J_{j}(p_{j}-1)\) vanishes when \(p_{j}=1\). This means that the whole chain is broken at these sites. Solving the model is then reduced to the diagonalization of \(\mathcal{L}_{p}\) within each subchain, which thus greatly simplifies the calculation. In particular, in the subspace defined by \(p_{j}=1\) for \(1\leq j\leq N-1\), the effective Liouvillian is recast into \(\mathcal{L}_{p}=\sum_{j=1}^{N}i\gamma_{j}(i\kappa_{4j-2}\kappa_{4j-1}-1)\), which describes a series of isolated dissipatively coupled pairs of Liouville-Majorana operators. The stationary states of the whole system can also be found in this subspace satisfying \(i\kappa_{4j-2}\kappa_{4j-1}|\rho_{s}\rangle\rangle=|\rho_{s}\rangle\rangle\), whose general form can be written as \(\rho_{s}=(I\pm\mathcal{M})/2^{N}\) with \(\mathcal{M}=(-1)^{N}\prod_{j=1}^{N}\sigma_{j}^{z}\). Since the two edge modes \(\kappa_{1}\) and \(\kappa_{4N}\) decouple from \(\mathcal{L}\), they can be combined into a Dirac fermion \(d_{e}=(\kappa_{1}+i\kappa_{4N})/2\) with local Fock basis \(\{|0\rangle\rangle,|1\rangle\rangle\}\) spanning \(\mathcal{H}_{e}\), which allows us to write \(\mathcal{H}_{\mathcal{L}}\) as the product of two subspaces \(\mathcal{H}_{\mathcal{L}}^{\prime}\otimes\mathcal{H}_{e}\), where \(\mathcal{H}_{\mathcal{L}}^{\prime}\) denotes the Fock subspace expanded by the other Liouville-Majorana modes \(\kappa_{j}\) with \(j=2,\cdots,(4N-1)\). Therefore, an initial product state (see Appendixes C and D for the detailed constructions) \[|\rho(0)\rangle\rangle=|\rho^{\prime}\rangle\rangle\otimes(a|1\rangle\rangle+b|0\rangle\rangle) \tag{7}\] in the Liouville-Fock space remains unentangled during the evolution as \(|\rho(t)\rangle\rangle=[\exp(-i\mathcal{L}t)|\rho^{\prime}(0)\rangle\rangle]\otimes(a|1\rangle\rangle+b|0\rangle\rangle)\). We note that the LMEMs discussed here are very different from the conventional Majorana modes in the Hermitian Kitaev chain. Specifically, LMEMs are defined in the extended Liouville-Fock space, while the conventional Majorana edge modes are defined instead in the original Hilbert space. This ensures that LMEMs can survive in the long-time limit, while the usual Hermitian Majorana modes are unstable and decay rapidly in the presence of dissipations. Meanwhile, in a Hermitian system, the presence of topological Majorana modes enables us to define a 2-dimensional Hilbert space, where both qubit pure and mixed states can be well supported. However, in a dissipative system, although the presence of LMEMs also enables the definition of a Hilbert space in Liouville-Fock space, this does not indicate the existence of a well-defined qubit subspace in the original Hilbert space defined by \(H\). 
Therefore, a general LMEM can only be described as a mixed state. This enables the exploration of nontrivial topological features in dissipative systems based on mixed states. Finally, the correlation of LMEMs defined in the Liouville-Fock space does not correspond to a measurable observable directly, as \[\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle=\text{tr}(\rho\mathcal{M}\sigma_{1}^{x}\sigma_{N}^{x}\rho\sigma_{1}^{x}\sigma_{N}^{x}). \tag{8}\] This correlation can always be expressed as a quadratic form of appropriately chosen observables, as will be shown in later discussions. _Detection of topologically protected LMEMs._ The presence of LMEMs can be easily manifested by considering an initial product state \(|\rho(0)\rangle\rangle\) shown in Eq.(7). To show this novel feature, we can choose two Hermitian operators \(\{X_{1},X_{2}\}\) such that both \(|X_{1}\rangle\rangle\) and \(|X_{2}\rangle\rangle\) are product vectors in Liouville-Fock space, and satisfy \(|X_{1}\rangle\rangle=|X^{\prime}\rangle\rangle|\phi_{1}\rangle\rangle\) and \(|X_{2}\rangle\rangle=|X^{\prime}\rangle\rangle|\phi_{2}\rangle\rangle\) with \(|\phi_{i}\rangle\rangle\) the corresponding state vectors in \(\mathcal{H}_{e}\). In the Appendixes, we have provided the explicit method of constructing all these operators \(\{\rho,X_{1},X_{2}\}\) in the original spin basis. Then using the identity \[\frac{\langle X_{1}\rangle}{\langle X_{2}\rangle}=\frac{\langle\langle X_{1}|\rho\rangle\rangle}{\langle\langle X_{2}|\rho\rangle\rangle}=\frac{\langle\langle\phi_{1}|(a|1\rangle\rangle+b|0\rangle\rangle)}{\langle\langle\phi_{2}|(a|1\rangle\rangle+b|0\rangle\rangle)}, \tag{9}\] we conclude that the ratio \(\langle X_{1}\rangle/\langle X_{2}\rangle\) is time independent, and determined completely by the edge modes. However, if the edge modes and the bulk modes are coupled, or the initial state is entangled in Liouville-Fock space, \(\langle X_{1}\rangle/\langle X_{2}\rangle\) can be time-dependent and tends to a stable value only in the long-time limit. Figure 2: The reduced non-Hermitian Kitaev chain within the subspace defined by \(\{p_{1},p_{2},\cdots,p_{N-1}\}\). For a specific given \(\{p\}\), this chain is broken at the lattice sites satisfying \(p_{j}=1\), and becomes an assembly of subchains with shorter length. Figure 3 shows the evolution of the ratio defined in Eq.(9) for different initial states. For the initial bulk-edge product state \(\rho_{0}=[I+\sum_{j=2}^{N-1}0.2(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z})](I+0.5\mathcal{M})/2^{N}\) and even \(N\), the two observables can be chosen as \(X_{1}=\sum_{j=2}^{N-1}(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z})\) and \(X_{2}=X_{1}\mathcal{M}\). The numerical calculation shows that \(\langle X_{1}\rangle/\langle X_{2}\rangle\) is fixed during the evolution, as depicted by the solid lines in Fig. 3(a). However, for the non-product initial state \(\rho_{0}^{\prime}=(I+\sum_{j=2}^{N-1}0.1\sigma_{j}^{x}\sigma_{j+1}^{x})(I+0.5\mathcal{M})/2^{N}+0.1\sigma_{1}^{z}(I-0.5\mathcal{M})/2^{N}\), the dashed lines in Fig. 3(a) show that the ratio \(\langle X_{1}\rangle/\langle X_{2}\rangle\) changes with \(t\), which indicates the entanglement of the edge and bulk modes in this case. The edge modes are topologically protected by the internal symmetry of the system. 
For any perturbations that can be characterized by introducing an additional Hamiltonian \(H^{\prime}\) or dissipators \(L_{j}^{\prime}\) into the Lindblad equation, the edge modes are decoupled from the bulk modes as long as the corresponding Lindbladians in Liouville-Fock space commute with \(\kappa_{1}\) and \(\kappa_{4N}\). Using the spin language, these operators can be chosen such that \[[H^{\prime}(L^{\prime}),\sigma_{1}^{x}]=[H^{\prime}(L^{\prime}),\sigma_{N}^{x}]=[H^{\prime}(L^{\prime}),\prod_{j=1}^{N}\sigma_{j}^{z}]=0. \tag{10}\] For comparative purposes, in Fig. 3(b), we also plot the evolution of \(\langle X_{1}\rangle/\langle X_{2}\rangle\) for the Lindblad equation with \[H=\sum_{j=1}^{N-1}J_{j}\sigma_{j}^{x}\sigma_{j+1}^{x}+\sum_{j=2}^{N-1}b_{j}\sigma_{j}^{z}+u\sum_{j=1}^{N}\sigma_{j}^{x}, \tag{11}\] the dissipators \(L_{j}=\gamma_{j}\sigma_{j}^{z}(j=1,\cdots,N)\) and \(L_{j}^{\prime}=\gamma_{j}^{\prime}\sigma_{j}^{x}\sigma_{j+1}^{x}(j=1,\cdots,N-1)\). For the initial bulk-edge product state \(\rho_{0}\), the calculation shows that \(\langle X_{1}\rangle/\langle X_{2}\rangle\) remains fixed for all coefficients \(\{J_{j},b_{j},\gamma_{j},\gamma_{j}^{\prime}\}\) randomly distributed between \(0\) and \(1\) when \(u=0\), which proves the robustness of the edge modes under symmetry-preserving perturbations. For nonzero \(u=2\), \(\langle X_{1}\rangle/\langle X_{2}\rangle\) changes with \(t\) as the edge modes couple to the bulk due to the perturbations. _Purity as the detection of long-range correlation in Liouville-Fock space._ For the initial state \(|\rho(0)\rangle\rangle\) shown in Eq.(7), since \(i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle=|\rho^{\prime}\rangle\rangle\otimes(a|1\rangle\rangle-b|0\rangle\rangle)\), the correlation defined by \(\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle\) can be written as \[\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle=\frac{|a|^{2}-|b|^{2}}{|a|^{2}+|b|^{2}}\langle\langle\rho|\rho\rangle\rangle\propto\text{Tr}\big{(}\rho^{2}\big{)}, \tag{12}\] where \(\text{Tr}\big{(}\rho^{2}\big{)}=\langle\langle\rho|\rho\rangle\rangle\) is the purity of the state \(\rho\). After inserting the completeness relation in Liouville-Fock space, we have \[\langle\langle\rho|\rho\rangle\rangle=\sum_{\mu}\frac{\langle\langle\rho|O_{\mu}\rangle\rangle\langle\langle O_{\mu}|\rho\rangle\rangle}{2^{N}}=\sum_{\mu}\frac{|\langle O_{\mu}\rangle|^{2}}{2^{N}} \tag{13}\] with \(O_{\mu}\) the usual \(N\)-spin Pauli operators (see Appendix E for details). Hence, the correlation \(\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle\) can be expressed as a quadratic form of observables defined by \(O_{\mu}\). For dissipative systems, the dynamics in the long-time limit is mainly determined by the eigenvectors in the expansion \(|\rho(t)\rangle\rangle=\sum_{j}e^{-i\lambda_{j}t}|\rho_{j}^{\prime}\rangle\rangle\otimes(a|1\rangle\rangle+b|0\rangle\rangle)\) with minimal \(|\text{Im}(\lambda_{j})|\), as \(\text{Im}(\lambda_{j})\leq 0\). This indicates that the summation in Eq.(13) can be well approximated by choosing a subset \(\{O_{m}(m=1,\cdots,M)\}\) with many fewer observables (\(M\ll N\)), which can then simplify detection in experiment. In Fig. 4, we plot the evolution of \(\langle\langle\rho|\rho\rangle\rangle\) for the initial product state \(\rho_{0}=[I+0.3\mathcal{M}(\sigma_{1}^{z}+\sigma_{1}^{y}\sigma_{2}^{x}+\sigma_{1}^{y}\sigma_{3}^{x}+\sigma_{2}^{z}\sigma_{2}^{x}\sigma_{3}^{x})](I+0.4\mathcal{M})/2^{N}\). 
This state has nonzero components in the subspaces defined by \(\{p_{j}=1(\forall j)\}\) and \(\{p_{j\neq 1}=1(\forall j),p_{1}=-1\}\). The Lindblad spectra \(\{\lambda_{i}\}\) show exceptional points as we increase the dissipation rate \(\gamma\), as shown in Figs. 4(b) and 4(c). In addition, the system supports numerous quasi-stable states with \(\mathrm{Im}(\lambda)\to 0^{-}\) as \(\gamma\rightarrow\infty\). The correlation \(\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle\) can then be obtained as \[\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle\sim\Big{(}1+\langle\sigma_{1}^{y}\sigma_{2}^{x}\rangle^{2}+\langle\sigma_{1}^{z}\rangle^{2}\Big{)}. \tag{14}\] Hence the long-time evolution of the purity for the given state can be obtained by detecting only the short-range correlations \(\langle\sigma_{1}^{y}\sigma_{2}^{x}\rangle\) and \(\langle\sigma_{1}^{z}\rangle\). For a larger decay rate \(\gamma\), the approximation fits the exact result well, as shown by the dashed line in Fig. 4(a). Figure 4: (a) Evolution of the purity \(\text{Tr}\big{(}\rho^{2}\big{)}\) for an initial bulk-edge product state. The dotted line is the approximated purity obtained by Eq.(14), which is compared with the precise values obtained by solving the Lindblad equation directly (solid line). The two results match well for larger \(\gamma t\). (b) and (c) are the real and imaginary parts of the Liouville spectrum for \((N=6,J=2)\). The circles are the eigenvalues within the subspace defined by \(\{p_{j\neq 2}=1(\forall j),p_{2}=-1\}\). _Discussion and conclusion._ To summarize, by analyzing an exactly solvable model of an open system described by Lindblad master equations, we find topologically protected Liouville-Majorana modes hidden in the Liouvillian. We proved that generally, the mode corresponds to mixed states of the system, which is different from the case in a Hermitian system, where it can be described in terms of pure states. The mode is robust and stable in the whole dynamic process, which is also different from the stationary state of the Liouville equation. The work opens up research on nontrivial topological states defined in the extended Liouville-Fock space and extends the exploration of topological physics for mixed states in general dissipative systems. We thank Prof. X.-W. Luo for helpful discussions. This work was funded by the National Natural Science Foundation of China (Grants No. 11974334 and No. 11774332), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301200). XFZ also acknowledges support from the CAS Project for Young Scientists in Basic Research (Grant No. YSBR-049). In this Appendix, we present the explicit derivation of the effective Liouvillian in the extended Liouville-Fock space using the third quantization formalism. Furthermore, the explicit construction of the bulk-edge product vectors in Liouville-Fock space is provided, and the relevant forms in the original spin basis are discussed. Finally, the construction methods and concrete forms of the probe operators are discussed in detail. ## Appendix A Liouville-Fock space and Non-Hermitian effective Liouvillian based on Prosen's third quantization For an open system under the Markov approximation, the dynamics of its density matrix \(\rho\) is governed by the following Lindblad master equation \[i\dot{\rho}=\hat{\phi}_{L}[\rho]=[H,\rho]+i\sum_{j}(L_{j}\rho L_{j}^{\dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}) \tag{15}\] which describes the non-unitary time evolution of the system subject to the external environment. 
Here the first term represents the unitary dynamics where \(H\) is the Hamiltonian of the system. \(L_{j}\) is the corresponding Lindblad operator describing the \(j\)-th dissipation channel with the decay rate \(\gamma_{j}\). By regarding \(\rho\) as a vector \(|\rho\rangle\rangle\) and due to the linearity of the system, we can rewrite the equation as \[i|\dot{\rho}\rangle\rangle=\mathcal{L}|\rho\rangle\rangle, \tag{16}\] which takes similar form as the usual Schrondinger equation with the effective non-Hermitian Liouvillian \(\mathcal{L}\). Generally speaking, the explicit form of \(\mathcal{L}\) depends on how we vectorize the matrix \(\rho\). Specifically, for quadratic spin/fermi system, the vectorization process can be easily discussed based on Majorana representation. Especially, Prosen has introduced the third quantization formalism in [40; 41; 42], which allow us to solve this dissipated system in an elegant and systematic manner. For \(N\)-spin/fermi system in 1D, the corresponding density matrix can be written using Majorana operators as \[\rho=\frac{1}{2^{2N}}\sum c_{a_{1},a_{2},a_{3}...a_{2N}}w_{1}^{a_{1}}w_{2}^{a _{2}}w_{3}^{a_{3}}...w_{2N}^{a_{2N}} \tag{17}\] where \(w_{j}\) are Majorana operators satisfying the anti-commutation relation \(\{w_{j},w_{k}\}=2\delta_{jk}\),\(a_{j}=\{0,1\}\) represents the excitation number of the \(w_{j}\), and \(c_{a_{1},a_{2},a_{3}...a_{2N}}\) are the real coefficients. For spin-\(1/2\) system discussed in the main text, this is always possible due to the Jordan-Wigner transformation \[\sigma_{j}^{x} =\prod_{k<j}(-iw_{2k-1}w_{2k})w_{2j-1}, \tag{18}\] \[\sigma_{j}^{y} =\prod_{k<j}(-iw_{2k-1}w_{2k})w_{2j}. \tag{19}\] For later convenience, we define \[w^{\{a\}}:=w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}, \tag{10}\] and \(n_{a}:=\sum_{j}a_{j}\) represents the number of Majorana operators in the basis vector \(|w^{\{a\}}\rangle\rangle\). The vectorization of the master equation can be implemented by associating a Hilbert space \(\mathcal{H}_{\mathcal{L}}\), namely Liouville-Fock space, with the basis defined as \[|w^{\{a\}}\rangle\rangle:=|w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\rangle\rangle. \tag{11}\] to the \(4^{n}\)-dimensional space of operators \(w^{\{a\}}\). In the meantime, since the Hamiltonian and relevant Lindbladians can also be written as the combinations of Majorana operators \(w^{\{a\}}\), the Liouville superoperator \(\mathcal{L}\) can then be expressed as an operator in this newly-defined Liouville-Fock space. Specifically, for each Majorana operator \(w_{k}\) shown in \(H\) or \(L_{i}\) acting on the basis \(|w^{\{a\}}\rangle\rangle\), we can introduce the fermions operators \(c_{j}\) and \(c_{j}^{\dagger}\) as \[c_{j}^{\dagger}|w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\rangle\rangle= \delta_{0,a_{j}}|w_{j}w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\rangle, \tag{12}\] \[c_{j}|w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\rangle\rangle=\delta_{1,a_ {j}}|w_{j}w_{1}^{a_{1}}w_{2}^{a_{2}}...w_{2N}^{a_{2N}}\rangle, \tag{13}\] with the standard canonical anticommutation relations \[\{c_{j},c_{k}\}=0,\{c_{j},c_{k}^{\dagger}\}=\delta_{jk},\{c_{j}^{\dagger},c_{ k}^{\dagger}\}=0. \tag{14}\] For \(N\)-site spin/fermi system, since the dimension of the Fock space \(\mathcal{H}_{\mathcal{L}}\) is \(4^{N}\), we have \(2N\) fermion operators \(c_{j}\) with \(j=(1,2,\cdots,2N)\). 
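As a small numerical illustration of the operators just defined (a sketch, with \(2N=4\) Majorana labels chosen only for concreteness), one can represent the basis \(|w^{\{a\}}\rangle\rangle\) by occupation tuples and build matrices for \(c_{j}\): left multiplication by \(w_{j}\) produces the Jordan-Wigner sign \((-1)^{a_{1}+\cdots+a_{j-1}}\) when the monomial is brought back to normal order, and the canonical anticommutation relations can then be checked directly.

```python
import numpy as np
from itertools import product

def fock_ops(n_modes):
    """Matrices of c_j acting on the Majorana-monomial basis |w^{a}>>, a = (a_1, ..., a_n)."""
    basis = list(product([0, 1], repeat=n_modes))
    index = {a: k for k, a in enumerate(basis)}
    dim = len(basis)
    cs = []
    for j in range(n_modes):
        c = np.zeros((dim, dim))
        for a in basis:
            if a[j] == 1:                        # c_j removes w_j when it is present
                sign = (-1) ** sum(a[:j])        # from anticommuting w_j to its slot
                b = a[:j] + (0,) + a[j + 1:]
                c[index[b], index[a]] = sign
        cs.append(c)
    return cs

cs = fock_ops(4)                                 # 2N = 4 Majorana labels (N = 2 sites)
cds = [c.T for c in cs]                          # matrices are real, so dagger = transpose
eye = np.eye(cs[0].shape[0])
ok = all(np.allclose(cs[i] @ cds[j] + cds[j] @ cs[i], (i == j) * eye)
         for i in range(4) for j in range(4))
ok = ok and all(np.allclose(cs[i] @ cs[j] + cs[j] @ cs[i], 0)
                for i in range(4) for j in range(4))
print("canonical anticommutation relations satisfied:", ok)
```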
The master equation of the system can be written using majorana operators as \[i|\dot{\rho}\rangle\rangle=-i\sum_{j=1}^{N-1}J_{j}(w_{2j}w_{2j+1}\rho-\rho w_{ 2j}w_{2j+1})+i\sum_{j=1}^{N}\gamma_{j}(w_{2j-1}w_{2j}\rho w_{2j}w_{2j-1}-\rho). \tag{15}\] Based on the above discussions, one can verify that after mapping into the Liouville-Fock space, operators acting on \(\rho\) can be re-expressed using fermionic operators as \[\omega_{j}\rho \Longrightarrow(c_{j}+c_{j}^{\dagger})|\rho\rangle\rangle, \tag{16}\] \[\rho\omega_{i}\omega_{j} \Longrightarrow(c_{j}-c_{j}^{\dagger})(c_{j}-c_{j}^{\dagger})| \rho\rangle. \tag{17}\] Using these substitutions, we can immediately obtain the relevant Liouvillian \(\mathcal{L}\) which reads \[\mathcal{L}=-2i\sum_{j=1}^{N-1}J_{j}(c_{2j}^{\dagger}c_{2j+1}+c_{2j}c_{2j+1}^{ \dagger})-i\sum_{j=1}^{N}\gamma_{j}+i\sum_{j=1}^{N}\gamma_{j}\big{(}2n_{2j-1} -1\big{)}(2n_{2j}-1), \tag{18}\] where \(n_{j}=c_{j}^{\dagger}c_{j}\) is the number operator on site \(j\). Since \(\mathcal{L}\) commutes with all \(P_{j}=(2n_{2j}-1)(2n_{2j+1}-1)\) for \(j=(1,2,\cdots,N-1)\), and \(P_{j}^{2}=I\), the right eigenvectors of \(\mathcal{L}\) can be chosen as the common eigenvectors of all \(P_{j}\), where the corresponding eigenvalues \(p_{j}\) can only be \(+1\) or \(-1\). The whole Liouville-Fock space can then be divided into different subspaces labeled by the list \(\{p\}=\{p_{1},p_{2},\cdots,p_{N-1}\}\) with \((N-1)\)-entries. To obtain the effective interactions of \(\mathcal{L}\), we introduce another set of Jordan-Wigner transformations (JW-I) in Liouville-Fock space as \(c_{2i-1}^{\dagger}=\frac{1}{2}\prod_{j=1}^{2i-2}Z_{j}(X_{2i-1}-iY_{2i-1})\) and \(c_{2i}^{\dagger}=\frac{1}{2}\prod_{j=1}^{2i-1}Z_{j}(Y_{2i}-iX_{2i})\), and map the system into an effective spin model defined as \[\mathcal{L}=\sum_{j}^{N-1}J(P_{j}-1)Y_{2j}Y_{2j+1}-i\gamma\sum_{i}^{N}(Z_{2j-1 }Z_{2j}+1). \tag{19}\] Here \(\{X_{k},Y_{k},Z_{k}\}\) are the local Pauli matrices defined in Liouville-Fock space at site \(k\), and we have set the homogeneous decay rates as \(\gamma_{j}=\gamma\). Therefore, within each subblock denoted by \(\{p\}\), \(\mathcal{L}\) takes the form of a non-Hermitian spin mode with site-dependent couplings \(J(p_{j}-1)\) and dissipation rate \(i\gamma\). To illustrate the hidden topological features of the system, we employ the Jordan-Wigner transformation (JW-II) again and define the local Liouville-Majorana operators as \(\kappa_{2i-1}=-\prod_{j=1}^{i-1}X_{j}Z_{i}\) and \(\kappa_{2i}=\prod_{j=1}^{i-1}X_{j}Y_{i}\), and finally we arrive at \[\mathcal{L}=\sum_{j}^{N-1}iJ(P_{j}-1)\kappa_{4j-1}\kappa_{4j+2}+i\gamma\sum_{j }^{N}(i\kappa_{4j-2}\kappa_{4j-1}-1) \tag{20}\] with \(P_{j}=i\kappa_{4j}\kappa_{4j+1}\). Therefore, for given \(\{p\}\), \(\mathcal{L}\) reduces to an effective non-Hermitian Kitaev chain with site-dependent couplings. We stress that although both \(\omega_{j}\) and \(\kappa_{j}\) are Majorana operators (MOs), they are defined in different spaces. Specifically, \(\omega_{j}\) is the MO defined in the original Hilbert space, and \(\kappa_{j}\) is another type of MO defined in the extended Liouville-Fock space (denoted by \(\mathcal{H}_{\mathcal{L}}\) in the paper). For \(N\)-site chain, we have \(2N\)\(\omega\)-type MOs, but \(4N\)\(\kappa\)-type Liouville-MOs. So generally speaking, one \(\omega\)-type MO maps to two \(\kappa\)-type MOs. In this sense, we claim that a Liouville-Majorana fermion can be viewed as a half-Majorana fermion in the original Hilbert space. 
In the spin basis defined in Eq.(1), the Liouville-Majorana edge modes discussed in the paper can only be described as mixed states, which is different from the case for the usual Majorana mode. ## Appendix B The spectra and dynamical features of Liouvillian \(\mathcal{L}\) To explore the dynamical properties of system, we consider the eigenmatrices and eigenvalues of the Liouville superoperator \(\hat{\phi}_{L}\) and its counterpart \(\mathcal{L}\) in the Liouville Fock space \(\mathcal{H}_{\mathcal{L}}\) \[\hat{\phi}_{L}[\rho_{m}]=\lambda_{m}\rho_{m}\rightarrow\mathcal{L}|\rho_{m} \rangle\rangle=\lambda_{m}|\rho_{m}\rangle\rangle. \tag{10}\] For a master equation in the Lindblad form, it has been shown that the spectrum \(\{\lambda_{m}\}\) satisfies the following properties which are useful for later discussions. First, since the imaginary part of \(\lambda_{m}\) is linked with the dissipation dynamics towards stationary states, we have \(\mathrm{Im}[\lambda_{m}]\leq 0\). The stationary state \(\rho_{ss}\) of the system corresponds to the eigenmatrix \(\rho_{0}\) with \(\lambda_{0}=0\). So we have \(\rho_{ss}=\rho_{0}/\mathrm{Tr}[\rho_{0}]\). Additionally, if stationary states are degenerate, the system can evolve towards different steady states depending on the initial conditions. Second, since \(\rho\) is Hermitian, and \(\hat{\phi}_{L}[\sigma^{\dagger}]=-(\hat{\phi}_{L}[\sigma])^{\dagger}\) for any matrix \(\sigma\), the eigenvalues must come in anti-complex conjugate pairs \(\{\lambda_{m},-\lambda_{m}^{*}\}\). Therefore, if \(\lambda_{m}\) is pure imaginary, the eigenmatrix \(\rho_{m}\) must be Hermitian and vice versa. Finally, if \(\mathrm{Im}[\lambda_{m}]\neq 0\), since the Liouvillian evolution is trace-preserving, the eigenmatrix evolves as \(e^{-i\lambda_{m}t}\rho_{m}\to 0\) when \(t\rightarrow\infty\). This leads to \(\mathrm{Tr}[\rho_{m}]=0\). Equipped with the eigensystem of the Lindblad equation, we can then discuss the dynamics of the system in a more convenient manner. Since any physical state of the system can always be decomposed as \[\rho=g_{0}\rho_{ss}+\sum_{m\neq 0}g_{m}\rho_{m}, \tag{11}\] the time-evolution of \(\rho(t)\) in Liouville-Fock space \(\mathcal{H}_{\mathcal{L}}\) can then be simplified as \[|\rho(t)\rangle\rangle=g_{0}|\rho_{ss}\rangle\rangle+\sum_{m\neq 0}g_{m}e^{-i \lambda_{m}t}|\rho_{m}(t)\rangle\rangle. \tag{12}\] We stress that the dynamical properties of a quantum system with the effective Liouvillian \(\mathcal{L}\) is very different from the usual non-Hermitian system solely driven by an effective non-Hermitian Hamiltonian \(H_{e}=H-i\gamma\sum_{m}L_{m}^{\dagger}L_{m}/2\). In the later case, the effect of quantum jump \(L_{m}\rho L_{m}^{\dagger}\) has been neglected. We also note that the non-Hermiticity of \(H_{e}\) can result in many novel effects. For instance, pseudo-Hermitian or PT-symmetric \(H_{e}\) has been widely discussed in the past decades, which gives rise to rich exotic phenomena in different subjects of physics. However, in many cases, this jump term \(L_{m}\rho L_{m}^{\dagger}\) cannot be dropped and can change the dynamical behavior of the system dramatically. ## Appendix C Bulk-edge product vectors in Liouville-Fock space In our system, the two edge Liouville-Majorana operators \(\kappa_{1}\) and \(\kappa_{4N}\) can be used to define the Dirac fermionic operator \(d_{e}=\frac{1}{2}(\kappa_{1}+i\kappa_{4N})\) and \(d_{e}^{\dagger}=\frac{1}{2}(\kappa_{1}-i\kappa_{4N})\). 
The corresponding number operator reads \(d_{e}^{\dagger}d_{e}\) and satisfies the following properties after acting on its local Fock basis \[d_{e}^{\dagger}d_{e}|1\rangle\rangle=|1\rangle\rangle,\hskip 28.452756ptd_{e}^{ \dagger}d_{e}|0\rangle\rangle=0. \tag{13}\] Since \(d_{e}\) and \(d_{e}^{\dagger}\) commute with the Liouvillian \(\mathcal{L}\), \(|0\rangle\rangle\) and \(|1\rangle\rangle\) correspond to the two local dark states of the system, and defined as the basis of the local Fock space denoted by \(\mathcal{H}_{e}\). For the remaining Liouville-Majorana operators \(\kappa_{j}\) with \(2\leq j\leq(4N-1)\), they can be combined similarly to define \(2N-1\) Dirac fermionic operators with the corresponding Fock space denoted by \(\mathcal{H}_{\mathcal{L}}^{\prime}\). Therefore, the whole Liouville-Fock space \(\mathcal{H}_{\mathcal{L}}\) can then be expressed as the tensor product of \(\mathcal{H}^{\prime}_{\mathcal{L}}\) and \(\mathcal{H}_{e}\). Using these notations, we can then rewrite the state vector \(|\rho\rangle\rangle\) in Liouville-Fock space as \[|\rho\rangle\rangle=|\psi_{1}\rangle\rangle|1\rangle\rangle+|\psi_{0}\rangle \rangle|0\rangle\rangle. \tag{100}\] For operators acting on \(|\rho\rangle\rangle\) in Liouville-Fock space, they can be mapped to the corresponding linear operations in the original Hilbert space defined by the spin basis. For later convenience, we list the explicit correspondence as follows \[d^{\dagger}_{e}d_{e}|\rho\rangle\rangle \rightarrow\frac{1}{2}(\rho+\mathcal{M}\sigma^{x}_{1}\sigma^{x}_{N }\rho\sigma^{x}_{1}\sigma^{x}_{N}), \tag{101}\] \[d_{e}|\rho\rangle\rangle \rightarrow\frac{1}{2}\mathcal{M}\sigma^{x}_{1}(\mathcal{M} \sigma^{x}_{1}\sigma^{x}_{N}\rho\sigma^{x}_{N}\sigma^{x}_{1}+\rho)\mathcal{M} \sigma^{x}_{1},\] (102) \[d^{\dagger}_{e}|\rho\rangle\rangle \rightarrow-\frac{1}{2}\mathcal{M}\sigma^{x}_{1}(\mathcal{M} \sigma^{x}_{1}\sigma^{x}_{N}\rho\sigma^{x}_{N}\sigma^{x}_{1}-\rho)\mathcal{M }\sigma^{x}_{1}. \tag{103}\] where \(\mathcal{M}=(-1)^{N}\prod_{j=1}^{N}\sigma^{z}_{j}\). One can check that if \(|\rho\rangle\rangle=|\rho_{1}\rangle\rangle=|\psi_{1}\rangle\rangle|1\rangle\rangle\), then we have \[d^{\dagger}_{e}d_{e}|\rho_{1}\rangle\rangle=|\rho_{1}\rangle\rangle\to \mathcal{M}\sigma^{x}_{1}\sigma^{x}_{N}\rho_{1}\sigma^{x}_{1}\sigma^{x}_{N}= \rho_{1}. \tag{104}\] Similarly, if \(|\rho\rangle\rangle=|\rho_{0}\rangle\rangle=|\psi_{0}\rangle\rangle|0\rangle\rangle\), we have \[d^{\dagger}_{e}d_{e}|\rho_{0}\rangle\rangle=|\rho_{0}\rangle\rangle\to \mathcal{M}\sigma^{x}_{1}\sigma^{x}_{N}\rho_{0}\sigma^{x}_{1}\sigma^{x}_{N}=- \rho_{0}. \tag{105}\] This also indicates that if \(\rho_{i}\) is hermitian, then we must have \([\rho_{i},\prod_{j}^{N}\sigma^{z}_{j}]=0\). To obtain the explicit form in Liouville-Fock space for a given density matrix, we consider the following \(N\)-body Pauli operator in the original Hilbert space \(\hat{O}=\sigma^{\mu_{1}}_{1}\otimes\sigma^{\mu_{2}}_{2}\otimes...\otimes \sigma^{\mu_{N}}_{N}\) with \(\mu_{i}\in\{0,x,y,z\}\) and \(\sigma^{0}=I\) the usual identity matrix. The relevant state vector in Liouville-Fock space reads \[|\hat{O}\rangle\rangle=|\hat{O}_{1}\rangle\rangle|1\rangle\rangle+|\hat{O}_{0 }\rangle\rangle|0\rangle\rangle. \tag{106}\] In order to show that \(|\hat{O}\rangle\rangle\) can be written as a product state in Liouville-Fock space, we define the following two projectors \[P_{+}=\frac{1}{2}(I+\mathcal{M}),\ P_{-}=\frac{1}{2}(I-\mathcal{M})\] with \(P_{\pm}^{2}=I\). 
Since \(\hat{O}\) either commutes (\(\delta_{o}=+1\)) or anti-commutes (\(\delta_{o}=-1\)) with \(\prod_{i}^{N}\sigma^{z}_{i}\sigma^{x}_{1}\sigma^{x}_{N}\), i.e., \[\hat{O}\prod_{i=1}^{N}\sigma^{z}_{i}\sigma^{x}_{1}\sigma^{x}_{N}=\delta_{o}\prod_{i=1}^{N}\sigma^{z}_{i}\sigma^{x}_{1}\sigma^{x}_{N}\hat{O}, \tag{107}\] we have \[\mathcal{M}\sigma^{x}_{1}\sigma^{x}_{N}\hat{O}P_{\pm}\sigma^{x}_{1}\sigma^{x}_{N}=\delta_{o}\hat{O}P_{\pm}\mathcal{M}=\delta_{o}\hat{O}P_{\pm}. \tag{108}\] Using Eqs. (104) and (105), we conclude that \(|\hat{O}P_{+}\rangle\rangle\) and \(|\hat{O}P_{-}\rangle\rangle\) can be written as \[|\hat{O}P_{\pm}\rangle\rangle=|\hat{O}_{\pm}\rangle\rangle|\frac{1\pm\delta_{o}}{2}\rangle\rangle, \tag{109}\] where \(|\hat{O}_{\pm}\rangle\rangle\) represent the corresponding state vectors in \(\mathcal{H}^{\prime}_{\mathcal{L}}\), whose explicit forms are irrelevant to the later discussion. Therefore, the vectors related to \(\hat{O}\) and \(\hat{O}\prod_{i}^{N}\sigma^{z}_{i}\) then read \[|\hat{O}\rangle\rangle =|\hat{O}_{+}\rangle\rangle|\frac{1+\delta_{o}}{2}\rangle\rangle+|\hat{O}_{-}\rangle\rangle|\frac{1-\delta_{o}}{2}\rangle\rangle, \tag{110}\] \[|\hat{O}\mathcal{M}\rangle\rangle =|\hat{O}_{+}\rangle\rangle|\frac{1+\delta_{o}}{2}\rangle\rangle-|\hat{O}_{-}\rangle\rangle|\frac{1-\delta_{o}}{2}\rangle\rangle. \tag{111}\] In order to show that both \(|\hat{O}\rangle\rangle\) and \(|\hat{O}\mathcal{M}\rangle\rangle\) can be written as product vectors in Liouville-Fock space, we need to show that \(|\hat{O}_{+}\rangle\rangle\propto|\hat{O}_{-}\rangle\rangle\). This can be achieved by noticing that \[d_{e}|\hat{O}P_{+}\rangle\rangle=\frac{1+\delta_{o}}{2}|\hat{O}_{+}\rangle\rangle|0\rangle\rangle, \tag{112}\] which is non-zero only when \(\delta_{o}=+1\). The corresponding matrix form in the original Hilbert space reads \[-\frac{1}{2}\mathcal{M}\sigma_{1}^{x}(\mathcal{M}\sigma_{1}^{x} \sigma_{N}^{x}\hat{O}P_{+}\sigma_{N}^{x}\sigma_{1}^{x}+\hat{O}P_{+})\sigma_{1}^ {x}\mathcal{M}=-\frac{1+\delta_{o}}{2}\sigma_{N}^{x}\hat{O}P_{+}\sigma_{N}^{x} =-\frac{1+\delta_{o}}{2}\sigma_{N}^{x}\hat{O}\sigma_{N}^{x}P_{-}. \tag{100}\] By setting \(\delta_{o}=+1\) and noticing that \(\sigma_{N}^{x}\hat{O}=\gamma_{o}\hat{O}\sigma_{N}^{x}\) with \(\gamma_{o}=\pm 1\), we have \[|\hat{O}_{+}\rangle\rangle|0\rangle\rangle=-\gamma_{o}|\hat{O}_{-}\rangle \rangle|0\rangle\rangle, \tag{101}\] which leads to \(|\hat{O}_{+}\rangle\rangle=-\delta_{o}\gamma_{o}|\hat{O}_{-}\rangle\rangle\). We note that a similar result can also be obtained if we consider \[d_{e}^{\dagger}|\hat{O}P_{+}\rangle\rangle=\frac{1-\delta_{o}}{2}| \hat{O}_{+}\rangle\rangle|1\rangle\rangle, \tag{102}\] for \(\delta_{o}=-1\). The corresponding matrix form reads \[-\frac{1}{2}\mathcal{M}\sigma_{1}^{x}(-\mathcal{M}\sigma_{1}^{x }\sigma_{N}^{x}(\hat{O}+\hat{O}\mathcal{M})\sigma_{N}^{x}\sigma_{1}^{x}+\hat{O }+\hat{O}\mathcal{M})\sigma_{1}^{x}\mathcal{M}=-\frac{\delta_{o}-1}{2}\sigma _{N}^{x}\hat{O}P_{+}\sigma_{N}^{x}=-\frac{\delta_{o}-1}{2}\gamma_{o}\hat{O}P_ {-}. \tag{103}\] After mapping back to the Liouville-Fock space, we again obtain \(|\hat{O}_{+}\rangle\rangle=-\delta_{o}\gamma_{o}|\hat{O}_{-}\rangle\rangle\). 
Summing up all the above discussions, we conclude that both \(|\hat{O}\rangle\rangle\) and \(|\hat{O}\mathcal{M}\rangle\rangle\) are product vectors and read \[|\hat{O}\rangle\rangle =|\hat{O}_{+}\rangle\rangle\bigg{[}|\frac{1+\delta_{o}}{2}\rangle \rangle-\delta_{o}\gamma_{o}|\frac{1-\delta_{o}}{2}\rangle\rangle\bigg{]}, \tag{104}\] \[|\hat{O}\mathcal{M}\rangle\rangle =|\hat{O}_{+}\rangle\rangle\bigg{[}|\frac{1+\delta_{o}}{2}\rangle \rangle+\delta_{o}\gamma_{o}|\frac{1-\delta_{o}}{2}\rangle\rangle\bigg{]}. \tag{105}\] where the other relevant coefficients are defined as follows \[\sigma_{N}^{x}\hat{O} =\gamma_{o}\hat{O}\sigma_{N}^{x}, \tag{106}\] \[\hat{O}\prod_{i}^{N}\sigma_{i}^{z}\sigma_{1}^{x}\sigma_{N}^{x} =\delta_{o}\prod_{i}^{N}\sigma_{i}^{z}\sigma_{1}^{x}\sigma_{N}^{x} \hat{O}. \tag{107}\] The above derivation also indicates that the \(4^{N}\) operators \(\hat{O}_{\mu}\) can be divided into \(2^{2N-1}\) different pairs, up to constant phase factors, as \((\hat{O}_{\mu},\hat{O}_{\mu}\mathcal{M})\). For any two different pairs \((\hat{O}_{1},\hat{O}_{1}\mathcal{M})\) and \((\hat{O}_{2},\hat{O}_{2}\mathcal{M})\), since \[\text{tr}(\hat{O}_{i}\hat{O}_{j}\mathcal{M})=0,\ \ \text{tr}(\mathcal{M}\hat{O}_{i} \hat{O}_{j}\mathcal{M})=2^{N}\delta_{ij}, \tag{108}\] we have \[\langle\langle\hat{O}_{i}|\hat{O}_{j}\rangle\rangle=2^{N}\delta_{ ij},\ \ \langle\langle\hat{O}_{i,+}|\hat{O}_{j,+}\rangle\rangle=2^{N-1}\delta_{ij}. \tag{109}\] Given the state vector in Liouville-Fock space shown in Eq.(102), we can also easily obtain the matrix form in the usual Hilbert space using the following maps \[\delta_{o}=+1:\left\{\begin{array}{l}|\hat{O}_{+}\rangle\rangle|1 \rangle\rangle\rightarrow\hat{O}P_{+},\\ |\hat{O}_{+}\rangle\rangle|0\rangle\rangle\rightarrow-\gamma_{o}|\hat{O}_{-}\rangle \rangle|0\rangle\rangle=-\gamma_{o}\hat{O}P_{-},\end{array}\right. \tag{110}\] \[\delta_{o}=-1:\left\{\begin{array}{l}|\hat{O}_{+}\rangle\rangle|0 \rangle\rangle\rightarrow\hat{O}P_{+},\\ |\hat{O}_{+}\rangle\rangle|1\rangle\rangle\rightarrow\gamma_{o}|\hat{O}_{-} \rangle\rangle|1\rangle\rangle=\gamma_{o}\hat{O}P_{-}.\end{array}\right. \tag{111}\] ## Appendix D Bulk-edge product states in Liouville-Fock space and the corresponding density operators in the original Hilbert space For the system considered in the main text, the general form of the stationary states \(\rho_{s}\) can be written as the combination of \((\hat{O},\hat{O}\mathcal{M})\) with \(\hat{O}=I\) and \(\delta_{o}=\gamma_{o}=1\). This means \[\rho_{s}=\frac{1}{2^{N}}(I+\zeta\mathcal{M})=\frac{1}{2^{N}}\bigg{[}(1+\zeta) IP_{+}+(1-\zeta)IP_{-}\bigg{]}, \tag{112}\] where \(\zeta\) is real and satisfies \(|\zeta|\leq 1\) to ensure the positivity of \(\rho_{s}\). The corresponding vector in Liouville-Fock space reads \[|\rho_{s}\rangle\rangle=\frac{1}{2^{N}}|I_{+}\rangle\rangle\bigg{[}(1+\zeta)|1 \rangle\rangle-(1-\zeta)|0\rangle\rangle\bigg{]}. \tag{101}\] For a given initial state vector \(|\rho(t=0)\rangle\rangle\) in Liouville-Fock space, if \(|\rho(t=0)\rangle\rangle=|\rho^{\prime}\rangle\rangle\otimes(a|1\rangle\rangle+b|0 \rangle\rangle)\) is product, then the state vector \(|\rho(t)\rangle\rangle\) remains unentangled in Liouville-Fock space under time evolution. 
Since the system tends to its stationary state defined by Eq.(101), we conclude that the product state can always be rewritten as \[|\rho\rangle\rangle=|\rho_{+}\rangle\rangle\otimes[(1+\zeta)|1\rangle\rangle-(1- \zeta)|0\rangle\rangle]/2^{N}, \tag{102}\] where \(|\rho_{+}\rangle\rangle\) can be written as \[|\rho_{+}\rangle\rangle=|I_{+}\rangle\rangle+\sum\chi_{m}|\hat{O}_{m,+}\rangle\rangle, \tag{103}\] where both the coefficients \(\chi_{m}\) and operators \(\hat{O}_{m}\) should be carefully chosen so that corresponding \(\rho\) in the original Hilbert space represents a valid density matrix of the system. We note that the operators \(\hat{O}\) can be classified into different groups according to the corresponding factors \((\delta_{o},\gamma_{o})\) defined in Eq.(100) and (101). Therefore, due to the two-valued properties of \(\delta_{o}\) and \(\gamma_{o}\), all the operators \(\hat{O}\) can be divided into four categories \((\hat{A},\hat{B},\hat{C},\hat{D})\) and are listed as follows: 1. \(\hat{A}:(\delta_{o},\gamma_{o})=(1,1)\) \[|\hat{A}+\zeta\hat{A}\mathcal{M}\rangle\rangle=|\hat{A}(1+\zeta\mathcal{M}) \rangle\rangle=|\hat{A}_{+}\rangle\rangle[(1+\zeta)|1\rangle\rangle-(1-\zeta) |0\rangle\rangle];\] (104) 2. \(\hat{B}:(\delta_{o},\gamma_{o})=(-1,1)\) \[|-\zeta\hat{B}+\hat{B}\mathcal{M}\rangle\rangle=|\hat{B}\mathcal{M}(1-\zeta \mathcal{M})\rangle=|\hat{B}_{+}\rangle\rangle[-(1+\zeta)|1\rangle\rangle+(1- \zeta)|0\rangle\rangle];\] (105) 3. \(\hat{C}:(\delta_{o},\gamma_{o})=(1,-1)\) \[|\zeta\hat{C}+\hat{C}\mathcal{M}\rangle\rangle=|\hat{C}\mathcal{M}(1+\zeta \mathcal{M})\rangle=|\hat{C}_{+}\rangle\rangle[(1+\zeta)|1\rangle\rangle-(1- \zeta)|0\rangle\rangle];\] (106) 4. \(\hat{D}:(\delta_{o},\gamma_{o})=(-1,-1)\) \[|\hat{D}-\zeta\hat{D}\mathcal{M}\rangle\rangle=|\hat{D}(1-\zeta\mathcal{M}) \rangle\rangle\rangle=|\hat{D}_{+}\rangle\rangle[(1-\zeta)|0\rangle\rangle-(1+ \zeta)|1\rangle\rangle].\] (107) We also note that to ensure the Hermiticity of \(\rho\), these operators \(\{\hat{A},\hat{B},\hat{C},\hat{D}\}\) also should be chosen to commute with \(\mathcal{M}\). Therefore, the most general form of \(|\rho_{+}\rangle\rangle\) reads \[|\rho_{+}\rangle\rangle=|I_{+}\rangle\rangle+\sum_{i}a_{i}|\hat{A}_{i,+} \rangle\rangle+\sum_{j}b_{j}|\hat{B}_{j,+}\rangle\rangle+\sum_{k}c_{k}|\hat{C}_{ k,+}\rangle\rangle+\sum_{l}d_{l}|\hat{D}_{l,+}\rangle\rangle, \tag{108}\] where all the coefficients \(a_{i}\), \(b_{j}\), \(c_{k}\), and \(d_{l}\) are real. The corresponding density matrix can be obtained accordingly and reads \[\rho=\frac{1}{2^{N}}\bigg{[}(I+\sum_{i}a_{i}\hat{A}+\sum_{k}c_{k}\hat{C_{k}} \mathcal{M})(I+\zeta\mathcal{M})-(\sum_{j}b_{j}\hat{B}_{j}\mathcal{M}+\sum_{l }d_{l}\hat{D}_{l})(I-\zeta\mathcal{M})\bigg{]}, \tag{109}\] where both the coefficients \((a_{i},b_{j},c_{k},d_{l})\) and the operators \((\hat{A}_{i},\hat{B}_{j},\hat{C}_{k},\hat{D}_{l})\) are carefully chosen so that \(\rho\) is positive definite. In the special case with \(b_{j}=d_{l}=0\) for all \(j\) and \(l\), the positivity of \(\rho\) is reduced to find \((a_{i},c_{k})\) and \((\hat{A}_{i},\hat{C}_{k})\) such that \((I+\sum_{i}a_{i}\hat{A}_{i}+\sum_{k}c_{k}\hat{C}_{k}\mathcal{M})\) is positive defined, as shown in the main text. For general case, to ensure the positivity of \(\rho\), a sufficient condition can be chosen such that both \((I+\sum_{i}a_{i}\hat{A}_{i}+\sum_{k}c_{k}\hat{C}_{k}\mathcal{M})\) and \(-(\sum_{j}b_{j}\hat{B}_{j}\mathcal{M}+\sum_{l}d_{l}\hat{D}_{l})\) are positive operators. 
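The classification above is easy to automate for small chains. The following helper (not part of the original derivation) represents an \(N\)-site Pauli string as a dense matrix, determines the signs \(\delta_{o}\) and \(\gamma_{o}\) by checking commutation with \(\prod_{i}\sigma_{i}^{z}\sigma_{1}^{x}\sigma_{N}^{x}\) and with \(\sigma_{N}^{x}\), and returns the corresponding category \(\hat{A}\), \(\hat{B}\), \(\hat{C}\), or \(\hat{D}\); the value \(N=4\) and the example strings are arbitrary.

```python
import numpy as np
from functools import reduce

# Classify N-site Pauli strings by the signs (delta_o, gamma_o) defined above.
# Dense matrices are used, so keep N small; the sample operators are arbitrary.
pauli = {"I": np.eye(2), "x": np.array([[0, 1], [1, 0]]),
         "y": np.array([[0, -1j], [1j, 0]]), "z": np.diag([1.0, -1.0])}

def string(spec, N):
    """Pauli string from {site (1-based): 'x'/'y'/'z'}; other sites get the identity."""
    return reduce(np.kron, [pauli[spec.get(i, "I")] for i in range(1, N + 1)])

def sign(A, B):
    """+1 if A and B commute, -1 if they anticommute (Pauli strings do one or the other)."""
    return 1 if np.allclose(A @ B, B @ A) else -1

N = 4
prod_z = reduce(np.kron, [pauli["z"]] * N)          # prod_i sigma^z_i
S = prod_z @ string({1: "x", N: "x"}, N)            # prod_i sigma^z_i sigma^x_1 sigma^x_N
XN = string({N: "x"}, N)
category = {(1, 1): "A", (-1, 1): "B", (1, -1): "C", (-1, -1): "D"}

for spec in ({2: "z"}, {1: "y", 2: "x"}, {N: "y"}, {N: "z"}):
    O = string(spec, N)
    d, g = sign(O, S), sign(O, XN)
    print(spec, "-> (delta_o, gamma_o) =", (d, g), "category", category[(d, g)])
```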
We also note that any Hermitian observable operator \(\hat{X}\) which maps to a product vector in Liouville-Fock space can also be constructed following the above discussions. For instance, all operators defined in Eq.(109) are product in Liouville-Fock space. If we choose the two operators \(\hat{X}_{1}\) and \(\hat{X}_{2}\) as \((\hat{X}_{1},\hat{X}_{2})=(\hat{O},\hat{O}\mathcal{M})\), then the ratio \(\langle\hat{X}_{1}\rangle/\langle\hat{X}_{2}\rangle\) can be simplified as \[\frac{\langle X_{1}\rangle}{\langle X_{2}\rangle}=\frac{\langle \langle X_{1}|\rho\rangle\rangle}{\langle\langle X_{2}|\rho\rangle\rangle}= \frac{\delta_{0}+\gamma_{0}+\delta_{0}\zeta(\delta_{0}-\gamma_{0})}{\delta_{0 }-\gamma_{0}+\delta_{0}\zeta(\delta_{0}+\gamma_{0})}=\delta_{0}\zeta^{-\delta _{0}}, \tag{101}\] where in the last step, we have used the two-valued properties of \(\delta_{o}\) and \(\gamma_{o}\). Therefore \(\langle\hat{X}_{1}\rangle/\langle\hat{X}_{2}\rangle\) is time-independent during the evolution for the initial product state if the edge mode is decoupled form all the bulk modes in Liouville-Fock space. This can be used to clarify the existence of LMEMs in this dissipative system. The edge modes are topologically protected by the internal symmetry of the system. The influences of perturbations on the system can be characterised by introducing additional interaction \(H^{\prime}\) to the Hamiltonian \(H\), or new dissipator \(L^{\prime}\) into the Lindblad equation. The edge modes are decoupled from the bulk modes as long as the corresponding Lindbladians in Liouville-Fock space are commuted with \(\kappa_{1}\) and \(\kappa_{4N}\), namely, \([X_{H^{\prime}},\kappa_{1}]=[X_{H^{\prime}},\kappa_{4N}]=0\). Since \[\kappa_{1}|\rho\rangle\rangle \rightarrow-\sigma_{1}^{x}\mathcal{M}\rho\mathcal{M}\sigma_{1}^{x},\] \[\kappa_{4N}|\rho\rangle\rangle \rightarrow i\sigma_{N}^{x}\rho\mathcal{M}\sigma_{N}^{x}, \tag{102}\] \[X_{H^{\prime}}|\rho\rangle\rangle \rightarrow[H^{\prime},\rho],\] \[X_{L^{\prime}}|\rho\rangle\rangle \rightarrow 2L^{\prime\dagger}\rho L^{\prime}-L^{\prime}L^{\prime\dagger} \rho-\rho L^{\prime}L^{\prime\dagger}.\] Using the spin language, we can rewrite \([X_{H^{\prime}},\kappa_{1}]|\rho\rangle\rangle=0\) as \[[\mathcal{M}\sigma_{1}^{x}H^{\prime}\sigma_{1}^{x}\mathcal{M}-H^{\prime}, \rho]=0, \tag{103}\] which is valid for any given density matrix \(\rho\). This leads to the following constraints for \(H^{\prime}\) as \[[H^{\prime},\sigma_{N}^{x}]=[H^{\prime},\sigma_{1}^{x}]=[H^{\prime},\mathcal{ M}]=0. \tag{104}\] Similar discussions also hold for additional dissipator \(L^{\prime}\) by noticing \([X_{L^{\prime}},\kappa_{1}]|\rho\rangle\rangle=0\), and we have \[2(\mathcal{M}\sigma_{1}^{x}L^{\prime\dagger}\sigma_{1}^{x} \mathcal{M}\rho\mathcal{M}\sigma_{1}^{x}L^{\prime}\sigma_{1}^{x}\mathcal{M}-L^ {\prime\dagger}\rho L^{\prime}) -(\mathcal{M}\sigma_{1}^{x}L^{\prime}L^{\prime\dagger}\sigma_{1}^ {x}\mathcal{M}-L^{\prime}L^{\prime\dagger})\rho\] \[-\rho(\mathcal{M}\sigma_{1}^{x}L^{\prime}L^{\prime\dagger}\sigma_ {1}^{x}\mathcal{M}-L^{\prime}L^{\prime\dagger})=0. \tag{105}\] To ensure that the above identity holds for any density matrix \(\rho\), we have \[[L^{\prime},\sigma_{N}^{x}]=[L^{\prime},\sigma_{1}^{x}]=[L^{\prime},\mathcal{ M}]=0. \tag{106}\] In the main text, the existence of LMEMs is verified for different initial states and observables. Both of them can be re-expressed as bulk-edge product vectors in Liouville-Fock space. 
Specifically, for \(N=8\) and \(\rho_{0}=[I+\sum_{j=2}^{N-1}0.2(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z}) ](I+0.5\prod_{j=1}^{N}\sigma_{j}^{z})/2^{N}\), all the corresponding operators \(M_{j}=\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z}\) (\(2\leq j\leq N-1\)) satisfy \((\delta_{M},\gamma_{M})=(1,1)\) and belong to the \(A\)-category discussed above. The relevant vector of \(\rho_{0}\) is product in Liouville-Fock space and reads \[|\rho_{0}\rangle\rangle=(|I_{+}\rangle\rangle+0.2\sum_{j=2}^{N-1}|M_{j,+} \rangle\rangle)(1.5|1\rangle\rangle-0.5|0\rangle\rangle)/2^{N}, \tag{107}\] with \(\zeta=0.5\), \(M_{j}=\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z}\). Similarly, using the following maps \[\sigma_{j}^{z} \longrightarrow|\sigma_{j,+}^{z}\rangle\rangle(|1\rangle\rangle-|0 \rangle\rangle),\ (j\neq 1,N) \tag{108}\] \[\sigma_{j}^{x}\sigma_{j+1}^{x} \longrightarrow|(\sigma_{j}^{x}\sigma_{j+1}^{x})_{+}\rangle\rangle(|1 \rangle\rangle-|0\rangle\rangle), \tag{109}\] we find that the relevant state vectors in Liouville-Fock space for the observables \(X_{1}=\sum_{j=2}^{N-1}(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{j}^{z})\) and \(X_{2}=X_{1}\mathcal{M}\) can be written as \[|X_{1}\rangle\rangle=|M_{+}\rangle\rangle(|1\rangle\rangle-|0\rangle\rangle), \hskip 28.452756pt|X_{2}\rangle\rangle=|M_{+}\rangle\rangle(|1\rangle \rangle+|0\rangle\rangle), \tag{110}\] where \[|M_{+}\rangle\rangle=|\sum_{j=2}^{N-1}(\sigma_{j}^{x}\sigma_{j+1}^{x}+\sigma_{ j}^{z})_{+}\rangle\rangle. \tag{111}\] ## Appendix E Correlation \(\langle\langle i\kappa_{1}\kappa_{4N}\rangle\rangle\) in Liouville-Fock space and the Purity of \(\rho\) Since any density matrix \(\rho\) can be expanded using pairs of Hermitian operators \(\{\hat{O}_{j},\hat{O}_{j}\mathcal{M}\}\), the corresponding state vector in Liouville-Fock space can always be written as \[|\rho\rangle\rangle=\sum_{j}r_{j}|\rho_{j}\rangle\rangle=\sum_{j}r_{j}|\hat{O} _{j,+}\rangle\rangle(a_{j}|1\rangle\rangle+b_{j}|0\rangle\rangle). \tag{10}\] Therefore the occupation number \(\langle\langle\rho(t)|d_{e}^{\dagger}d_{e}|\rho(t)\rangle\rangle\) of the edge mode for the given vector \(|\rho(t)\rangle\rangle\) in Liouville-Fock space reads \[\langle\langle\rho(t)|d_{e}^{\dagger}d_{e}|\rho(t)\rangle\rangle=\sum_{ij}r_{ i}^{*}r_{j}\langle\langle\rho_{i}|d_{e}^{\dagger}d_{e}|\rho_{j}\rangle \rangle=\sum_{ij}r_{i}^{*}r_{j}a_{i}^{*}a_{j}\langle\langle\hat{O}_{i,+}|\hat{ O}_{j,+}\rangle\rangle. \tag{11}\] Using the relations \(\langle\langle\hat{O}_{i,+}|\hat{O}_{j,+}\rangle\rangle=2^{N-1}\delta_{ij}\), and \(i\kappa_{1}\kappa_{4N}=2d_{e}^{\dagger}d_{e}-1\), we immediately obtain \[\langle\langle\rho(t)|i\kappa_{1}\kappa_{4N}|\rho(t)\rangle\rangle=2^{N-1} \sum_{i}r_{i}^{2}(a_{i}^{2}-b_{i}^{2}). \tag{12}\] Meanwhile, the purity \(\mathrm{tr}(\rho^{2})\) of the density matrix \(\rho\) can be re-expressed in Liouville-Fock space as \[\langle\langle\rho(t)|\rho(t)\rangle\rangle=\sum_{ij}r_{i}^{*}r_{j}\langle \langle\rho_{i}|\rho_{j}\rangle\rangle=2^{N-1}\sum_{i}r_{i}^{2}(a_{i}^{2}+b_{i }^{2}). \tag{13}\] This means that for a bulk-edge product state in Liouville-Fock space with \((a_{j},b_{j})=(a,b)\) for all \(j\), the correlation \(\langle\langle i\kappa_{1}\kappa_{4N}\rangle\rangle=\langle\langle\rho(t)|i \kappa_{1}\kappa_{4N}|\rho(t)\rangle\rangle\) is directly linked with \(\mathrm{tr}(\rho^{2})\), and satisfies \[\langle\langle i\kappa_{1}\kappa_{4N}\rangle\rangle=\frac{a^{2}-b^{2}}{a^{2} +b^{2}}\mathrm{tr}(\rho^{2}). \tag{14}\] For the initial state discussed in the main text \[\rho_{0}=[I+0.3\prod_{j=1}^{N}\sigma_{j}^{z}(\sigma_{1}^{z}+\sigma_{1}^{y} \sigma_{2}^{x}+\sigma_{1}^{y}\sigma_{3}^{x}+\sigma_{2}^{z}\sigma_{2}^{x} \sigma_{3}^{x})](I+0.4\prod_{j=1}^{N}\sigma_{j}^{z})/2^{N}, \tag{15}\] the corresponding vector in Liouville-Fock space can be written as \[|\rho_{0}\rangle\rangle=|\rho_{0,+}\rangle\rangle\otimes[(1+\zeta)|1\rangle \rangle-(1-\zeta)|0\rangle\rangle]/2^{N} \tag{16}\] with \(\zeta=0.4\) and \[|\rho_{0,+}\rangle\rangle=|I_{+}\rangle\rangle+0.3(|(\sigma_{1}^{z})_{+} \rangle\rangle+|(\sigma_{1}^{y}\sigma_{2}^{x})_{+}\rangle\rangle+|(\sigma_{1}^{y} \sigma_{3}^{x})_{+}\rangle\rangle+|(\sigma_{2}^{z}\sigma_{2}^{x}\sigma_{3}^{x} )_{+}\rangle\rangle). \tag{17}\] This state has nonzero components in subspaces defined by \(\{p_{j}=1(\forall j)\}\) and \(\{p_{j\neq 1}=1(\forall j),p_{1}=-1\}\). For larger \(\gamma\), the excited states for eigenvalue \(\lambda\) with the minimum \(|\mathrm{Im}(\lambda)|>0\) are degenerate in the subspace \(\{p_{j\neq 1}=1(\forall j),p_{1}=-1\}\) and read \[\rho_{1}^{0} = (\alpha\sigma_{1}^{z}+\sigma_{1}^{y}\sigma_{2}^{x})\prod_{j=1}^{N }\sigma_{j}^{z}, \tag{18}\] \[\rho_{1}^{1} = \alpha\sigma_{1}^{y}\sigma_{2}^{x}+\sigma_{1}^{z} \tag{19}\] with \(\alpha=(i\gamma\pm\sqrt{\gamma^{2}-J^{2}})/J\). The above excited states and the stationary states can then be viewed as combinations of the following operators \[M=\{I,\prod_{i}\sigma_{i}^{z},\sigma_{1}^{y}\sigma_{2}^{x},\sigma_{1}^{z}\} \cup\{I,\prod_{i}\sigma_{i}^{z},\sigma_{1}^{y}\sigma_{2}^{x},\sigma_{1}^{z}\} \prod_{i}\sigma_{i}^{z}.\] The purity \(\mathrm{tr}(\rho^{2})\) can be approximated as \[\langle\langle\rho|\rho\rangle\rangle=\mathrm{tr}(\rho^{2})\simeq\frac{1}{2^ {N}}\sum_{O_{i}\in M}\langle O_{i}\rangle^{2}. \tag{20}\] For the given initial state \(|\rho_{0}\rangle\rangle\), since the following relations hold \[\langle\sigma_{1}^{y}\sigma_{2}^{x}\prod_{i}\sigma_{i}^{z}\rangle=\zeta\langle \sigma_{1}^{y}\sigma_{2}^{x}\rangle,\ \ \ \ \langle\sigma_{1}^{z}\prod_{i}\sigma_{i}^{z}\rangle=\zeta \langle\sigma_{1}^{z}\rangle,\ \ \ \ \langle\prod_{i}\sigma_{i}^{z}\rangle=\zeta, \tag{100}\] we finally have \[\langle\langle\rho|i\kappa_{1}\kappa_{4N}|\rho\rangle\rangle\simeq\frac{1}{2^{ N-1}}\zeta\Big{(}1+\langle\sigma_{1}^{z}\rangle^{2}+\langle\sigma_{1}^{y} \sigma_{2}^{x}\rangle^{2}\Big{)}. \tag{101}\]
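Two ingredients used in this appendix are easy to confirm numerically: the identification of the Liouville-Fock overlap \(\langle\langle\rho|\rho\rangle\rangle\) with the purity \(\mathrm{tr}(\rho^{2})\), and the orthogonality \(\mathrm{tr}(\hat{O}_{i}\hat{O}_{j})=2^{N}\delta_{ij}\) of Pauli strings behind \(\langle\langle\hat{O}_{i}|\hat{O}_{j}\rangle\rangle=2^{N}\delta_{ij}\). The short check below is illustrative only, using an arbitrary random density matrix and \(N=3\).

```python
import numpy as np
from functools import reduce
from itertools import product

N = 3
pauli = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
strings = [reduce(np.kron, combo) for combo in product(pauli, repeat=N)]

# (i) <<rho|rho>> (the Hilbert-Schmidt norm squared) equals the purity tr(rho^2)
rng = np.random.default_rng(1)
A = rng.normal(size=(2**N, 2**N)) + 1j * rng.normal(size=(2**N, 2**N))
rho = A @ A.conj().T
rho /= np.trace(rho)
print(np.isclose(np.trace(rho @ rho).real, np.vdot(rho, rho).real))

# (ii) Pauli strings are orthogonal: tr(O_i O_j) = 2^N delta_ij
i, j = 5, 17
print(np.trace(strings[i] @ strings[i]).real / 2**N,   # -> 1.0
      abs(np.trace(strings[i] @ strings[j])))          # -> 0.0
```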
2308.00969
NuSTAR Observations of Abell 665 and 2146: Constraints on Non-Thermal Emission
Observations from past missions such as RXTE and Beppo-SAX suggested the presence of inverse Compton (IC) scattering at hard X-ray energies within the intracluster medium of some massive galaxy clusters. In subsequent years, observations by, e.g., Suzaku, and now NuSTAR, have not been able to confirm these detections. We report on NuSTAR hard X-ray searches for IC emission in two massive galaxy clusters, Abell 665 and Abell 2146. To constrain the global IC flux in these two clusters, we fit global NuSTAR spectra with three models: single (1T) and two-temperature (2T) models, and a 1T plus power law component (T$+$IC). The temperature components are meant to characterize the thermal ICM emission, while the power law represents the IC emission. We find that the 3-30 keV Abell 665 and 3-20 keV Abell 2146 spectra are best described by thermal emission alone, with average global temperatures of $kT = (9.15\pm 0.1)$ keV for Abell 665 and $kT = (8.29\pm 0.1)$ keV for Abell 2146. We constrain the IC flux to $F_{\rm NT} < 0.60 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ and $F_{\rm NT} < 0.85 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (20-80 keV) for Abell 665 and Abell 2146, respectively both at the 90% confidence level. When we couple the IC flux limits with 1.4 GHz diffuse radio data from the VLA, we set lower limits on the average magnetic field strengths of $>$0.14 $\mu$G and $>$0.011 $\mu$G for Abell 665 and Abell 2146, respectively.
Randall Rojas Bolivar, Daniel Wik, Ayşegül Tümer, Fabio Gastaldello, Julie Hlavacek-Larrondo, Paul Nulsen, Valentina Vacca, Grzegorz Madejski, Ming Sun, Craig Sarazin, Jeremy Sanders, Damiano Caprioli, Brian Grefenstette, Niels-Jorgen Westergaard
2023-08-02T06:58:31Z
http://arxiv.org/abs/2308.00969v1
# _NuSTAR_ Observations of Abell 665 and 2146: Constraints on Non-thermal Emission ###### Abstract Observations from past missions such as _RXTE_ and _Beppo-SAX_ suggested the presence of inverse Compton (IC) scattering at hard X-ray energies within the intracluster medium of some massive galaxy clusters. In subsequent years, observations by, e.g., _Suzaku_, and now _NuSTAR_, have not been able to confirm these detections. We report on _NuSTAR_ hard X-ray searches for IC emission in two massive galaxy clusters, Abell 665 and Abell 2146. To constrain the global IC flux in these two clusters, we fit global _NuSTAR_ spectra with three models: single (1T) and two-temperature (2T) models, and a 1T plus power law component (T+IC). The temperature components are meant to characterize the thermal ICM emission, while the power law represents the IC emission. We find that the 3-30 keV Abell 665 and 3-20 keV Abell 2146 spectra are best described by thermal emission alone, with average global temperatures of \(kT=(9.15\pm 0.1)\) keV for Abell 665 and \(kT=(8.29\pm 0.1)\) keV for Abell 2146. We constrain the IC flux to \(F_{\rm NT}<0.60\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(F_{\rm NT}<0.85\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) (20-80 keV) for Abell 665 and Abell 2146, respectively both at the 90% confidence level. When we couple the IC flux limits with 1.4 GHz diffuse radio data from the VLA, we set lower limits on the average magnetic field strengths of \(>\)0.14 \(\mu\)G and \(>\)0.011 \(\mu\)G for Abell 665 and Abell 2146, respectively. galaxies: clusters: general -- galaxies: clusters: individual (Abell 665, Abell 2146) -- intergalactic medium -- magnetic fields -- radiation: non-thermal -- X-rays: galaxies: clusters ## 1 Introduction Galaxy cluster mergers are the most energetic events in the Universe since the Big Bang, combining two or more clusters to masses on the order of \(\sim\)10\({}^{15}\) M\({}_{\odot}\) and releasing upwards of 10\({}^{65}\) ergs in kinetic energy (Markevitch et al., 1999). These mergers heat the cluster plasma, and introduce shocks and turbulence into the intracluster medium (ICM). These shock fronts cause compressed magnetic field lines in the ICM which can (re)-accelerate existing relativistic particles through first order Fermi acceleration. There must be relativistic particles already present in order for this to occur because Fermi acceleration at weak merger shocks can only efficiently accelerate particles already exceeding thermal energies. That is to say, these shocks have low Mach numbers (\(M_{s}<3\)), so their acceleration efficiency is low, thus preventing particles from being accelerated directly from the thermal pool (Kang, 2017). Such shocks lead to the production of radio relics. Turbulence produced in the merger also (re)-accelerates electrons, leading to radio halos (Brunetti & Jones, 2014). The same accelerated relativistic electrons radiating synchrotron emission in the radio would be expected to produce non-thermal emission in the form of inverse Compton (IC) scattering of the cosmic microwave background in the hard X-ray bands. Measuring the non-thermal flux from these collisions is crucial, since not including a potential source of pressure could bias mass estimates of clusters based on hydrostatic equilibrium (e.g., Bahcall & Cen, 1993; Vikhlinin et al., 2009; Ettori et al., 2019). 
Additionally, the ratio of the upper limit of the IC flux (\(F_{X}\)) to the radio flux (\(F_{R}\)) can provide a lower limit on the average magnetic field strength \(B\) of the cluster. For a single relativistic electron, the ratio \(F_{R}/F_{X}\) is the ratio of energy densities, \(U\), of the fields that the electron is scattering: \[\frac{F_{R}}{F_{X}}=\frac{U_{B}}{U_{\rm CMB}}=\frac{B^{2}/8\pi}{aT_{\rm CMB}^{4 }}\,. \tag{1}\] Extending this to a power law energy distribution of electrons emitting IC and synchrotron emission at different energies and momenta, we obtain the following expression for the magnetic field strength required to account for both: \[B=C(p)(1+z)^{(p+5)/(p+1)}\times\left(\frac{F_{R}}{F_{X}}\right)^{2/(p+1)}\left( \frac{\nu_{R}}{\nu_{X}}\right)^{(p-1)/(p+1)}\,, \tag{2}\] where \(p\) is the index of electron distribution (\(N(E)\propto E^{-p}\) and related to \(\alpha\), the spectral index by \(p=2\alpha+1\)) and \(C(p)\) is a proportionality constant (Rybicki & Lightman, 1979; Longair, 1994). IC emissions from nearby clusters, like the Coma Cluster, have been detected by _RXTE_(Rephaeli & Gruber, 2002) and _Beppo-SAX_(Fusco-Femiano et al., 2004). These results prompted further investigations by _Suzaku_ and _Swift_, but the latter attempts failed (Ota, 2012). A later study done by Gastaldello et al. (2015) using _NuSTAR_ provided a less stringent upper limit restricted to the core of the cluster due to limitations of the telescope's field of view (FOV), which prevents it from capturing the entirety of the radio halo, and bright thermal component. Mosaic observations have been taken and are currently being studied to provide a more accurate limit. Several other clusters reported in Rephaeli et al. (2008) have marginal claims, however some, such as Abell 2163 (Rephaeli et al., 2006), have been ruled out using _NuSTAR_(Rojas Bolivar et al., 2021). In addition to the aforementioned Coma and Abell 2163 studies, _NuSTAR_ has provided upper limits on IC emission in the Bullet Cluster (Wik et al., 2014) and Abell 523 (Cova et al., 2019). Abell 665 (hereafter, A665, \(z\sim 0.1819\)(Franx, 1993)) is the only cluster in the Abell catalog to receive a richness class of 5, meaning that it contains at least 300 individual galaxies that are no fainter by 2 magnitudes of the third brightest galaxy (Abell et al., 1989). X-ray data taken from _ROSAT_ suggests that the cluster is going through a merger, composed of two similar mass clusters at core crossing (Gomez et al., 2000). The cluster is host to a giant radio halo (Moffet & Birkinshaw, 1989; Vacca et al., 2010). X-ray observations by _Chandra_ show a possible shock upstream from the core that correlates with the radio emission and a temperature jump from 8 keV to 15 keV (Markevitch & Vikhlinin, 2001; Govoni et al., 2004). This temperature jump was later shown in Dasadia et al. (2016) to correspond to a Mach number of \(M_{s}\sim 3\), the second largest measured behind the Bullet Cluster. Temperature measurements done by Hughes & Tanaka (1992) determined the temperature to be \(kT=8.26^{+0.95}_{-0.81}\) keV using the _Ginga_ satellite. A spectral analysis performed by Million & Allen (2009) using _Chandra_ suggests a possible detection of IC scattering, with a non-thermal 0.6-7 keV flux of \(F_{NT}\sim 4.2^{+1.4}_{-1.2}\)\(\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\). 
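As a rough numerical illustration of Equation (1) (this is not the calculation used for the limits quoted later, which rely on the full power-law expression of Equation (2) and its normalization \(C(p)\) from Rybicki & Lightman 1979): if the same electron population produces a band-integrated synchrotron flux \(F_{R}\) and IC flux \(F_{X}\) in the same units, then an upper limit on \(F_{X}\) translates into a lower limit on \(B\) through the ratio of energy densities, with \(U_{\rm CMB}\propto(1+z)^{4}\) at the cluster redshift. The flux ratios used below are placeholders.

```python
import numpy as np

# Single-electron limit of Eq. (1): F_R / F_X = U_B / U_CMB, so
# B_min = sqrt(8 pi * a * [T_CMB (1+z)]^4 * F_R / F_X), with F_R and F_X
# band-integrated fluxes in the same units (the ratio is dimensionless).
a_rad = 7.5657e-15      # radiation constant [erg cm^-3 K^-4]
T_cmb = 2.725           # CMB temperature today [K]

def b_lower_limit(flux_ratio, z):
    """Lower limit on B in Gauss given F_R/F_X (dimensionless) at redshift z."""
    u_cmb = a_rad * (T_cmb * (1.0 + z)) ** 4
    return np.sqrt(8.0 * np.pi * u_cmb * flux_ratio)

# A flux ratio of 1 at z = 0 returns the field whose energy density matches the
# CMB (~3.2 microgauss); the other rows use placeholder ratios at the two
# cluster redshifts purely to show the scaling.
for z, ratio in [(0.0, 1.0), (0.1819, 1e-3), (0.2323, 1e-4)]:
    print(f"z = {z:6.4f}, F_R/F_X = {ratio:7.1e} -> B > {1e6 * b_lower_limit(ratio, z):.3f} uG")
```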
Abell 2146 (hereafter, A2146, \(z\sim 0.2323\)(White et al., 2015)), is a rare type of cluster in that it has two clearly observable shock fronts (Russell et al., 2010) at a favorable inclination along the plane of the sky (Hlavacek-Larrondo et al., 2018), in a similar manner to that of the Bullet Cluster. The merger between two roughly equal mass subclusters (King et al., 2016) is estimated to have the first core passage less than 0.1 Gyr ago (Russell et al., 2010). A2146 was thought of as an anomaly among merging clusters, since prior to a study done using the VLA by Hlavacek-Larrondo et al. (2018) it was thought to lack large diffuse radio emission. This study reveals a faint, extended radio structure, with the upstream shock containing a radio relic and the bowshock containing a radio halo. Measurements by _Chandra_ estimate the global temperature of the cluster to be \(7.5\pm 0.3\) keV (Russell et al., 2010). There have been no published results of non-thermal studies done on this cluster despite its ideal orientation, likely because of its faint radio halo until it was measured by Hlavacek-Larrondo et al. (2018). In this paper, we present three deep _NuSTAR_(Nuclear Spectroscopic Telescope Array; Harrison et al. (2013)) observations, one of A2146 and two of A665, in order to measure non-thermal IC fluxes and use them in conjunction with data obtained from the VLA (Very Large Array) in the 1.4 GHz range to constraint the lower limit of the magnetic field strength. In Section 2, we discuss our data reduction procedures. In Section 3, we discuss how we analyzed the data and determine which model best describes the emission from the galaxy clusters. In Section 4, we compare our results with previous IC searches and discuss the implications of our results. Appendix A provides an in depth explanation of the background characterization and background modelling for these clusters and in Appendix B some information regarding an issue that we've noticed with the gain that has progressively increased as _NuSTAR_ ages. For this paper, all errors shown are at the 90% confidence level. ## 2 Observations and Data Reduction ### X-ray A665 was observed by _NuSTAR_ in segmented observations due to observation windows, the first for a total raw exposure time of 97 ks and the second, reoriented in order to better capture the shock front, for a total raw exposure time of 91 ks, including periods when the cluster was occulted by the Earth. The observations were performed between May 10th, 2017 and May 14th, 2017. The A2146 observation occurred between November 19th, 2018 and November 24th, 2018 for a total raw exposure time of 285 ks. The standard pipeline processing from HEASoft version 6.26 and NuSTARDAS version 1.9.4 were used to filter the data. The first step in the procedure was to remove high background periods from the data since our analysis is very sensitive to the background and any variations within it. Typically, this is achieved automatically by turning on STRICT mode, which detects when _NuSTAR_ has passed through the South Atlantic Anomaly (SAA), and the TENTACLE flag, which filters for time intervals when the detectors have increased count rates when passing through the SAA, within nupipeline.1 Footnote 1: [https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar_swguide.pdf](https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar_swguide.pdf) We instead chose to manually filter our data, as this automatic filtering can be too strict and remove good data. 
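The manual screening described in detail below was done by inspecting the binned light curves by eye; the sketch that follows is only an illustrative automation of one such pass, and the 3\(\sigma\) threshold, the handling of the bin edges, and the synthetic event list are assumptions rather than the procedure actually used.

```python
import numpy as np

def make_gtis(event_times, bin_size=100.0, nsigma=3.0, max_iter=10):
    """Bin an event list, clip bins well above the local rate distribution,
    and merge the surviving bins into good time intervals (GTIs)."""
    edges = np.arange(event_times.min(), event_times.max() + bin_size, bin_size)
    rates = np.histogram(event_times, bins=edges)[0] / bin_size
    good = np.ones(rates.size, dtype=bool)
    for _ in range(max_iter):                     # iterative sigma clipping
        mu, sd = rates[good].mean(), rates[good].std()
        new_good = good & (rates <= mu + nsigma * sd)
        if new_good.sum() == good.sum():
            break
        good = new_good
    gtis, start = [], None
    for i, ok in enumerate(good):                 # merge consecutive good bins
        if ok and start is None:
            start = edges[i]
        if start is not None and (not ok or i == good.size - 1):
            gtis.append((start, edges[i + 1] if ok else edges[i]))
            start = None
    return np.array(gtis)

# Fake event list: a quiescent ~0.4 ct/s stream plus a short flare; in practice
# the inputs would be the 50-160 keV and 1.6-20 keV event times of each telescope.
rng = np.random.default_rng(0)
times = np.concatenate([rng.uniform(0.0, 5.0e4, 20000),
                        rng.uniform(2.0e4, 2.1e4, 4000)])     # flaring interval
print(make_gtis(np.sort(times))[:5])
```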
Manual filtering is achieved by turning off the aforementioned flags and extracting good time intervals (GTIs) using light curves created for both the FPMA and FPMB telescopes by lcfilter.2 The light curves are binned in 100 s bins, and from these bins we manually find and exclude time intervals where the count rate is higher than the local distribution. We repeat our exclusions three times: first to delete counts corresponding to high background due to the presence of the SAA in the 50-160 keV energy range, then again in the same energy range to more stringently reduce residual SAA contributions, and finally in the 1.6-20 keV energy range to exclude high background counts from possible solar activity. After our manual filtering, our exposure time was reduced to 91 ks in the first A665 observation, 85 ks in the second, and 255 ks in the A2146 observation. Figure 1 shows our final light curves after filtering. Footnote 2: [https://github.com/danielrewik/reduce](https://github.com/danielrewik/reduce) The next step in our data reduction procedure was to take the newly filtered GTIs and reprocess the data using nupipeline. Following reprocessing, we created images of the clusters using XSELECT and produced exposure maps with nuexpomap. Products for spectral fitting, namely the spectra, response matrices, and auxiliary response files (PHAs, RMFs, and ARFs, respectively), were produced using nuproducts. The clean, smoothed, combined, 4-25 keV energy band images of the clusters and the source regions used for analysis in the following section are shown in Figure 2. Background analysis was done following the procedures outlined in Wik et al. (2014) and Rojas Bolivar et al. (2021). We present the specific background modelling for these clusters in Appendix A. ## 3 Analysis In order to exclude possible point sources that might have had an effect on our spectral analysis, we generated images of the clusters in different energy bands by using xselect to filter the PHA column. Figure 3 shows 3-8, 8-15, 15-30, and 30-40 keV exposure-corrected, background-subtracted images. The background was modelled using the nuskybgd routine and is discussed in detail in Appendix A. These images show no evidence of major bright point sources that can contaminate our spectra. ### Spectra Spectra were extracted using the regions shown in Figure 2 with nuproducts and fit using XSPEC. The fitted spectra for each of the clusters are shown in Figure 4. In both clusters, the FPMA and FPMB spectra vary negligibly. For A665, we jointly fit both observations, tying together each of the model parameters, except for model normalizations. All our spectra are grouped by 30 counts per channel. We use the modified Cash statistic (statistic cstat in XSPEC) to find best-fit parameters for each model (Arnaud, 1996), as this allows us to cut down on the time to fit the data while avoiding loss of information (Rojas Bolivar et al., 2021). Additionally, the Cash statistic does not bias our result like \(\chi^{2}\) would, as we are working with Poissonian data. For A2146, all spectra were fit between 3 and 20 keV instead of 30 keV due to difficulties fitting the data above 20 keV. ### Models In clusters with radio emission in the form of radio relics and radio halos like A665 and A2146, IC emission must be present, since the two necessary ingredients--relativistic electrons and the CMB--are known to inhabit the ICM (Liang et al., 2002). 
When the IC emission is weak with respect to the thermal emission, as may be the case with these clusters, the model we select to fit the data needs to be able to separate the different types of emission. We used three different models to attempt to characterize the emission of these clusters: a single temperature (1T), a two temperature (2T), and a single temperature plus a power law (T+IC) (Ota et al., 2014; Wik et al., 2014; Rojas Bolivar et al., 2021). The thermal emission is represented by the APEC model (AtomDB version 3.0.9) within XSPEC, which contains as parameters the temperature, abundance, redshift, and normalization. We allow the metal abundances to be free during the fitting process and we use the abundance table wilm(Wilms et al., 2000). Additionally, as shown in Rojas Bolivar et al. (2021), we can ignore including a foreground absorption model like phabs or tbgas due to _NuSTAR_'s effective area not being sensitive to energies below 3 keV. We also allowed the redshift to be free, which changed the redshifts from 0.189 to 0.201 for A665, and from 0.232 to 0.259 for A2146. Details about why leaving the redshift free results in a better fit are discussed in Appendix B. The results of our fits are discussed are presented in Table 1 and discussed in detail in the following sections. #### 3.2.1 Single Temperature In a merging cluster scenario, one would expect spatial temperature variations across the cluster due to the merger disturbing the gas. While most likely not resulting in the best fitting model to characterize the emission from the cluster, we can still gain some information from fitting a 1T model to the data, namely the average temperature of the galaxy cluster. Previous work using _Ginga_ data gave an average cluster temperature of \(8.26^{+0.95}_{-0.81}\) keV for A665 (Hughes and Tanaka, 1992). Our 1T fit using _NuSTAR_ results in \(kT=9.15\pm 0.1\) keV (statistical uncertainty only), within range of the _Ginga_ result but more precise. The C-stat value for this fit is 2945 with 2681 degrees of freedom (dof). For A2146 a temperature of \(7.5\pm 0.3\) keV was measured using _Chandra_ in Russell et al. (2010). Our best fit temperature came out to be \(kT=8.29\pm 0.1\) keV, slightly higher than the _Chandra_ temperature. The 2.5\(\sigma\) disagreement is expected since the merging cluster is far from isothermal (Russell et al., 2022) and _NuSTAR_ has a harder response than _Chandra_. The C-stat value for this fit is 871 with 841 dof. #### 3.2.2 Two Temperature Truly determining the temperature structure of merging clusters is difficult. We know that in galaxy cluster mergers the gas is typically distributed non-isothermally, such as in Abell 2163 (Bourdin et al., 2011) and in A2146 (Russell et al., 2022). Past attempts at Figure 1: Filtered light curves for FPMA (top panel) and FPMB (bottom panel) telescopes following the process described in Section 2.1. The light curves have been filtered in the 50-160 keV energy range to eliminate SAA background contributions as well as the 1.6-20 keV range to remove solar activity background contributions. The set of light curves are in order from left to right as A665 (OBSID: 70201002002), A665 (OBSID: 70201003002), and A2146 (OBSID: 70401001002). Figure 2: A false color (faint to bright represented by black to blue to green to yellow to red to white) combined (A+B) log scaled images from 4–25 keV, smoothed by a Gaussian kernel with \(\sigma=3\) pix, and stretched to show features in the outer parts of the FOV. 
The source regions from which spectra were extracted are shown as the black circles. Superimposed in blue are radio contours obtained from Vacca et al. (2010) for A665 and Hlavacek-Larrondo et al. (2018) for A2146. For A665, we display the total intensity radio contours at 1.4 GHz (combined VLA data in C and D configurations). For A2146, we display low-resolution 1-2 GHz contours. Top (left to right): A665 (OBSID: 70201002002) and A665 (OBSID: 70201003002) Bottom: A2146. Figure 3: A665 (OBSID 70201002002 on left and OBSID 70201003002 in the middle) and A2146 (right) in different energy bands. In each panel; top left: 3–8 keV; top right: 8–15 keV; bottom left: 15–30 keV; bottom right: 30–40 keV. Each image has been background subtracted and exposure corrected. They are presented in a log scale from 0 counts s\({}^{-1}\) pix\({}^{-1}\) (in black) to 20+ counts s\({}^{-1}\) pix\({}^{-1}\) (in white) and smoothed by a Gaussian kernel with \(\sigma\) = 3 pix. There are fewer cluster counts present in the higher energy images and there are no obvious morphological changes with respect to the lower energy images, which are dominated by thermal photons. measuring the temperature structure in merging galaxy clusters are often biased by which energy bands are favored by a telescope's effective area, calibrations, and projection effects. With _NuSTAR_, so long as there are not large amounts of features within the thermal continuum, we can use a 2T model to characterize non-isothermal gas while also taking into account that the telescope is weighted towards hotter temperatures (Rojas Bolivar et al., 2021). In A665, we find the higher (\(T_{\rm H}\)) and lower (\(T_{\rm L}\)) values for the two temperature components to be \(T_{\rm H}=9.34^{+2.9}_{-1.6}\) and \(T_{\rm L}=4.47^{+4.3}_{-2.6}\) keV, with a C-stat value of is 2933 with 2679 dof, suggesting a slightly better fit than the 1T model. In A2146, \(T_{\rm H}=10.9^{+1.7}_{-1.8}\) and \(T_{\rm L}=4.26^{+1.4}_{-1.3}\) keV and the C-stat value is 860 with 839 dof. #### 3.2.3 T+Ic In the case where a significant portion of the galaxy cluster emission is in the form of non-thermal, IC emission, the 2T model would likely show that with an unphysically hot \(T_{\rm H}\). In this scenario, it would be expected that the T+IC model would better fit the spectrum. While containing a single temperature component, the overall shape of the spectrum, especially at the highest energies, would be well fit by XSPEC's power law component that models the non-thermal emission within our bandpass. For both clusters, we adopted a photon index value of \(\Gamma=2\), as Feretti et al. (2004) and Hlavacek-Larrondo et al. (2018) measured a radio spectral index of \(\alpha\sim 1\). We do not allow the photon index to be free, as previous similar work with Abell 2163 has shown that photon index will behave like the \(T_{\rm L}\) component in the 2T model (Rojas Bolivar et al., 2021). Our temperature and the power law normalization parameters were left to be free. In A665 and A2146, the best-fit temperatures obtained from our T+IC model are \(kT=9.12\pm 0.2\) keV and \(kT=7.3\pm 0.3\) keV respectively, comparable with the single temperature model. The similar temperatures make sense because there are high energy photons that the hard non-thermal component accounts for. To obtain an estimate of the upper limit for the non-thermal flux, we used the upper limit on the confidence range for the power law normalization. 
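A schematic PyXspec version of these three fits is sketched below. The file name, starting values, and single-spectrum setup are placeholders: the actual analysis jointly fits the FPMA/FPMB spectra (and, for A665, both pointings) with the modeled background and ties all parameters except the normalizations, so this is only meant to show how the 1T, 2T, and T+IC models and the fixed \(\Gamma=2\) index are set up.

```python
from xspec import AllModels, Fit, Model, Spectrum, Xset

# Hedged PyXspec sketch of the 1T, 2T, and T+IC fits; file names and starting
# values are placeholders, not the actual data products.
Xset.abund = "wilm"                      # Wilms et al. (2000) abundance table
Fit.statMethod = "cstat"                 # modified Cash statistic
Fit.query = "yes"

spec = Spectrum("a2146_fpma_sr_grp30.pha")   # hypothetical grouped spectrum
spec.ignore("**-3.0 20.0-**")                # 3-20 keV band used for A2146

# 1T: single apec with free abundance and redshift
m1 = Model("apec")
m1.apec.kT = 8.0
m1.apec.Abundanc.frozen = False
m1.apec.Redshift.frozen = False
Fit.perform()
print("1T   C-stat =", Fit.statistic, " dof =", Fit.dof)

# 2T: two apec components with independent temperatures and normalizations
AllModels.clear()
m2 = Model("apec+apec")
m2.apec.kT, m2.apec_2.kT = 10.0, 4.0
Fit.perform()
print("2T   C-stat =", Fit.statistic, " dof =", Fit.dof)

# T+IC: apec plus a power law with the photon index fixed at Gamma = 2
AllModels.clear()
mt = Model("apec+powerlaw")
mt.apec.kT = 8.0
mt.powerlaw.PhoIndex = 2.0
mt.powerlaw.PhoIndex.frozen = True
Fit.perform()
print("T+IC C-stat =", Fit.statistic, " dof =", Fit.dof)

# 90% bound on the power-law normalization (delta C-stat = 2.706); evaluating
# the 20-80 keV flux of the power law alone at that bound gives the IC limit.
Fit.error("2.706 %d" % mt.powerlaw.norm.index)
mt.powerlaw.norm = mt.powerlaw.norm.error[1]
mt.apec.norm = 0.0                       # isolate the power-law component
AllModels.calcFlux("20.0 80.0")
```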
From this, we estimated the power law flux from 20-80 keV for the T+IC model to be \(F_{\rm NT}<0.595\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(F_{\rm NT}<0.85\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) with a 90% confidence level for A665 and A2146 respectively. The C-stat values for these models are 2938 with 2680 dof and 868 with 840 dof, which means that these models don't fit the data as well as the 2T models do. These results imply that the emission from these clusters is purely thermal in nature. Theoretically, a 2T+IC model would provide the most stringent constraints on both thermal and non-thermal emission from a merging galaxy cluster. In practice, however, the parameters are not well constrained, and either the second temperature component or the IC component becomes suppressed or adjusts to fit a few errant residuals, usually at the lowest or highest energy ends of the spectrum or around the Fe complex. This behavior results in a model effectively equivalent to either the 2T or T+IC. Thus, the uncertainty on the power law flux in the 2T+IC case ends up being equivalent to that in the T+IC model. When fitting such a model, the best fit temperature for one of the two thermal components becomes unphysically low and in essence becomes a T+IC model. In this case, the low temperature component is making small corrections at the lowest energies in the spectrum, where the instrument is somewhat less well-calibrated. When forcing both temperature components to be present, the IC emission becomes artificially suppressed based on the way in which the temperature components are constrained. The T+IC model, while more conservative, removes all the assumptions needed to be made to get a lower limit from the 2T+IC modelling. Thus the addition of an extra thermal component provides no advantage over a single temperature and power law model, which can already account for average thermal emission. In all of the spectra, the residuals at higher energies, where the spectrum is mostly background, have small error bars compared to the larger scatter. This is because the error bars are purely statistical and computed assuming that the background models used are accurate. In the 20 keV regime where the residuals show a large scatter they are underneath the background and not as accurate due to the imperfect modelling of the fluorescent lines here. The systematic error, typically on the order of several percent, is not well characterized and can affect these data points. The scatter of these residuals would be more in line with the size of their error bars if the systematic uncertainty, which hasn't been quantified in detail, were included in quadrature. #### 3.2.4 Preferred Model Including Systematic Uncertainties We summarize the results of the previous section in Table 1. What we saw from the different model C-stat values was that for both A665 (2933 with 2679 dof) and A2146 (860 with 840 dof), the 2T model best describes the data. This is consistent with previous temperature measurements of the clusters that show non-isothermal gas distributions (Dasadia et al., 2016; Russell et al. 2022). While the nominal C-stat values suggest the 2T model as the best-fitting model, we still have not taken into account background systematics that can greatly affect our models. It is well understood that the various background components _NuSTAR_ observes have varying degrees of systematic uncertainties associated with them (see Appendix A.2). 
To model the systematic errors, we created a distribution of best-fit temperatures for each model by randomly generating 1000 realizations of the background and then fitting the data again. Each new realization of the background had the normalizations of these three components shifted randomly, following a Gaussian distribution within the range of their systematic uncertainty. The distributions of the new best-fit parameters for A665 are shown in Figure 5 and for A2146 in Figure 6. The 1T model's shape is dependent solely on the temperature, meaning that any shifts in the background will have minimal effects on the temperature. The small change in temperature can be seen by the narrow width of the red histograms in Figures 5 and 6, where the systematic errors end up being comparable to the statistical errors. In the 2T and T+IC model scenarios, the extra parameter introduced will increase the effects of the background systematics within the model. With the inclusion of background uncertainties in the 2T model, we see a much greater effect on the temperature parameters, primarily \(T_{\rm H}\). A lower background, for example, when compared to the nominal values results in a higher \(T_{\rm H}\) since the spectrum is now turning over at a higher energy and vice versa. The \(T_{\rm L}\) temperature then adjusts along with the \(T_{\rm H}\) temperature, either increasing or decreasing to correct the low energy portion of the spectrum. The much more evident effects on the background on this model can be seen in the green histograms in Figures 5 and 6. The T+IC model behaves slightly differently than the 2T model when we include background systematics. The temperature parameter of this model is similar to the 1T model in that it is responsible for the shape of the spectrum. In this scenario, it is instead the power law normalizations that vary more with background changes. Any change to the normalization of the power law due Figure 4: Global fits to the A665 (_upper panels_) and A2146 spectra (_lower panels_) with 1T (_left panels_), 2T (_middle panels_), and T+IC (_right panels_) models. For the A665 spectra, black indicates FPMA and red indicates FPMB for Obs. ID 0201002002, whereas for Obs. ID 0201002002, FPMA and FPMB are indicated by green and orange, respectively. For the A2146 spectra, black indicates FPMA and red indicates FPMB. For both A665 and A2146, crosses show the data and the background is denoted with asterisks. The dashed curves correspond to the model components to visualize their contributions to the composite model. For plotting purposes, adjacent bins are grouped to ensure a detection significance of at least 10\(\sigma\), with maximum 20 bins. to shifts in the background results in a change to the overall IC flux measured, as the IC flux closely resembles the shape of the background. What we observe is that there are only small variations despite sometimes large changes to the background. Were there significant amounts of IC emission in these clusters, we would clearly see it here as the background would have great effects at higher energies. What we are seeing is that the T+IC model actually begins to behave like the \(T_{\rm L}\) parameter in the 2T model, where the IC flux behaves like a thermal component that corrects the model at low energies. These results are presented in the blue histograms in Figures 5 and 6. With our analysis so far, we have ruled out any detectable presence of IC scattering assuming our nominal background is the true background. 
In reality, the true background may vary by some amount from the nominal background (see Appendix A for more details). These variations must be taken into account in order to fully rule out the possibility of IC emission. To randomly vary the background components, we used the systematic uncertainties described in Appendix A as the standard deviation and then used the new backgrounds to create best fit models and compare them using their C-stat values. What we find is that even after taking into account systematic uncertainties, the 2T model is always preferred for describing the spectra over the T+IC model. This is reflected in our C-stat distribution histogram in Figures 5 and 6. While the difference in C-stat values varies somewhat, it is centered around the nominal values of 8 for A2146 and 5 for A665, with no iterations suggesting a T+IC detection. Out of our three models, we have found that the 2T model best describes the spectra of both A665 and A2146. This does not, however, tell us if the 2T model is the best fitting model for the data. The magnitude of the C-stat solely depends on the number of bins used and the data values, so it does not provide any information about the goodness-of-fit for a model to a set of data (Kaastra, 2017). It could still be the case that all of these models are poor fits to the data. To rule this out, we use the ftest in XSPEC to quantify how reasonable it is to add an extra model component from the T+IC model to the 2T model. We find F-test probabilities of 0.033 and 0.005 for A665 and A2146, meaning that there is a 97.3% and 99.5% chance, respectively, that the 2T model is truly the better model to fit the data compared to the T+IC model. Of course, the ICM of any cluster has a multi-temperature structure, but the spectral resolution and statistical quality of our data allows a 2T model to describe it adequately. ## 4 Summary and Discussion _NuSTAR_ observed A665 twice, once for a time period of 97 ks and the second time for a period of 91 ks, which were then cleaned down to 91 ks and 85 ks after a manual filtering process. The same was done for A2146, which was observed for a period of 285 ks and Figure 5: Top: The distribution of best-fit parameter values for the 1T (red), 2T (green), and T+IC (blue) models using the 1000 realizations of the background (as described in Section A.2) for A2146. Parameters shown are the temperatures of each model and the IC norm. Bottom: The distribution of the difference in C-stat values (\(\Delta C\)) between the T+IC and 2T models from fits using the 1000 realizations of the background. The 2T model is preferred in all iterations for describing the _NuSTAR_-observed spectra over the T+IC model, with the latter not having any realization favoring it over the 2T model. Therefore, we conclude that the data clearly disfavor the addition of a non-thermal component. reduced to 255 ks. After modelling and subtracting the background from our spectra, the data for both clusters was modelled with 1T, 2T, and T+IC models. Allowing for systematic background uncertainties, we showed that the 2T model was the best fitting model and also ruled out the presence of a large IC flux for both clusters. ### Non-thermal Emission The 90% upper limits 20-80 keV flux of non-thermal emission coming from A665 and A2146 is \(F_{\rm NT}<0.595\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(F_{\rm NT}<0.85\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) respectively. 
From our T+IC model we can set 90% upper limits on the 20-80 keV flux of non-thermal emission coming from A665 and A2146 of \(F_{\rm NT}<0.595\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(F_{\rm NT}<0.85\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) respectively. Based on the statistical tests and comparisons described in Section 3.2.4, we can safely rule out the presence of large amounts of IC scattering when comparing the 2T and T+IC models. The data is best fit by a two temperature component, purely thermal model. The variation in power law normalizations shown in the right panel of Figures 5 and 6, which translate to variations in IC fluxes, is due to the IC flux acting as an extra, lower temperature component in the T+IC model. The addition of this power law component does not fit the data as well as the 2T model as shown in C-stat comparisons the bottom panels of the previously mentioned figures. It should be noted that the C-stat difference is lower for A665 most likely due to the faintness of the cluster. For A665, our IC flux upper limit is an order of magnitude lower than the one found by Million and Allen (2009) (black triangle in Figure 7). Although they agree that a single temperature model does not entirely describe the cluster emission, they conclude that the temperature and metallicity variations across the cluster play an important role in the detection of non-thermal-like components in spectral fits. Sanders et al. (2005) used the same calibration files for Chandra as used by Million in the aforementioned paper, where they also found evidence for non-thermal like emission in Perseus. It was later pointed out by Molendi and Gastaldello (2009) that errors in the effective areas used in those calibration files inflated the apparent significance of the non-thermal component in their models. In addition, due to the susceptibility of _Chandra_ to galactic foreground, variations in the foreground column density of atomic gas, and an additional power law cannot be easily disentangled (see Rojas Bolivar et al. (2021) for more details). For A2146, there are no previous IC flux measurements that have been published. Including A665 and A2146, there are now a total of six published IC flux upper limits using _NuSTAR_ as of this work. Each of these six clusters that have been studied have various other upper limits or detections done with several other observatories, such as _Chandra_, _Suzaku_, _RXTE_, _Beppo-SAX_, _Swift_, and _XMM-Newton_. While no one has compiled IC flux measurements for _NuSTAR_ in the same way that Ota (2012) did for _Suzaku_, we wanted to design a plot similar to Ota et al. (2014) for _NuSTAR_ measurements compared to the other observatories. Figure 7 shows a plot of the non-thermal fluxes for the six clusters as a function of the _NuSTAR_ gas temperature. The overall trend that can be seen in the figure is that _NuSTAR_ has consistently pro \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & Temperature & Abundance & Norm\({}^{\textit{d}}\) & \(kT\) or \(\Gamma\) & Norm or IC Flux\({}^{\textit{b}}\) & & \\ Galaxy Cluster & Model & (keV) & (Solar) & (\(10^{-2}\) cm\({}^{-5}\)) & (keV or...) & (\(10^{-2}\) or \(10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\)) & C-stat\({}^{\textit{C}}\) & dof \\ \hline Abell 665 & 1T & \(9.15\pm 0.1,0.3\) & \(0.39\pm 0.03,0.03\) & \(1.6\pm 0.1,0.1\) &... &... 
& \(2945^{+131}_{-124}\) & 2681 \\ & 2T & \(9.34^{+2.9+1.9}_{-1.6,-2.1}\) & \(0.40^{+0.13+0.12}_{-0.19,-0.08}\) & \(1.1^{+0.2,+0.4}_{-0.4,-0.3}\) & \(4.47^{+4.3,+1.8}_{-2.6,-1.7}\) & \(0.6^{+0.6,+0.3}_{-0.3,-0.2}\) & \(2933^{+129}_{-132}\) & 2679 \\ & T+IC & \(9.12\pm 0.2,0.4\) & \(0.40\pm 0.04,0.02\) & \(1.1\pm 0.2,0.1\) & 2 (fixed) & \(0.468^{+0.127+0.11}_{-0.134}\) & \(2938^{+121}_{-125}\) & 2680 \\ Abell 2146 & 1T & \(8.29\pm 0.1,0.1\) & \(0.53\pm 0.06,0.04\) & \(5.6\pm 0.3,0.2\) &... &... & \(871^{+194}_{-1194}\) & 841 \\ & 2T & \(10.9^{+7.4+1.9}_{-1.8,-1.9}\) & \(0.58^{+0.08+0.14}_{-0.11,-0.07}\) & \(3.4^{+0.6+0.4}_{-0.4,-0.3}\) & \(4.26^{+1.4,+1.9}_{-1.3,-1.2}\) & \(2.8^{+0.7+0.5}_{-0.5,-0.6}\) & \(860^{+118}_{-122}\) & 839 \\ & T+IC & \(7.3\pm 0.3,0.2\) & \(0.63^{+0.04+0.08}_{-0.10}\) & \(4.7\pm 0.2,0.3\) & 2 (fixed) & \(0.600^{+0.251+0.132}_{-0.212,-0.114}\) & \(868^{+115}_{-111}\) & 840 \\ \hline \end{tabular} \({}^{a}\) Normalization of the APEC model, given by \((10^{-14}/[4\pi(1+z)^{2}D_{A}^{2}])\int n_{e}n_{H}dV\) where \(z\) is the redshift, \(D_{A}\) is the angular diameter distance, \(n_{e}\) is the electron density, \(n_{H}\) is the ionized hydrogen density, and \(V\) is the volume of the cluster. \({}^{b}\) 20–80 keV \({}^{c}\) Distribution of C-stat values from the 1000 realizations shown. \end{table} Table 1: This table contains the results of our fits using the 1T, 2T, and T+IC models for A665 and A2146. The redshift for all fits was allowed to be free. Nominally, the redshifts for A665 and A2146 are 0.189 and 0.232 respectively. When allowed to change, they became 0.201 and 0.259. See Appendix B for more information. Errors are presented as statistical followed by systematic. vided the most stringent constraints on the IC flux as well as only providing upper limits and no detections. It is important to note that the fluxes reported using _Chandra_ were all provided in the 0.6-7 keV range and the _NuSTAR_ and _Swift_ Bullet Cluster fluxes from 50-100 keV and 20-100 keV respectively. These have all been converted to 20-80 keV in the plot for the sake of comparison between the rest of the measurements. It is also important to note that estimating the flux depends on the various assumptions made by each author, such as how the thermal component was modelled, what power law index was chosen for the non-thermal component, different apertures, and different extraction regions for spectra, among other things. ### Cluster Magnetic Field As descibed in Section 1, with an upper limit on the IC flux, we can set a lower limit on the average magnetic field strength \(B\) using Equation 2. A total diffuse radio flux of 43.1 mJy inside our global extraction region was determined from VLA observations at 1.4 GHz for A665 (Vacca et al., 2010) and 1.5 mJy for A2146 (Hlavacek-Larrondo et al., 2018). With the T+IC model we obtain lower limits of 0.14 \(\mu\)G and 0.011 \(\mu\)G for A665 and A2146, respectively. Vacca et al. (2010) obtained an equipartition estimate for the magnetic field in A665 of \(B=1.3\)\(\mu\)G. In their work, they calculated the magnetic field strength assuming local equipartition of energy density between relativistic particles and the intracluster magnetic field. To set this condition, the magnetic field energy contributions should equal the relativistic particle contributions. There is another estimate of the magnetic field strength in A665 done by Feretti et al. (2004). 
In their estimate they also apply equipartition and calculate the magnetic Figure 6: Top: The distribution of best-fit parameter values for the 1T (red), 2T (green), and T+IC (blue) models using the 1000 realizations of the background (as described in Section A.2) for A665. Parameters shown are the temperatures of each model and the IC norm. Bottom: The distribution of the difference in C-stat values (\(\Delta C\)) between the T+IC and 2T models from fits using the 1000 realizations of the background. The 2T model is preferred in all iterations for describing the _NuSTAR_-observed spectra over the T+IC model, with the latter not having any realization favoring it over the 2T model. Therefore, we conclude that the data clearly disfavor the addition of a non-thermal component. Figure 7: This figure shows the 20–80 keV non-thermal flux for 6 clusters as measured by _NuSTAR_ (blue), _BeppoSAX_(red), _Chandra_ (black), _RXTE_ (green), _Suzaku_+_XMM-Newton_ (purple), _NuSTAR_+_XMM-Newton_ (magenta), and _Swift_ (orange). Results for Abell 2163 (star) are taken from Rojas Bolivar et al. (2021), Million & Allen (2009), Feretti et al. (2001), and Rephaeli et al. (2006). The Abell 665(downward triangle) flux is also from Million & Allen (2009). The Bullet Cluster (square) measurements are from Wik et al. (2014) and Ajello et al. (2010). Coma Cluster (diamond) results are taken from Gastaldello et al. (2015), Fusco-Femiano et al. (2004), Rephael & Gruber (2002), Wik et al. (2009), and Wik et al. (2011). Abell 523 (circle) results are from Cova et al. (2019). All data points are plotted with the _NuSTAR_ measured temperature, although some have been slightly shifted for clarity. field strength to be \(B=0.55\)\(\mu\)G. For A2146, there are no available magnetic field strength estimates, likely due to the fact that until recently (Hlavacek-Larrondo et al., 2018), the extended radio emission was difficult to measure (Russell et al., 2011). The magnetic field limit for A2146 is an order of magnitude lower than that of A665 and other clusters with IC upper limits, such as those presented in Figure 7, which is simply a result of its weak diffuse radio emission. As such, it is not an ideal target for an IC search, but since the radio halo was only recently discovered, this work provides the first IC-based limit on the magnetic field in A2146. The limit in A665 is more typical for IC searches although it still falls below estimates using equipartition arguments. The importance of _NuSTAR_'s ability to provide these magnetic field strength limits is that they run counter or rule out past detections of IC emission, which implied low magnetic field strengths. Such lower limits still allow for the possibility of high magnetic field strengths, which could be dynamically important, especially in cluster outskirts. Studies done on Abell 3667 suggest that that strong magnetic field strengths in the cluster outskirts could create a 20-30% pressure contribution to hydrostatic equilibrium (Sarazin et al., 2016; Finoguenov et al., 2010). This extra pressure contribution is currently omitted in many HSE models and may possibly resolve the \(\sim\)20% discrepancy between mass measurements done using weak lensing analysis and X-ray measurements (Biffi et al., 2016). Rotation measure (RM) synthesis estimates provide another method for measuring the magnetic field in clusters, giving volume-average strengths a few times higher (e.g., Bonafede et al., 2010). 
However, RM-estimated magnetic fields are weighted by electron density, so correlations between density and field strength could bias the value of \(B\). Lower limits from IC searches thus provide an important constraint on magnetic fields in clusters. ### Future Work In future work, we would like to limit our non-thermal emission search to local regions within the clusters, using _XMM-Newton_ data to include soft X-rays and jointly fit them with the _NuSTAR_ data. If the diffuse IC emission is more localized within the ICM, this approach could provide enough sensitivity to detect it. Additionally, the inclusion of the soft X-ray data may tighten constraints on the measured parameters within our three models presented earlier. Another topic for exploration with these clusters is the detection of possible non-thermal bremsstrahlung emission, corresponding to a population of suprathermal electrons (Bykov et al., 2019). Combining the _NuSTAR_ data with _LOFAR_ radio data could bridge the gap between the thermal and non-thermal electron distributions as well as provide insight into the non-thermal Sunyaev-Zeldovich effect (Blasi et al., 2000; Petrosian et al., 2008). This work made use of data from the _NuSTAR_ mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by NASA. RARB and DRW gratefully acknowledge support from NASA grants NNX17AH31G and 80NSSC18K1638. Basic research in radio astronomy at the Naval Research Laboratory is supported by 6.1 Base funding. This research has made use of the _NuSTAR_ Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). VV acknowledges support from INAF mainstream project "Galaxy Clusters Science with LOFAR 1.05.01.86.05". We thank the referee for useful comments.
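The band conversions used for Figure 7 (Chandra fluxes quoted in 0.6-7 keV, the _NuSTAR_ and _Swift_ Bullet Cluster fluxes in 50-100 keV and 20-100 keV, all recast to 20-80 keV) amount to simple power-law integrals. The sketch below, in Python, illustrates the arithmetic under the same \(\Gamma=2\) photon index fixed for the IC component; the function name and example flux value are ours, and this is not the authors' actual pipeline, which also depends on the modelling assumptions noted above.

```python
import numpy as np

def powerlaw_band_flux_ratio(e_lo_out, e_hi_out, e_lo_in, e_hi_in, gamma=2.0):
    """Factor converting an energy flux measured in [e_lo_in, e_hi_in] (keV) into the
    [e_lo_out, e_hi_out] band, for a power-law photon spectrum N(E) = A * E**(-gamma).
    The energy flux in a band is the integral of E * N(E) dE over that band."""
    def band_integral(e1, e2):
        if np.isclose(gamma, 2.0):
            return np.log(e2 / e1)  # integral of E**(1 - gamma) dE for gamma = 2
        return (e2 ** (2.0 - gamma) - e1 ** (2.0 - gamma)) / (2.0 - gamma)
    return band_integral(e_lo_out, e_hi_out) / band_integral(e_lo_in, e_hi_in)

# Example: recast a hypothetical 50-100 keV flux into the 20-80 keV band for Gamma = 2.
f_50_100 = 1.0e-12  # erg / s / cm^2, placeholder value only
f_20_80 = f_50_100 * powerlaw_band_flux_ratio(20, 80, 50, 100)
print(f"20-80 keV flux: {f_20_80:.3e} erg/s/cm^2")  # conversion factor ln(4)/ln(2) = 2
```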
2305.17231
A solvable model for graph state decoherence dynamics
We present an exactly solvable toy model for the continuous dissipative dynamics of permutation-invariant graph states of $N$ qubits. Such states are locally equivalent to an $N$-qubit Greenberger-Horne-Zeilinger (GHZ) state, a fundamental resource in many quantum information processing setups. We focus on the time evolution of the state governed by a Lindblad master equation with the three standard single-qubit jump operators, the Hamiltonian part being set to zero. Deriving analytic expressions for the expectation values of observables expanded in the Pauli basis at all times, we analyze the nontrivial intermediate-time dynamics. Using a numerical solver based on matrix product operators, we simulate the time evolution for systems with up to 64 qubits and verify a numerically exact agreement with the analytical results. We find that the evolution of the operator space entanglement entropy of a bipartition of the system manifests a plateau whose duration increases logarithmically with the number of qubits, whereas all Pauli-operator products have expectation values decaying at most in constant time.
Jérôme Houdayer, Haggai Landa, Grégoire Misguich
2023-05-26T19:43:57Z
http://arxiv.org/abs/2305.17231v2
**A solvable model for graph state decoherence dynamics** ## Abstract **We present an exactly solvable toy model for the continuous dissipative dynamics of permutation-invariant graph states of \(N\) qubits. Such states are locally equivalent to an \(N\)-qubit Greenberger-Horne-Zeilinger (GHZ) state, a fundamental resource in many quantum information processing setups. We focus on the time evolution of the state governed by a Lindblad master equation with the three standard single-qubit jump operators, the Hamiltonian part being set to zero. Deriving analytic expressions for the expectation values of observables expanded in the Pauli basis at all times, we analyze the nontrivial intermediate-time dynamics. Using a numerical solver based on matrix product operators we simulate the time evolution for systems with up to 64 qubits and verify a numerically exact agreement with the analytical results. We find that the evolution of the operator space entanglement entropy of a bipartition of the system manifests a plateau whose duration increases logarithmically with the number of qubits, whereas all Pauli-operator products have expectation values decaying at most in constant time.** ## 1 Introduction Graph states were introduced by Briegel and Raussendorf in 2001 [1] as special entangled states of \(N\) qubits. These states with multipartite entanglement play an important role in quantum information theory because they can be employed as a resource in a measurement-based quantum computation framework [2, 3], they can be used in error correction codes [4] and for quantum communications [5]. In particular, permutation-invariant graph states [6], which are locally equivalent to an \(N\)-qubit Greenberger-Horne-Zeilinger (GHZ) state, are the subject of extensive research [7, 8, 9, 10] and their creation and characterization serve as one of a few standard benchmarks of quantum computation hardware [11, 12, 13]. The use of graph states for information processing in current quantum devices will inevitably have to face uncontrolled decoherence processes and some aspects of graph state entanglement under the presence of decoherence have already been investigated [14, 15, 16, 17]. Most of these previous works focused on discrete evolutions of the density matrix via completely positive maps (noisy channels). In this work, we introduce and discuss a model of a graph state realized with qubits (or spin-one-half particles), which decoheres continuously in time as described by a Lindblad master equation for the density operator [18, 19]. We account for the three most prevalent local jump operators (dissipators); Two jump terms describe the incoherent transitions from \(|0\rangle\) to \(|1\rangle\) (and vice versa), and the third one is the so-called dephasing term. The initial state is a pure (graph) state, and it evolves into a mixed state under the action of the dissipation. Although it is a many-body problem, the structure of the model is simple enough that the expectation values of any observable can here be computed exactly by solving the equations of motion for the expectation values of product of Pauli matrices. We complement our analytic treatment with the use of a numerical Lindblad solver [20, 21], which is internally based on the C++ ITensor library [22] (see also [23] for a review on available numerical methods for this type of problem). In the solver, the state of the system - a many-body density matrix \(\rho\) - is stored in the form a matrix-product operator (MPO). 
Since \(\rho\) is in general a matrix of size \(2^{N}\times 2^{N}\), a brute-force numerical simulation of the Lindblad dynamics generally becomes very demanding beyond a dozen of qubits. Taking advantage of the fact that the states produced along the time evolution are only mildly correlated in the present model, the MPO approach allows us to reach a very high accuracy with modest computing resources (i.e. low MPO bond dimension) even with as many as \(N=64\) qubits. Other numerical approaches would also be efficient in the context of the current setup [6, 24]. The presented model can be viewed as a toy model illustrating the basic mechanisms at play and gaining an understanding of the dominant dynamical behavior in similar setups. It may also be used as a starting point for more realistic studies with different graph states and some Hamiltonian terms competing with the dissipators. With our numerical approach we are able to calculate global quantities that are not immediately accessible analytically, and observe an interesting scaling with size of bipartite correlations in the system. ## 2 Notations and definition of the model We consider a system composed of \(N\) qubits (with basis states \(|0\rangle=|\uparrow\rangle\) and \(|1\rangle=|\downarrow\rangle\)), indexed with \(i=1\ldots N\). We denote the usual Pauli operators by \(\sigma^{x}\), \(\sigma^{y}\) and \(\sigma^{z}\), or alternatively by \(X\), \(Y\) and \(Z\). Additionally, we define \(\sigma^{\pm}=\frac{1}{2}(\sigma^{x}\pm i\sigma^{y})\). For completeness, we start by recalling the definition of a graph state. A graph state is an entangled state that can be produced by the symmetrical 2-qubit gate "controlled-\(Z\)" (CZ), which is defined by its matrix \[CZ=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{pmatrix} \tag{1}\] in the basis \(\{|00\rangle,|10\rangle,|01\rangle,|11\rangle\}\). Given an undirected graph \(G(V,E)\) where \(V=1\ldots N\) represents the qubits and \(E\subset V\times V\) is the set of edges, the corresponding graph state \(|g\rangle\) is defined by \[|g\rangle=\prod_{(i,j)\in E}CZ(i,j)|++\cdots+\rangle, \tag{2}\] where \(|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\). It is well known [1], that the graph state \(|g\rangle\) is characterized by its stabilizers \[S_{i}=\sigma^{x}_{i}\prod_{j|(i,j)\in E}\sigma^{z}_{j}, \tag{3}\] through \[S_{i}|g\rangle=|g\rangle. \tag{4}\] In the present study, we focus on the case where the system is _invariant under any permutation of the \(N\) qubits_. We start at \(t=0\) from the only non-trivial fully symmetrical graph state, that is the graph state associated to the complete graph. A complete graph is a graph where all possible edges are present, so that each vertex is linked to the \(N-1\) other vertices. Our initial state (at \(t=0\)) is thus given by \[|g\rangle=\prod_{i<j}CZ(i,j)|++\cdots+\rangle, \tag{5}\] As a side remark we note that, thanks to the transformation rules of graph states under the action of local Clifford (LC) gates [15], the complete graph is LC-equivalent to the star graph.1 In turn, the star graph state can be transformed into the \(N\)-qubit Greenberger-Horne-Zeilinger (GHZ) state [25] by application of Hadamard gates to all qubits except the center of the star. The complete graph is thus LC-equivalent to the GHZ state. Footnote 1: The star graph has a central vertex connected to all the other vertices. 
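As a concrete illustration of Eqs. 1-5, a brute-force construction of the complete-graph state for a small register can be written in a few lines of numpy, together with a check of the stabilizer condition of Eqs. 3-4. This is only an illustrative sketch with helper names of our choosing, not the MPO machinery used later for large \(N\).

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(i, j, n):
    """Controlled-Z between qubits i and j (Eq. 1), embedded in an n-qubit register."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    term0 = kron_all([P0 if k == i else I2 for k in range(n)])
    term1 = kron_all([P1 if k == i else (Z if k == j else I2) for k in range(n)])
    return term0 + term1

def complete_graph_state(n):
    """|g> = prod_{i<j} CZ(i,j) |++...+>, cf. Eq. 5."""
    psi = kron_all([np.array([1.0, 1.0]) / np.sqrt(2)] * n)
    for i, j in combinations(range(n), 2):
        psi = cz(i, j, n) @ psi
    return psi

n = 4
g = complete_graph_state(n)
# For the complete graph, the stabilizers of Eq. 3 are S_i = X_i prod_{j != i} Z_j.
for i in range(n):
    S = kron_all([X if k == i else Z for k in range(n)])
    assert np.allclose(S @ g, g)  # Eq. 4
print("S_i |g> = |g> holds for all i, N =", n)
```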
We consider a time evolution generated by a Lindblad equation (see for example [26]) where the Hamiltonian part is set to zero. The state of the system is described by its density matrix \(\rho\) whose time evolution is given by \[\frac{\partial}{\partial t}\rho=\mathcal{D}[\rho], \tag{6}\] where \(\mathcal{D}\) is the dissipator, a linear superoperator acting on \(\rho\). We consider three possible terms in the dissipator \(\mathcal{D}=\mathcal{D}_{0}+\mathcal{D}_{1}+\mathcal{D}_{2}\). They are given by \[\mathcal{D}_{0}[\rho]=g_{0}\sum_{i}\left(\sigma_{i}^{+}\rho \sigma_{i}^{-}-\frac{1}{2}\{\sigma_{i}^{-}\sigma_{i}^{+},\rho\}\right), \tag{7}\] \[\mathcal{D}_{1}[\rho]=g_{1}\sum_{i}\left(\sigma_{i}^{-}\rho \sigma_{i}^{+}-\frac{1}{2}\left\{\sigma_{i}^{+}\sigma_{i}^{-},\rho\right\} \right).\] (8) \[\mathcal{D}_{2}[\rho]=g_{2}\sum_{i}\left(\sigma_{i}^{z}\rho \sigma_{i}^{z}-\rho\right). \tag{9}\] where \(\{,\}\) is the anticommutator.2\(\mathcal{D}_{0}\) (resp. \(\mathcal{D}_{1}\)) then corresponds to an incoherent transition toward the state \(|0\rangle\) (resp. \(|1\rangle\)). \(\mathcal{D}_{2}\) corresponds to a dephasing in the \(xy\)-plane. Footnote 2: \(\{A,B\}=AB+BA\). In the next section, we show how to compute exactly the evolution of the expectation value of any product of Pauli matrices in this model. From this result one can then obtain the expectation value of any observable as a function of time. In Sec. 4, we will then compare these analytical results with numerical simulations based on a matrix product operator (MPO) representation of the density matrix. Sec. 5 provides some concluding remarks. ## 3 Closed-form observable dynamics ### Observable dynamics To compute the evolution of the mean value of a given (time-independent) observable \(O\), we start from the fact that \(\langle O\rangle=\operatorname{Tr}(\rho O)\) and we use Eq. 6 to get \[\begin{split}\frac{\partial}{\partial t}\langle O\rangle& =\frac{\partial}{\partial t}\operatorname{Tr}(\rho O)=\operatorname{Tr} \left(\frac{\partial\rho}{\partial t}O\right)\\ &=\operatorname{Tr}(\mathcal{D}[\rho]O)\\ &=\operatorname{Tr}(\mathcal{D}_{0}[\rho]O)+\operatorname{Tr}( \mathcal{D}_{1}[\rho]O)+\operatorname{Tr}(\mathcal{D}_{2}[\rho]O).\end{split} \tag{10}\] First let us consider \(\mathcal{D}_{0}\) \[\begin{split}\text{Tr}(\mathcal{D}_{0}[\rho]O)&=g_{0} \sum_{i}\text{Tr}\bigg{[}(\sigma_{i}^{+}\rho\sigma_{i}^{-}-\frac{1}{2}\{\sigma_{ i}^{-}\sigma_{i}^{+},\rho\})O\bigg{]}\\ &=g_{0}\sum_{i}\text{Tr}\bigg{[}\rho\left(\sigma_{i}^{-}O\sigma_ {i}^{+}-\frac{1}{2}\{\sigma_{i}^{-}\sigma_{i}^{+},O\}\right)\bigg{]}\\ &=g_{0}\sum_{i}\langle\Lambda_{0}^{i}[O]\rangle,\end{split} \tag{11}\] where the superoperator \(\Lambda_{0}^{i}\) is given by \[\Lambda_{0}^{i}[O]=\sigma_{i}^{-}O\sigma_{i}^{+}-\frac{1}{2}\{\sigma_{i}^{-} \sigma_{i}^{+},O\}. \tag{12}\] Similarly, we have \[\text{Tr}(\mathcal{D}_{1}[\rho]O) =g_{1}\sum_{i}\langle\Lambda_{1}^{i}[O]\rangle, \tag{13}\] \[\text{Tr}(\mathcal{D}_{2}[\rho]O) =g_{2}\sum_{i}\langle\Lambda_{2}^{i}[O]\rangle, \tag{14}\] with \[\Lambda_{1}^{i}[O] =\sigma_{i}^{+}O\sigma_{i}^{-}-\frac{1}{2}\{\sigma_{i}^{+}\sigma _{i}^{-},O\}, \tag{15}\] \[\Lambda_{2}^{i}[O] =\sigma_{i}^{z}O\sigma_{i}^{z}-O. \tag{16}\] It is clear that if \(O\) does not operate on qubit \(i\) then \(\Lambda_{*}^{i}[O]=0\). Moreover, for an operator \(O_{i}\) acting on qubit \(i\) only, \(\Lambda_{*}^{i}[O_{i}]\) commutes with operators acting on the other qubits. 
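The passage from Eq. 10 to Eqs. 11-16, i.e. moving the dissipator onto the observable, can be checked numerically on a small instance. The following sketch (plain numpy, helper names ours) verifies \(\operatorname{Tr}(\mathcal{D}_{0}[\rho]O)=g_{0}\sum_{i}\langle\Lambda_{0}^{i}[O]\rangle\) for a random two-qubit state and observable; the same check applies to \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) after swapping the jump operator.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sp = (sx + 1j * sy) / 2  # sigma^+, the jump operator of D_0

def embed(op, i, n=2):
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, op if k == i else I2)
    return out

def dissipator(rho, L):
    """Single jump term L rho L^dag - (1/2){L^dag L, rho}, as in Eqs. 7-9."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def adjoint(O, L):
    """Adjoint superoperator on observables, L^dag O L - (1/2){L^dag L, O} (cf. Eqs. 12, 15, 16)."""
    LdL = L.conj().T @ L
    return L.conj().T @ O @ L - 0.5 * (LdL @ O + O @ LdL)

# Random two-qubit density matrix and random Hermitian observable.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = B + B.conj().T

g0 = 0.7  # arbitrary rate for D_0
lhs = sum(np.trace(dissipator(rho, np.sqrt(g0) * embed(sp, i)) @ O) for i in range(2))
rhs = g0 * sum(np.trace(rho @ adjoint(O, embed(sp, i))) for i in range(2))
assert np.allclose(lhs, rhs)
print("Tr(D_0[rho] O) = g_0 * sum_i <Lambda_0^i[O]> verified numerically")
```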
The nontrivial action of the \(\Lambda_{*}^{i}\) can then be summarized by the following relations: \[\Lambda_{0}[\sigma^{x}] =-\frac{1}{2}\sigma^{x} \Lambda_{1}[\sigma^{x}] =-\frac{1}{2}\sigma^{x} \Lambda_{2}[\sigma^{x}] =-2\sigma^{x}\] \[\Lambda_{0}[\sigma^{y}] =-\frac{1}{2}\sigma^{y} \Lambda_{1}[\sigma^{y}] =-\frac{1}{2}\sigma^{y} \Lambda_{2}[\sigma^{y}] =-2\sigma^{y}\] \[\Lambda_{0}[\sigma^{z}] =1-\sigma^{z} \Lambda_{1}[\sigma^{z}] =-1-\sigma^{z} \Lambda_{2}[\sigma^{z}] =0,\] where the qubit index of the Pauli operators are identical in the l.h.s and r.h.s and have been omitted for brevity. ### Observables at \(t=0\) To lighten the notations, we will write \(X_{i}\) instead of \(\sigma_{i}^{x}\) and likewise for \(Y\) and \(Z\). As our system is invariant under any permutation of the qubits, specific indices are irrelevant. In the following, we will note \(\langle X\rangle=\langle\sigma_{i}^{x}\rangle\) and likewise for \(\langle Y\rangle\) and \(\langle Z\rangle\). More generally, we note \(\langle X^{n}Y^{m}Z^{l}\rangle=\langle\sigma_{1}^{x}\ldots\sigma_{n}^{x} \sigma_{n+1}^{y}\ldots\sigma_{n+m}^{y}\sigma_{n+m+1}^{z}\ldots\sigma_{n+m+l}^{ z}\rangle\) which is independent of the actual order of the operators or the specific indices as long as they are all different. When indices are necessary, we will add them, for example \(\langle X_{1}Z_{1}Y^{2}Z\rangle\) is the same as \(\langle X_{1}Z_{1}Y_{2}Y_{3}Z_{4}\rangle\). Likewise, \(\langle(X_{i}Z_{i})^{2}(Y_{i}X_{j})^{2}Z\rangle\) is the same as \(\langle X_{1}Z_{1}X_{2}Z_{2}Y_{3}X_{3}Y_{4}X_{4}Z_{5}\rangle\). To compute the expectation value of a product of Pauli operators at \(t=0\), that is on the complete graph state \(|g\rangle\), we start with two remarks. First the stabilizer \(S=XZ^{N-1}\) leaves \(|g\rangle\) unchanged (see Eqs. 3 and 4) so that \[\langle XZ^{N-1}\rangle=\langle g|XZ^{N-1}|g\rangle=\langle g|g\rangle=1. \tag{17}\] Second, since \(Z\) commutes with \(CZ\) and since \(CZ^{2}=1\), we have for \(n>0\) \[\begin{split}\langle Z^{n}\rangle&=\langle g|Z^{n}|g \rangle\\ &=\langle+\cdots+|Z^{n}|+\cdots+\rangle\\ &=\langle+\cdots+|-\cdots-+\cdots+\rangle\\ &=0.\end{split} \tag{18}\] We start by computing observables of the form \(\langle X^{n}Z^{l}\rangle\). To do this, we introduce the stabilizer at one of the indices of the \(X\). So for \(n>0\) \[\begin{split}\langle X^{n}Z^{l}\rangle&=\langle X_{ 1}X^{n-1}Z^{l}X_{1}Z^{N-1}\rangle\\ &=\langle(X_{i}Z_{i})^{n-1}Z^{N-l-n}\rangle\\ &=(-i)^{n-1}\langle Y^{n-1}Z^{N-l-n}\rangle\\ &=(-1)^{\frac{n-1}{2}}\langle Y^{n-1}Z^{N-l-n}\rangle.\end{split} \tag{19}\] To be real, this of course must be \(0\) when \(n\) is even. We can do the same for \(\langle Y^{m}Z^{l}\rangle\), which gives for \(m>0\) \[\begin{split}\langle Y^{m}Z^{l}\rangle&=\langle Y_{ 1}Y^{m-1}Z^{l}X_{1}Z^{N-1}\rangle\\ &=\langle Y_{1}X_{1}(Y_{i}Z_{i})^{m-1}Z^{N-l-m}\rangle\\ &=i^{m-2}\langle X^{m-1}Z^{N-l-m+1}\rangle\\ &=(-1)^{\frac{n}{2}-1}\langle X^{m-1}Z^{N-l-m+1}\rangle.\end{split} \tag{20}\] And again this must be \(0\) if \(m\) is odd. Substituting Eq. 19 in Eq. 20, we can conclude that for even \(m\) \[\langle Y^{m}Z^{l}\rangle=\langle Y^{m-2}Z^{l}\rangle=\langle Z^{l}\rangle, \tag{21}\] which is \(0\) if \(l>0\) and \(1\) otherwise. We can also finish the computation for \(X\) for odd \(n\) \[\begin{split}\langle X^{n}Z^{l}\rangle&=(-1)^{\frac {n-1}{2}}\langle Y^{n-1}Z^{N-l-n}\rangle\\ &=(-1)^{\frac{n-1}{2}}\langle Z^{N-l-n}\rangle,\end{split} \tag{22}\] which is \(1\) if \(n+l=N\), and zero otherwise. 
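The closed-form values of Eqs. 17-22 can be confirmed by brute force on a small complete-graph state. The sketch below (illustrative only, helper names ours) enumerates every Pauli string built from \(\{1,X,Z\}\) or \(\{1,Y,Z\}\) on \(N=4\) qubits and compares it with the classification derived above.

```python
import numpy as np
from functools import reduce
from itertools import combinations, product

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
     "Z": np.diag([1.0, -1.0]).astype(complex)}

def kron_all(ops):
    return reduce(np.kron, ops)

def complete_graph_state(n):
    psi = kron_all([np.array([1.0, 1.0]) / np.sqrt(2)] * n).astype(complex)
    for i, j in combinations(range(n), 2):  # apply CZ(i,j) as a diagonal phase
        for b in range(2 ** n):
            if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
                psi[b] *= -1
    return psi

n = 4
g = complete_graph_state(n)
checked = 0
for labels in product("IXYZ", repeat=n):
    nx, ny, nz = (labels.count(c) for c in "XYZ")
    if nx > 0 and ny > 0:
        continue  # mixed X/Y products are treated in the next step of the derivation
    val = np.vdot(g, kron_all([P[c] for c in labels]) @ g).real
    if ny == 0 and nx % 2 == 1 and nx + nz == n:   # Eq. 22: nonzero only when n + l = N
        pred = (-1.0) ** ((nx - 1) // 2)
    elif nx == 0 and ny % 2 == 0 and nz == 0:      # Eqs. 18 and 21: <Y^even> = 1, <Z^l> = 0
        pred = 1.0
    else:
        pred = 0.0
    assert abs(val - pred) < 1e-9, (labels, val, pred)
    checked += 1
print(f"{checked} Pauli strings checked against the closed-form t=0 values")
```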
The last product we have not yet computed is the general one \(\langle X^{n}Y^{m}Z^{l}\rangle\) with \(n>0\) and \(m>0\). \[\begin{split}\langle X^{n}Y^{m}Z^{l}\rangle&=\langle X _{1}X^{n-1}Y^{m}Z^{l}X_{1}Z^{N-1}\rangle\\ &=\langle(X_{i}Z_{i})^{n-1}(Y_{j}Z_{j})^{m}Z^{N-n-m-l}\rangle\\ &=i^{m-n+1}\langle X^{m}Y^{n-1}Z^{N-n-m-l}\rangle\\ &=(-1)^{\frac{n-n+1}{2}}\langle X^{m}Y^{n-1}Z^{N-n-m-l}\rangle. \end{split} \tag{23}\] This can be nonzero only if \(n+m\) is even. But if \(m=1\) there are no \(Y\) left and this is zero according to Eq. 22. And if \(m>1\), the right-hand side can be nonzero only if \(n+m-1\) is even (thus \(n+m\) is odd) which we already excluded. Thus, \(\langle X^{n}Y^{m}Z^{l}\rangle=0\) if \(n>0\) and \(m>0\). To summarize, only the following products of Pauli operators have nonzero mean values in the complete graph state: \[\langle X^{2n+1}Z^{N-2n-1}\rangle=(-1)^{n}\qquad\text{ and }\qquad\langle Y^{2n} \rangle=1. \tag{24}\] ### Solution of the equations of motion We start with an example to show how the equation of motion leads to some differential equations. Here we consider \(\langle XZ\rangle\) in the case where only \(g_{0}\) is not zero. \[\begin{split}\frac{\partial}{\partial t}\langle XZ\rangle& =\frac{\partial}{\partial t}\langle X_{1}Z_{2}\rangle\\ &=\operatorname{Tr}\left[\mathcal{D}_{0}[\rho]\langle X_{1}Z_{2 }\rangle\right]\\ &=g_{0}\sum_{i}\langle\Lambda_{0}^{i}[X_{1}Z_{2}]\rangle\\ &=g_{0}\left(\langle\Lambda_{0}^{1}[X_{1}]Z_{2}\rangle+\langle X _{1}\Lambda_{0}^{2}[Z_{2}]\rangle\right)\\ &=g_{0}\left(-\frac{1}{2}\langle X_{1}Z_{2}\rangle+\langle X_{1} (1-Z_{2})\rangle\right)\\ &=g_{0}\left(-\frac{3}{2}\langle XZ\rangle+\langle X\rangle \right).\end{split} \tag{25}\] Now the general formula for \(\frac{\partial}{\partial t}\langle X^{n}Y^{m}Z^{l}\rangle\) and all dissipators: each \(X\) or \(Y\) gives a term \((-g_{0}/2-g_{1}/2-2g_{2})\langle X^{n}Y^{m}Z^{l}\rangle\) and each \(Z\) gives the two terms \[(-g_{0}-g_{1})\langle X^{n}Y^{m}Z^{l}\rangle,\qquad(g_{0}-g_{1})\langle X^{n}Y ^{m}Z^{l-1}\rangle. \tag{26}\] Globally, we obtain for \(l>0\) \[\frac{\partial}{\partial t}\langle X^{n}Y^{m}Z^{l}\rangle =-(\alpha(n+m)+\beta l)\langle X^{n}Y^{m}Z^{l}\rangle+\gamma l \langle X^{n}Y^{m}Z^{l-1}\rangle, \tag{27}\] \[\frac{\partial}{\partial t}\langle X^{n}Y^{m}\rangle =-\alpha(n+m)\langle X^{n}Y^{m}\rangle, \tag{28}\] where \[\alpha=\frac{g_{0}+g_{1}}{2}+2g_{2},\qquad\qquad\beta=g_{0}+g_{1},\qquad \qquad\gamma=g_{0}-g_{1}. \tag{29}\] Here, \(\alpha\) is the rate of dephasing (decoherence) associated to \(X\) and \(Y\) (with its inverse equal to the characteristic \(T_{2}\) timescale), \(\beta\) is the decay parameter associated to \(Z\) - the inverse of the relaxation time \(T_{1}\), \(\gamma\) is the global drive towards the steady state, and \(\gamma/\beta\) determines the thermal steady state population (value of \(\langle Z\rangle\)) reached in the limit of large time. In the case where \(\gamma=0\) (that is \(g_{0}=g_{1}\)), all observables have a simple exponential decay. These equations can be solved by recurrence starting at \(l=0\) using the initial conditions of the previous section. Indeed, at \(l=0\) (that is no \(Z\)), we get \[\langle Y^{2n}\rangle=\mathrm{e}^{-2ant}, \tag{30}\] and all the others products without \(Z\) give zero because they start at \(0\) and stay there. Now we can increase \(l\) and get \[\langle Y^{2n}Z^{l}\rangle=\left(\frac{\gamma}{\beta}\left(1-\mathrm{e}^{- \beta t}\right)\right)^{l}\mathrm{e}^{-2ant}. 
\tag{31}\] Note that if \(\gamma=0\) or \(\gamma=\beta=0\) then \(\langle Y^{2n}Z^{l}\rangle=0\) for \(l>0\). Finally, for \(\langle X^{2n+1}Z^{N-2n-1}\rangle\), the equations are directly solved and we obtain \[\langle X^{2n+1}Z^{N-2n-1}\rangle=(-1)^{n}\mathrm{e}^{-((2n+1)(\alpha-\beta)+ N\beta)t}, \tag{32}\] with all the others being identically zero. It is interesting to note that the stabilizers, that characterize the initial graph state decay very rapidly, i.e. with a timescale inversely proportional to the size of the system. This reflects some relative fragility of the graph state correlations in the present dissipative context, and may be related to the extensive number of neighbors in the initial complete graph. ### Reduced density matrices The 2-qubit density matrix can be obtained from the two-point correlations computed previously. Writing \(z_{\pm}=1\pm\langle Z\rangle=1\pm\frac{\gamma}{B}\left(1-e^{-\beta t}\right)\) (see Eq. 31 with \(n=0\) and \(l=1\)) and \(y^{2}=\langle YY\rangle=e^{-2at}\) (see Eq. 30 with \(n=1\)), this matrix reads (for \(N>2\) only) \[\rho_{2}=\frac{1}{4}\begin{pmatrix}z_{+}^{2}&0&0&-y^{2}\\ 0&z_{+}z_{-}&y^{2}&0\\ 0&y^{2}&z_{+}z_{-}&0\\ -y^{2}&0&0&z_{-}^{2}\end{pmatrix}. \tag{33}\] Likewise for 3 qubits and \(N>3\), we have \[\rho_{3}=\frac{1}{8}\begin{pmatrix}z_{+}^{3}&0&0&-y^{2}z_{+}&0&-y^{2}z_{+}&-y ^{2}z_{+}&0\\ 0&z_{-}z_{+}^{2}&y^{2}z_{+}&0&y^{2}z_{+}&0&0&-y^{2}z_{-}\\ 0&y^{2}z_{+}&z_{-}z_{+}^{2}&0&y^{2}z_{+}&0&0&-y^{2}z_{-}\\ -y^{2}z_{+}&0&0&z_{-}^{2}z_{+}&0&y^{2}z_{-}&y^{2}z_{-}&0\\ 0&y^{2}z_{+}&y^{2}z_{+}&0&z_{-}z_{+}^{2}&0&0&-y^{2}z_{-}\\ -y^{2}z_{+}&0&0&y^{2}z_{-}&0&z_{-}^{2}z_{+}&y^{2}z_{-}&0\\ -y^{2}z_{+}&0&0&y^{2}z_{-}&0&y^{2}z_{-}&z_{-}^{2}z_{+}&0\\ 0&-y^{2}z_{-}&-y^{2}z_{-}&0&-y^{2}z_{-}&0&0&z_{-}^{3}\end{pmatrix}. \tag{34}\] More generally, reduced density matrices for larger subsystem can be obtained by noting that each matrix element is the expectation value of a product of \(N\) operators which are of the type \((Z_{i}+1)/2\) (if the matrix element connects two states where \(Z_{i}=1\)), \((1-Z_{i})/2\) (matrix element between two states where \(Z_{i}=-1\)), or \(\sigma_{i}^{\pm}\) (matrix element between two states with opposite \(Z_{i}\)). It can be checked that \(\rho_{2}\) is separable at all times. We also checked that \(\rho_{3}\) and \(\rho_{4}\) (not shown) show no negativity, whatever the time \(t\). Although the systems has some global multipartite entanglement (at least for sufficiently short times), the separability of reduced density matrices is a known property of graph states and GHZ states [15]. It is presumably the case at all times in the present model, unless one considers the system _globally_ (all qubits). ## 4 Computational results ### Dissipative dynamics of observables In this section we present the dynamics of Pauli observables calculated from the analytic expressions of Eqs. 30-32, together with numerical simulation results from the MPO solver. In App. A we give more details on the numerical simulations of the density matrix dynamics. To explore the parameter space, we studied five representative cases varying the values of \(g_{0}\), \(g_{1}\) and \(g_{2}\). The values used are shown in Tab. 1, together with some characterization of the environments that would produce such parameter values. First, we look at Eq. 30 in the left panel of Fig. 1. The numerical results are in perfect agreement with the theory. 
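At small sizes this agreement can also be reproduced without any MPO machinery: the master equation, Eq. 6, can be integrated by brute force and compared against Eqs. 30 and 31. The sketch below does this in plain numpy for case 1 of Tab. 1 (\(g_{0}=1\), \(g_{1}=g_{2}=0\)); it is an illustration of the check, not the solver of Refs. [20, 21].

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
sp, sm = (sx + 1j * sy) / 2, (sx - 1j * sy) / 2

def kron_all(ops):
    return reduce(np.kron, ops)

def embed(op, i, n):
    return kron_all([op if k == i else I2 for k in range(n)])

def complete_graph_rho(n):
    psi = kron_all([np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)] * n)
    for i in range(n):
        for j in range(i + 1, n):
            for b in range(2 ** n):
                if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
                    psi[b] *= -1
    return np.outer(psi, psi.conj())

def drho(rho, n, g0, g1, g2):
    """Right-hand side of Eq. 6 with the dissipators of Eqs. 7-9."""
    out = np.zeros_like(rho)
    for i in range(n):
        for g, L in ((g0, embed(sp, i, n)), (g1, embed(sm, i, n)), (g2, embed(sz, i, n))):
            LdL = L.conj().T @ L
            out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# Case 1 of Tab. 1: g0 = 1, g1 = g2 = 0, hence alpha = 0.5, beta = gamma = 1.
n, g0, g1, g2, dt, steps = 3, 1.0, 0.0, 0.0, 0.001, 1000
alpha, beta, gamma = (g0 + g1) / 2 + 2 * g2, g0 + g1, g0 - g1
rho = complete_graph_rho(n)
for _ in range(steps):  # plain RK4 time stepping
    k1 = drho(rho, n, g0, g1, g2)
    k2 = drho(rho + 0.5 * dt * k1, n, g0, g1, g2)
    k3 = drho(rho + 0.5 * dt * k2, n, g0, g1, g2)
    k4 = drho(rho + dt * k3, n, g0, g1, g2)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
t = steps * dt
YY = embed(sy, 0, n) @ embed(sy, 1, n)
assert np.isclose(np.trace(rho @ YY).real, np.exp(-2 * alpha * t), atol=1e-6)       # Eq. 30
assert np.isclose(np.trace(rho @ embed(sz, 0, n)).real,
                  gamma / beta * (1 - np.exp(-beta * t)), atol=1e-6)                # Eq. 31
print("brute-force Lindblad integration matches the closed-form <YY>(t) and <Z>(t)")
```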
Since the expectation value of a product of an odd number of \(Y\) vanishes, \(\langle YY\rangle\) is a _connected_ correlation, and it shows a decay with a characteristic timescale \(\thicksim 1/\alpha\). Now we turn to Eq. 31, first in a simple case for the single qubit observable \(\langle Z\rangle\). The result is displayed in the right panel of Fig. 1 (this corresponds to \(n=0\) and \(l=1\).). The cases 2 and 5 are not shown since they have \(\gamma=0\) and thus \(\langle Z\rangle=0\). \(\langle Z\rangle\) displays a relaxation toward the steady state value \(\langle Z\rangle_{t\to\infty}=\frac{\gamma}{B}=\frac{g_{0}-g_{1}}{g_{0}+g_{1}}\). When \(g_{1}=0\) this is simply a relaxation toward the \(|0\rangle\) state. We also note that effect of the correlations in the system are invisible in this observable, in the sense that the exact same behavior would be observed whatever the initial state provided that \(\langle Z\rangle=0\) on all qubits at time \(0\). In the more complex case where \(n\neq 0\), we cannot scale all the cases on one curve, so we chose to show the dependence in the number of qubits for one case. In the left panel of Fig. 2, we show the time evolution of \(\langle YYZ\rangle\) (that is \(n=1\) and \(l=1\)) for case 1 and different number of qubits. Again the simulations are in perfect agreement with the theory. This observable has a non-monotonous time evolution for a simple reason: from Eq. 31 we see this 3-point observable factorizes into \(\langle YYZ\rangle=\langle YY\rangle\langle Z\rangle\), that is a product of a decreasing function by an increasing function. Cases 3 and 4 have similar behaviors, whereas cases 2 and 5 have \(\gamma=0\) and thus \(\langle YYZ\rangle=0\). Finally, we also checked the expectation value of the stabilizer of the complete graph state, that is Eq. 32 with \(n=1\). The results are displayed in the right panel of Fig. 2, they are again in perfect agreement to the theory. The initial value is 1, as it should since the initial state is an eigenstate of the stabilizer for the eigenvalue 1. We then observe an exponential decay with a characteristic timescale given by \((\alpha+(N-1)\beta)^{-1}\) and which decreases with the number of qubits. This size-dependence of the decay rate can be viewed as a consequence of the fact that this specific observable involves all the qubits and reflects some global correlations in the system. \begin{table} \begin{tabular}{|l|l|r r r|r r r|} \hline & Environment & \(g_{0}\) & \(g_{1}\) & \(g_{2}\) & \(\alpha\) & \(\beta\) & \(\gamma\) \\ \hline case 1 & Spontaneous emission only & 1 & 0 & 0 & 0.5 & 1 & 1 \\ case 2 & Pure dephasing & 0 & 0 & 1 & 2 & 0 & 0 \\ case 3 & Low temperature, low dephasing & 0.9 & 0.1 & 0.1 & 0.7 & 1 & 0.8 \\ case 4 & Generic dissipative rates & 0.6 & 0.4 & 0.25 & 1 & 1 & 0.2 \\ case 5 & Infinite temperature with dephasing & 1 & 1 & 1 & 3 & 2 & 0 \\ \hline \end{tabular} \end{table} Table 1: Different sets of physical parameters used in the simulations. Figure 1: Left: Time evolution of \(\langle YY\rangle\) in the different parameter cases with 64 qubits. The rescaled time \(\alpha\cdot t\) in the horizontal axis allows the collapse of the curves associated to different sets of parameters. The line corresponds to Eq. 30 in the case \(n=1\). Right: same for \(\langle Z\rangle\). For this quantity the relevant rescaling of the time is \(\beta\cdot t\). The line corresponds to Eq. 31 in the case \(n=0\) and \(l=1\). 
Cases 2 and 5 are not shown as they have \(\gamma=0\) and thus \(\langle Z\rangle=0\). ### Operator space entanglement entropy The method to compute the von Neumann entanglement entropy associated to a given bipartition of a graph state is explained in Ref. [4]. In the case of the complete graph state the result is \(S_{\text{vN}}=\ln 2\) whatever the bipartition (for two non-empty subsystems). This result is also easy to obtain using the fact that the complete graph state is LC-equivalent to the GHZ state. For a mixed state \(\rho\), it is interesting to consider the operator space entanglement entropy (OSEE) [27], a quantity that naturally arises in simulations of the density matrix dynamics represented using matrix product operators. It can be defined by considering the vectorization \(|\rho\rangle\rangle\) of \(\rho\), which is a pure state in an enlarged Hilbert of dimension 4 per site (spanned by the 3 Pauli matrices plus the identity matrix). The OSEE associated to a given bipartition into two subsystems \(A\) and \(B\) is by definition the von Neumann entanglement \(\text{OSEE}^{\text{A}|B}=S^{(A)}_{\text{vN},|\rho\rangle})=S^{(B)}_{\text{vN},|\rho\rangle})\) associated to this partition of the vectorized pure state \(|\rho\rangle\rangle\). The OSEE quantifies the total amount of correlations between the two subsystems. We note also that the OSEE alone does not indicate whether the correlations are mostly classical or quantum. For a pure state \(|g\rangle\), the associated density matrix is \(\rho=|g\rangle\langle g|\) and OSEE of \(\rho\) for a given bipartition is by construction twice the von Neumann entropy associated to the same bipartition in \(|g\rangle\). So, in our model, the OSEE at time \(t=0\) is \(2\ln 2\) for any nontrivial bipartition of the \(N\) qubits. At long times the system reaches a product state (if \(g_{1}=0\) it simply corresponds to all the qubits in state \(|0\rangle\)) which is a state with bond dimension equal to 1 and vanishing OSEE. At intermediate finite times \(t>0\), the OSEE must therefore decrease from its initial value and eventually converge to 0 for any bipartition of the system. It is, however, no longer independent of the bipartition and in the rest of the paper, we will focus of the bipartition in two subsystems of equal size (\(N/2\)). The dynamics of the OSEE for this bipartition in two equal halves of system is shown in Fig. 3. The interesting feature is the appearance of a very clear plateau at \(\text{OSEE}=\ln 2\), whose duration grows with the number of qubits. To explore this behavior, we determined numerically the scaling laws for both the time at Figure 2: Left: Time evolution of \(\langle YYZ\rangle\) for case 1 for different values of \(N\). The line corresponds to Eq. 31 in the case \(n=1\) and \(l=1\). Cases 3 and 4 have similar behaviors, whereas cases 2 and 5 have \(\gamma=0\) and thus \(\langle YYZ\rangle=0\). As discussed in the text, the dynamics of this observable correspond to the product of two exponentials, making it nonmonotonous. Right: Time evolution of the stabilizer \(\langle XZ^{N-1}\rangle\). The rescaled time in the horizontal axis ensures that the different simulations (cases \(1,\cdots,5\)) fall onto the same curve, clearly showing the scaling of the \(\beta\) contribution of the decay rate with the system size. The line corresponds to Eq. 32 for \(n=1\). Figure 4: Time evolution of the OSEE between the two halves of the system for different values of \(N\) in case 1 (same data as Fig. 3). 
Left: time rescaled by a factor \(N\). The collapse of the curves at early times shows that the initial drop of the OSEE takes place over a timescale proportional to \(1/N\). Right: same data with time shifted by \(\delta\ln N\), here \(\delta=1\). In this panel the collapse of the curves at the end of the plateau illustrates the fact that the duration of the plateau is proportional to \(\delta\ln N\). Figure 3: Time evolution of the OSEE between the two halves of the system for different numbers of qubits in case 1. A plateau in the OSEE value at intermediate times, whose duration grows (logarithmically) with the number of qubits, is clearly visible. which the plateau begins and the time at which it ends. We observe a \(1/N\) behavior at early times and \(\ln N\) behavior for the time at the end of the OSEE plateau. The corresponding plots are shown in Fig. 4. At early times, the behavior is similar for cases 3, 4 and 5 with a \(1/N\) behavior (left panel of Fig. 4) and a plateau at \(\text{OSEE}=\ln 2\). Case 2 is different, with a plateau that starts at \(t=0\), see Fig. 5. We suppose that this behavior is related to the fact that \(\beta=0\) in this case. At late times the \(\ln N\) behavior seems universal, with different values of the prefactor \(\delta\) (up to some finite size effects the OSEE depends only on \(t-\delta\ln N\) at late times). In case 2, the data collapse is particularly striking (see right panel of Fig. 5). Comparing the values of \(\delta\) to the parameters in each case, we see that for all our cases the value of \(\delta\) is compatible with \(\delta=1/(2\alpha)\). It is remarkable that the OSEE stays essentially constant for a duration that increases with the number of qubits, although the correlations decrease exponentially on a timescale that is at best constant in \(N\). A large enough system can be in a regime where \(1/\beta\ll t\) and \(t\lesssim\delta\ln(N)\). In case 1 (\(g_{0}\) only) the first condition implies that \(\langle Z\rangle\) is arbitrarily close to 1, while the second condition puts the system in the OSEE plateau, with OSEE\(\simeq\ln(2)\). In other words, the system can have an arbitrarily low density of qubits in the \(|1\rangle\) state and, still, some sizeable correlation between the two halves of the system. We checked that such a plateau is absent if the initial state is a graph state with a lower connectivity. As an example, Fig. 6 shows the OSEE in a case where the initial state is a graph state constructed from a periodic one-dimensional lattice with \(N\) sites (a ring). For a subsystem \(A\) of the form \(A=[1,\cdots,n]\) with \(N-2\geq n\geq 2\) the von Neumann entanglement entropy is equal to \(S_{\text{vN}}^{A}=2\ln(2)\) in such a ring graph state [4], hence the value \(\text{OSEE}=4\ln(2)\) at time \(t=0\). The correlations are plausibly only short-ranged (with a finite correlation length) in that case, so that the correlations between the two halves of the bipartition essentially come from the qubits close to the cut between the two halves. Thus, when the system size becomes significantly larger than the correlation length the OSEE becomes independent of \(N\). The OSEE then drops to zero over a characteristic timescale which is independent of the system size, contrary to the cases where the initial state is a complete graph. ## 5 Conclusion In this paper we introduced an exactly solvable toy model for the decoherence of a graph state.
We have considered the time evolution of the complete graph state under a Lindblad equation with general single-qubit dissipators and a vanishing Hamiltonian. Exploiting the permutation invariance of the model allowed obtaining simple closed formulae for the expectation values of any product of Pauli operators at any time. The method can in principle be extended to other types of graph states, although the number of equations will grow for less symmetric situations. The availability of analytic solutions would be valuable as a guiding tool in understanding the dissipative mechanisms acting in numerical studies of complex setups. We have compared the theoretical results with a numerical solver that was pushed up to 64 qubits. The results are in perfect agreement with the theory, showing that the MPO approach is well suited to simulating this type of problem. MPO representations are often thought of as being appropriate for one-dimensional geometries, but the present example shows that the approach can also efficiently handle some problems with many qubits and in a high spatial dimension when the correlations are not too strong, as is the case here. A peculiar long-lasting plateau was identified in the OSEE between the two halves of the system, pointing to the presence of some nonlocal long-lived correlations (possibly classical) in this setup. Despite the dissipative processes acting everywhere on the qubits, the correlations are seen to survive for a time that increases with the system size. In a future study it would be interesting to compute the OSEE exactly for \(t>0\) in this model. Figure 5: Time evolution of the OSEE between the two halves of the system for different numbers of qubits in case 2. Left: data plotted as a function of time. Right: data plotted as a function of time shifted by \(\delta\ln N\), here \(\delta=0.25\). All the curves are essentially on top of one another and are indistinguishable at the scale of the figure. ## Acknowledgements G.M. thanks Elie Gouzien for useful discussions about graph states, and is supported by the PEPR integrated project EPIQ ANR-22-PETQ-0007 part of Plan France 2030. Figure 6: Time evolution of the OSEE between the two halves of the system, for a simulation of a _ring_ graph state (initialized at \(t=0\)), with the dissipator \(\mathcal{D}_{0}\). The data for \(N=16\), \(32\) and \(64\) are almost on top of each other, showing rapid convergence to a limiting curve in the large \(N\) limit. Contrary to the cases where the initial state is a complete graph, these curves do not exhibit any plateau. In this simulation the MPO bond dimension is \(16\) or below. ## A Further simulation results To encode \(\rho\) as an MPO, the bond dimension that one needs to use is closely related to the (exponential of the) OSEE. As mentioned above, the initial state has \(S_{\text{vN}}=\ln 2\) and OSEE \(=2\ln 2\) for all bipartitions. As a pure state the initial state can be written exactly as a matrix-product state with bond dimension equal to 2, and as a density matrix \(\rho\) it can be represented exactly by an MPO of bond dimension equal to 4. It turns out that the effect of the dissipation does not increase the required bond dimension. The data indicate that an exact description of \(\rho\) is possible with a bond dimension equal to 4 in this model, even for \(t>0\). Note however that due to small numerical errors the bond dimension was sometimes observed to be above 4 in the simulations (but never exceeding 15). All the simulations used a time step \(\tau=0.004\).
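For small \(N\), both the OSEE discussed above and the bond-dimension-4 statement can be checked directly from the exact density matrix: vectorize \(\rho\), split the qubits into two halves, and read off the Schmidt spectrum from an SVD. The sketch below (plain numpy, helper names ours, not the MPO solver) does this for the complete-graph state at \(t=0\), where four equal Schmidt weights give \(\text{OSEE}=2\ln 2\), and for the trivial product state reached at long times.

```python
import numpy as np
from functools import reduce
from itertools import combinations

def complete_graph_state(n):
    psi = reduce(np.kron, [np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)] * n)
    for i, j in combinations(range(n), 2):
        for b in range(2 ** n):
            if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
                psi[b] *= -1
    return psi

def osee(rho, n_a, n_b):
    """OSEE of rho for the bipartition A = first n_a qubits, B = remaining n_b qubits."""
    da, db = 2 ** n_a, 2 ** n_b
    m = rho.reshape(da, db, da, db).transpose(0, 2, 1, 3).reshape(da * da, db * db)
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2 / np.sum(s ** 2)   # Schmidt weights of the normalized vectorized state |rho>>
    p = p[p > 1e-14]
    return -np.sum(p * np.log(p))

n = 4
g = complete_graph_state(n)
rho0 = np.outer(g, g.conj())
print(osee(rho0, n // 2, n // 2) / np.log(2))  # 2.0: OSEE = 2 ln 2 from four equal weights
rho_inf = np.zeros_like(rho0)
rho_inf[0, 0] = 1.0                            # steady state |0...0><0...0| when only g0 > 0
print(osee(rho_inf, n // 2, n // 2))           # 0.0: the product state carries no correlations
```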
2310.15848
On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms
Artificial Intelligence (AI) has made its way into various scientific fields, providing astonishing improvements over existing algorithms for a wide variety of tasks. In recent years, there have been severe concerns over the trustworthiness of AI technologies. The scientific community has focused on the development of trustworthy AI algorithms. However, machine and deep learning algorithms, popular in the AI community today, depend heavily on the data used during their development. These learning algorithms identify patterns in the data, learning the behavioral objective. Any flaws in the data have the potential to translate directly into algorithms. In this study, we discuss the importance of Responsible Machine Learning Datasets and propose a framework to evaluate the datasets through a responsible rubric. While existing work focuses on the post-hoc evaluation of algorithms for their trustworthiness, we provide a framework that considers the data component separately to understand its role in the algorithm. We discuss responsible datasets through the lens of fairness, privacy, and regulatory compliance and provide recommendations for constructing future datasets. After surveying over 100 datasets, we use 60 datasets for analysis and demonstrate that none of these datasets is immune to issues of fairness, privacy preservation, and regulatory compliance. We provide modifications to the ``datasheets for datasets" with important additions for improved dataset documentation. With governments around the world regularizing data protection laws, the method for the creation of datasets in the scientific community requires revision. We believe this study is timely and relevant in today's era of AI.
Surbhi Mittal, Kartik Thakral, Richa Singh, Mayank Vatsa, Tamar Glaser, Cristian Canton Ferrer, Tal Hassner
2023-10-24T14:01:53Z
http://arxiv.org/abs/2310.15848v4
# On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms ###### Abstract Artificial Intelligence (AI) has made its way into various scientific fields, providing astonishing improvements over existing algorithms for a wide variety of tasks. In recent years, there have been severe concerns over the trustworthiness of AI technologies. The scientific community has focused on the development of trustworthy AI algorithms. However, machine and deep learning algorithms, popular in the AI community today, depend heavily on the data used during their development. These learning algorithms identify patterns in the data, learning the behavioral objective. Any flaws in the data have the potential to translate directly into algorithms. In this study, we discuss the importance of _Responsible Machine Learning Datasets_ and propose a framework to evaluate the datasets through a _responsible rubric_. While existing work focuses on the post-hoc evaluation of algorithms for their trustworthiness, we provide a framework that considers the data component separately to understand its role in the algorithm. We discuss responsible datasets through the lens of fairness, privacy, and regulatory compliance and provide recommendations for constructing future datasets. After surveying over 100 datasets, we use 60 datasets for analysis and demonstrate that none of these datasets is immune to issues of _fairness_, _privacy preservation_, and _regulatory compliance_. We provide modifications to the "datasheets for datasets" with important additions for improved dataset documentation. With governments around the world regularizing data protection laws, the method for the creation of datasets in the scientific community requires revision. We believe this study is timely and relevant in today's era of AI. ## I Introduction With the proliferation of artificial intelligence (AI) and machine learning (ML) techniques, different nation-level projects and _technology for good_ programs are touching the lives of billions. These systems have provided incredibly accurate results ranging from face recognition of a million faces [1] to beating eight world champions at Bridge [2]. It has achieved superlative performance in comparison with experienced medical practitioners in identifying pneumonia and analyzing heart scans, among other medical problem domains [3, 4]. Recently, art generated by an AI algorithm won a fine arts competition [5]. While the systems are broadly accelerating the frontiers of smart living and smart governance, they have also shown to be riddled with problems such as bias in vision and language models, leakage of private information in social media channels, and adversarial attacks, including deepfakes. This problematic behavior has been affecting the trustworthiness of AI/ML systems. This has led to the design of the _Principles of Responsible AI_ which focus on designing systems that are safe, trustworthy, reliable, reasonable, privacy-preserving, and fair [6]. Among the different stages of an AI system development pipeline, data collection and annotation is one of the most important ingredients which can have a significant impact on the system. Current AI algorithms are deemed _data-hungry_ and tend to be extremely data-driven, and any irregularities in the datasets utilized during the development of these algorithms can directly impact the learning process. 
Several researchers have demonstrated that non-responsible use of datasets can lead to challenges such as fairness of the model and leakage of private information such as identity information or other sensitive attributes. Certain gender and race subgroups are shown to be under-represented in face-based image datasets [7, 8] while some datasets contain objects specific to certain geographies or specific contexts [9, 10]. Many algorithms have also been shown to suffer from spurious correlations in the dataset [11, 12, 13]. Similarly, concerns regarding the leakage of private information from popular datasets such as ImageNet have surfaced over recent years. In order to build responsible AI systems, it is therefore important to use datasets that are responsibly curated. We assert that _Responsible Datasets leads to building Responsible AI Systems_. Current research for understanding and evaluating trustworthiness focuses primarily on the performance of the models. However, by identifying these issues at the dataset level, we can lay the ground for creating better and _responsible_ datasets, and better AI. With the motivation to evaluate the reliability or trustworthiness of data, in this research, we present a framework to evaluate datasets via the proposed _responsible rubric_ across the axes of fairness, privacy, and regulatory compliance (refer to Figure 1). To the best of our knowledge, this is the first framework that quantitatively evaluates the trustability of the data used for training ML models. For defining dataset fairness, we consider the impact of three factors: diversity, inclusivity, and reliability of annotations. _Inclusivity_ considers whether different groups of people are present in the dataset across parameters of sex, skin tone, ethnic group, and age, and _diversity_ quantifies the distribution of these groups in the dataset. For evaluating privacy preservation in datasets, we identify vulnerable annotations that can lead to the leakage of private information. Finally, we assess datasets for their degree of compliance with contemporary regulatory norms. Different governments around the world have approved various data privacy laws in the past few years. The popular General Data Protection Regulation (GDPR) [14] requires the right to erasure based on its Article 17 (denoted as [Art. 17, GDPR]) and may require the consent of data subjects [Art. 12, GDPR] among other laws for data protection. Subjects providing data should have complete knowledge as to how their data will be used, and they should have the ability to revoke their consent. Further, if a dataset contains information such as images or potentially unethical data, there should be a mechanism to report such incidents. Through the axes of fairness, privacy and regulations, we demonstrate the applicability of the proposed framework by analyzing datasets from biometrics and healthcare domains, particularly face recognition and chest XRay datasets. After surveying over 100 datasets and discarding datasets unusable for this study because of their small size or unavailability, we utilized a total of 60 datasets. Some of our key observations are as follows: * Most of the existing datasets suffer on all three axes of _fairness_, _privacy_ and _regulatory compliance_, as per the proposed rubric. * Fairness is a major concern for most of the datasets we surveyed. * Most existing datasets do not focus on regulatory compliances. 
* Our analysis highlights that curating datasets from the web poses major risks to privacy preservation. * We also observe the _fairness-privacy paradox_ that exists in the development of datasets where the presence of sensitive attributes aids fairness evaluation but potentially leaks a subject's private information. Finally, we provide recommendations for constructing datasets. These recommendations can serve as an ethical sounding board for development of responsible datasets in the future. ## II Related Work In recent years, there has been an increasing focus on datasets being used in ML and deep learning. Specifically, such concerns are heightened when datasets pertain to the collection of sensitive data such as biometrics and medical data. According to a recent study, the importance of dataset quality being used in AI/ML is constantly undermined, and the data collection work is often undervalued in the community [32]. Researchers have addressed the need for data-centric AI as well as the impact of regulations and policies on trustworthy AI [33]. Heger et al. [34] conducted interviews with ML practitioners and discovered the need to emphasize the relationship between data documentation and responsible AI. Scheuerman et al. identified the patterns followed in the collection process of computer vision datasets based on 1000\(+\) publications and also emphasized the importance of proper dataset documentation [35]. Fig. 1: We introduce the concept of Responsible Machine Learning Datasets and propose a quantitative rubric along with recommendations for future datasets. There has been discussion around the collection of socio-cultural data where researchers have highlighted the need to design institutional frameworks and procedures inspired by archival data [36]. Some of the essential considerations are consent, inclusivity, power, transparency, ethics, and privacy. Similarly, focusing on the entire dataset development pipeline, Peng et al. [37] studied nearly 1000 papers citing problematic datasets such as Labeled Faces in the Wild (LFW), MS-Celeb-1M (_decommissioned_), and DukeMTMC (_decommissioned_). The authors provide recommendations for dataset creators as well as conference program committees to encourage more ethical creation of datasets. In 2021, Gebru et al. [38] proposed a comprehensive 'datasheet' detailing information about the dataset accompanying its release. The datasheets are designed to raise transparency and accountability in the datasets. On similar lines, Hutchinson et al. introduced a framework that helps build accountability and transparency in the data development process [39]. By segregating data development process into various stages, the authors described the roles played by individuals such as requirements owner, stakeholder, and reviewer at each stage. Palluda et al. [40] promoted the usage of quantitative as well as qualitative measures for the effective development of datasets. They showcase how representational harms and spurious correlations present in the datasets can lead to unfair decisions. **Fairness in Datasets:** In order to build fairer AI, researchers have studied bias in various settings [41, 42, 43, 44]. A recent report by NIST for identifying and managing bias in AI has cited the reliance on large-scale datasets as the leading cause of using unsuitable datasets for training and evaluation [45]. 
The report discusses various challenges and factors associated with datasets in modern AI such as lack of representation, and statistical and socio-technical factors. Towards better representation in AI, Kamikubo et al. analyzed 190 accessibility datasets. Their analysis revealed harmful trends such as lack of older adults for autism, developmental and learning communities. To this end, they suggested more meaningful interactions with data contributors [46]. Similarly, in the domain of NLP, some researchers have proposed the use of data statements that can help understand the intent and, specifically, the biases of the data [47]. The data statement emphasizes the inclusion of information such as the curation rationale and annotator demographic. Some researchers have proposed toolkits for evaluation of bias in the dataset which include object-based, person-based, and geography-based analyses through annotations [48]. **Privacy Leakage in Datasets:** In this work, we refer to the term _privacy leakage_ as "the unintended or unauthorized/accidental exposure of sensitive or protected personal data/information, which may compromise an individual's identity." The concerns of privacy leakage surrounding datasets in deep learning have grown over the past few years. In Birhane et al. [49], the authors discuss the issues of consent and privacy breaches in the context of large-scale vision datasets such as ImageNet. They highlight the harms associated with poor dataset curation practices and propose mandatory institutional reviews. In the field of security and privacy, researchers have attempted to preserve privacy by adopting different concepts, such as \(k\)-anonymity [50] and differential privacy [51]. To quantify privacy leakage, researchers have proposed various metrics, such as \(l\)-diversity [52], \(k\)-anonymity [50], \(t\)-closeness [53], and m-invariance [54], among others. A detailed list of these metrics has been provided by Wagner et al.[55]. These metrics are designed to capture the extent of privacy leakage from the perspective of an adversary with knowledge [53], equivalent representation of sensitive attributes [52], protection from homogeneity attacks [50], and other reasons. After the introduction of \(k\)-anonymity [56], several researchers have developed techniques for facial privacy preservation. Zang et al. [57] devised a function to add random noise to existing data samples to synthesize new samples. This was aimed at masking sensitive information in the dataset while preserving the performance of the model. Chhabra et al. [58] proposed an algorithm that provides the control to the user to anonymize k-facial attributes while preserving other attributes and identity information. Li et al. [59] proposed a technique to anonymize identity and attribute while maintaining the data utility. The authors also performed quantification of privacy preservation through \(k\)-anonymity. In other works, researchers have used Mechanical Turk participants to identify privacy-sensitive information in images to automate the privacy attribute selection/identification task for obfuscation [60]. Gervais et al. [61] propose a framework to infer the location by analysing the consumer purchase histories. Further, experiments have demonstrated the benefit of teaching algorithms to predict the presence and purpose of private information. Orekondy et al. [62] propose an algorithm to predict a leakage risk score for the input image using their proposed VISPR dataset. Orekondy et al. 
[63] have also proposed a redaction by segmentation approach to aid users in selectively sanitizing images of private content. The proposed approach segments out the privacy attributes in images and provides a privacy vs. data-utility evaluation. Gurari et al. [64] propose a dataset and study visual privacy issues faced by people who are blind and are trying to learn about their physical surroundings. **Regulatory Compliance in Datasets:** With the increasing emphasis on data protection, various countries around the world have put legislation in place for data security and privacy. According to a report, 157 countries in the world had enacted data privacy laws by mid-March 2022 [65, 66, 67]. Most of these laws are influenced by the GDPR [14] but contain certain variations. The GDPR may prohibit the processing of biometric data unless explicit consent from the subjects is provided. By providing the _right to be forgotten_, the GDPR puts the subject in charge of their data. Different studies have been conducted to understand the impact of the GDPR on artificial intelligence [68, 69]. In Table I, we summarize data privacy laws for some of the countries around the world. There are other laws specific to certain kinds of data, such as the Health Insurance Portability and Accountability Act (HIPAA) [70] for health data in the US and the Biometric Information Privacy Act (BIPA) [28] protecting biometric information in the state of Illinois, US. Other US states are actively working towards enforcing their own data privacy laws [71]. The privacy implications of some of these acts have also received significant attention in the research community [72]. Notably, the European Commission released the _Ethics Guidelines for Trustworthy AI_, which discusses the framework, foundations, and possible assessments for trustworthy AI [73], and is in the process of debating and amending the Artificial Intelligence Act [74] to address risks associated with AI applications. Recent work discusses the impact of the Artificial Intelligence Act on facial processing applications [75]. While different works identify different problematic aspects of dataset collection, very few works have looked at the factors of fairness, privacy, and regulatory compliance in datasets holistically. In this work, we provide quantitative as well as qualitative insights across the three factors and show how data collection in AI needs to turn towards better and more responsible datasets.

## III Methods

In this section, we describe the methodology adopted for designing the framework for _Responsible Datasets_. We quantify datasets across the axes of fairness, privacy, and regulatory compliance. The concerns regarding these factors may vary from domain to domain. For example, fairness concerns in a face image dataset may differ from those in an object or egocentric dataset. The quantification in this section is based on datasets centered around individuals and, specifically, face-based datasets.

### _Quantifying Dataset Fairness_

In deep learning, fairness concerns have been raised for datasets in multiple domains and in different contexts [7, 10]. In face-based image datasets, some sex and race subgroups may be under-represented [7, 8]. Object-based datasets may contain objects specific to certain geographies or specific contexts [10, 9]. Similarly, text-based and multi-modal datasets may suffer from spurious correlations, leading to bias in the performance of trained models [11].
In this work, we consider the impact of three factors for quantification of dataset fairness- diversity, inclusivity, and labels (See Fig. 2). In the context of face-based datasets, _inclusivity_ quantifies whether different groups of people are present in the dataset across parameters of sex, skin tone, ethnic group, and age. _Diversity_ quantifies the distribution of these groups in the dataset, with an assumption that a balanced dataset is the most fair. While a balanced dataset does not guarantee equal performance, existing work has shown improved fairness with the use of balanced datasets [76, 77]. We note that such a dataset may not be ideal in many cases, but it acts as a simplifying assumption for the proposed formulation. Finally, we consider the reliability of the _labels_ depending on whether they have been self-reported by the subjects in the dataset or are annotated based on apparent characteristics. We consider four demographic groups- _sex_, _skin tone_, _ethnicity_, and _age_. The different subgroups considered for these demographics are specified in Table II. We utilize the information regarding the biological sex of an individual while leaving room for error in the class _Other_. For ethnic subgroups, we take inspiration from the FairFace dataset [78] with the addition of the mixed-race as a separate category. Ethnicity subgroups around the world tend to be extremely variable. We have adopted the maximum ethnic subgroups as represented in the literature by the FairFace dataset. The age subgroup classification is based on the categories in the AgeDB dataset [79]. The age annotations were binned as per AgeDB categorization for datasets that provided continuous age values. The proposed formulation for quantification of fairness in a dataset is dependent on the annotations available in a dataset (See Fig. 2). Let **D** = {sex, skintone, ethnicity, age} denote the complete set of demographics considered for evaluation of any dataset, and **S** denote the corresponding subgroups in each demographic (Refer Table II for subgroups considered for each demographic). Then, **D\({}_{1}\)** = sex, and **S\({}_{1}\)**= {male, female, other}. For a given dataset, **d** denotes the set of demographics annotated in the dataset, and **s** denotes the subgroups corresponding to those demographics. For example, for the AgeDB dataset [79], **d** = {sex, age}, and **s\({}_{i}\)** = {male, female} for \(ith\) demographic in **d**, and **s\({}_{ij}\)** = male for \(jth\) subgroup of \(ith\) demographic (\(i=1,j=1\)). Then, the **inclusivity**\(r_{i}\) for each demography is defined as the ratio of demographic subgroups present in the dataset and the pre-defined demographic subgroups in **S\({}_{i}\)**. This is quantified as- \[r_{i}=|s_{i}|/|S_{i}| \tag{1}\] The **diversity**\(v_{i}\) is calculated using Shannon's diversity index [80] to capture the distribution of different subgroups for a given demography \(d_{i}\) as follows, \[p_{ij}=num(s_{ij})/\sum_{j}num(s_{ij}) \tag{2}\] \[v_{i}=-\frac{1}{ln(|s_{i}|)}\sum_{j}p_{ij}*ln(p_{ij}), \tag{3}\] where \(num(\textbf{s}_{ij})\) denotes the number of samples for the \(jth\) subgroup of the \(ith\) demographic in the dataset. In certain cases where the number of samples is not available, we consider \(num\) to denote the number of subjects in the dataset. Fairness across each of the demographics is measured between 0 to 1. 
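To make Eqs. 1-3 concrete, the short Python sketch below computes the inclusivity and diversity scores for a single demographic from per-subgroup sample counts. It is only an illustration of the formulas above: the reference subgroup lists are abbreviated stand-ins for Table II, and the example counts are hypothetical rather than taken from any surveyed dataset.

```python
import math

# Abbreviated, hypothetical stand-ins for the reference subgroups S_i of Table II.
REFERENCE_SUBGROUPS = {
    "sex": ["male", "female", "other"],
    "skintone": ["type1", "type2", "type3", "type4", "type5", "type6"],
}

def inclusivity(annotated_subgroups, demographic):
    """Eq. 1: r_i = |s_i| / |S_i|, the fraction of reference subgroups present."""
    return len(set(annotated_subgroups)) / len(REFERENCE_SUBGROUPS[demographic])

def diversity(counts):
    """Eqs. 2-3: Shannon diversity of the subgroup distribution, normalized by
    ln(|s_i|) so that a perfectly balanced dataset scores 1."""
    total = sum(counts.values())
    if len(counts) < 2 or total == 0:
        return 0.0  # guard: a single subgroup gives ln(1) = 0 in the denominator
    p = [n / total for n in counts.values() if n > 0]
    return -sum(q * math.log(q) for q in p) / math.log(len(counts))

# Hypothetical dataset annotated with sex only: 7000 male and 3000 female samples.
counts = {"male": 7000, "female": 3000}
r = inclusivity(counts, "sex")   # 2/3: the "other" subgroup is missing
v = diversity(counts)            # < 1 because the two subgroups are imbalanced
print(f"inclusivity r = {r:.2f}, diversity v = {v:.2f}, r*v = {r * v:.2f}")
```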
For example, if a dataset contains images corresponding to each of the six skin tones, it will have an inclusivity score of 6/6 = 1, and if the number of samples is balanced across each of the subgroups of skin tone, the diversity score will also be 1. When combined, that will provide an overall score of 1*1 = 1. By multiplying the inclusivity and diversity scores, we are providing information about the presence as well as distribution of samples corresponding to a given demographic. These scores are added for the four demographic attributes. The **label score**\(l\), is then calculated based on whether the labels or annotations for the dataset are self-reported, classifier-generated, or apparent. _Self-reported labels_ indicate that the subjects provided their demographic information as a part of the data collection process. _Classifier-generated labels_ imply that the demographic labels were obtained through an automated process of classification. Finally, _apparent labels_ indicate that the annotations were done by an external annotator after observing the images (for example, through Amazon Mechanical Turk). Based on the type of annotations, a _label score_ is assigned for the dataset between 0 to 1, with self-reported labels assigned a value of 1, classifier-generated labels assigned 0.67, and apparent labels assigned a value of 0.33. machine-generated labels, sometimes referred to as _pseudo labels_, are assigned a higher score since they have been shown to be more reliable than human-annotated labels for use in deep learning-based applications [81, 82, 83]. We acknowledge that there may be different perceptions of reliability in annotation based on the task and nature of the data. However, the labels generated by a trained classifier are consistent while that may not be true for human annotators. Based on this rationale, we assign a higher score to classifier-generated labels. In cases where the labels are collected using more than one of the three categories, an average of the corresponding categories' scores is taken. For medical datasets, a score of 1 is provided if a medical professional provides/validates the annotations, else a score of 0 is provided. The label score is provided for the entire dataset as per the current formulation. To calculate the **fairness score**, \(F\), for the dataset, the factors of inclusivity, diversity, and labels are combined as follows, \[F=\sum_{i}{(r_{i}*v_{i})}+l \tag{4}\] The fairness score is designed such that a higher value indicates a fairer dataset while a lower value indicates a less fair dataset. ### _Quantifying Dataset Privacy_ Deductions and attacks can be carried out using the annotated labels in publicly available supervised datasets (See Figure 3). Annotated attributes in a face dataset, for example, can be used for face profiling [84, 85, 86, 87]. In contrast, vehicle registration numbers from location datasets can be used to make malicious deductions and track someone down [62, 88]. As a result, the more annotations there are in the dataset, the more privacy is potentially leaked. The extent of privacy leakage can be summarised in terms of the quantity of information leaked and the extent to which private information is exposed in the annotated labels. In this work, for quantification of privacy leakage in the publicly available datasets, we identify vulnerable label annotations that can lead to the potential leakage of private information and devise a mathematical formulation. 
This formulation employs the dataset's annotated labels to quantify the potential privacy leakage. We identify six label annotations that are widely available in the datasets and lead to leakage of privacy in datasets: _name identification_, _sensitive and protected attributes_, _accessories_, _critical objects_, _location inference_, and _medical condition_. Let the set \(A\) constitute these identified attributes, i.e.: \[A=\{A_{N},A_{SP},A_{AC},A_{C},A_{L},A_{M}\} \tag{5}\] The attributes used in defining \(A\) are as follows:

* _Name Identification Information \(A_{N}\):_ This attribute refers to the name of each individual annotated in the dataset. This annotation potentially leads to the highest level of privacy leakage.
* _Sensitive and Protected Attribute Information \(A_{SP}\):_ These attributes refer to information regarding gender, sexual orientation, race, past records, etc., corresponding to an individual.
* _Accessory Information \(A_{AC}\):_ This attribute denotes the presence of accessories such as hats and sunglasses in a face image as well as other attributes such as five o'clock shadow.
* _Critical Objects \(A_{C}\):_ This attribute denotes the presence of objects revealing the identity of a person, such as credit cards or signatures.
* _Location Information \(A_{L}\):_ This attribute denotes the presence of information in the image that can potentially disclose a person's location, such as geographical coordinates or popular landmarks in the image background.
* _Medical Condition Information \(A_{M}\):_ This attribute denotes the presence of any information regarding the medical condition of an individual in the dataset.

For a dataset, we manually check for the presence of the annotations from the aforementioned list to estimate its privacy leakage, and a point is awarded for each attribute annotation. The privacy leakage score (\(PL\)) is then calculated as the sum of all the present attributes as described below: \[PL=\sum_{i=1}^{6}A_{i}. \tag{6}\] Finally, the **privacy preservation score**, \(P\), for a given dataset is estimated as, \[P=(|A|-PL). \tag{7}\] \(P\) indicates the amount of information being preserved in a dataset and represents its dependability for public use. We note that while the presence of these annotations constitutes privacy leakage in our formulation, it aids the computation of the fairness score described in the previous section. We discuss this fairness-privacy paradox in detail later in the text.

Fig. 2: (Top) The three aspects involved in fairness quantification- Inclusivity, Diversity, and Labels, and the questions they answer. (Bottom) The formulation employed for the calculation of the fairness score.

### _Quantifying Regulatory Compliance_

Different countries around the world have approved various data privacy laws in the past few years. One of the most widely accepted documents covering data privacy laws is the one applicable in European countries known as the GDPR [14]. The following laws can be applied to deep learning methodologies and/or datasets,

* Right to be forgotten (right to erasure) [Art. 17]
* Consent of data subjects [Art. 12]
* Right to object/restriction to the processing of their data [Art. 5, 6, 9, 18, 19]
* Right to rectification [Art. 16]
* Right of access by the data subject [Art. 15]
* Right to object and automated individual decision-making [Art. 21 and 22] [Recital 71]
* Right to lodge complaint [Art. 77]
* Right to effective judicial remedy [Art.
78,79] Some of these laws restrict the use of users' personal data unless their consent is available for that particular application [Art. 5, 6, 9, 18, 19]. Other laws and conditions specified in the GDPR include, * Right to data portability [Art. 20] * Security of personal data [Art. 32, 33, 34] * Conditions for consent [Art. 7] * Conditions for protection of children's personal data [Art. 8] * Requirements for regular data protection impact assessments (DPIA) [Art. 35] * Cryptographic protection of sensitive data * Breach notification requirements. [Art. 33] Apart from the laws specified above, datasets in deep learning can benefit from existing mechanisms for institutional approval (such as IRB) and newer requirements set by popular conferences such as ethics and impact statement for datasets [91]. In this paper, the **regulatory compliance score**, \(R\), in the dataset is quantified based on three factors- institutional approval (yes/no: the numerical value of 1/0), the subject's consent to the data collection (yes/no: the numerical value of 1/0), and the facility for expunement/correction of the subject's data from the dataset (yes/no: the numerical value of 1/0). If a dataset satisfies all three criteria, a compliance score of \(3\) is provided. While the absence of a data subject's consent may not necessarily breach regulatory norms, for lack of a more subtle evaluation, we utilize _subject consent_ in the dataset as one of the factors for compliance. For example, the privacy rule in HIPAA compliance does not restrict the distribution of de-identified health data. The different factors for compliance are manually validated via information present in the published paper, webpage, and/or GitHub page for the dataset. Unless the information is explicitly specified in the aforementioned resources, it is assumed to be absent in which case we assign a value of zero. ## IV Results For this work, we surveyed a large number of datasets. Datasets containing human subjects were selected for the study. While fairness and privacy issues persist across different data domains such as objects and scenes [10, 9], current regulatory Fig. 3: Privacy leakage through the information available in datasets. The sample is representative of information present in datasets such as the LFW dataset [89, 90]. norms are designed for people. While it is possible to extend the concepts presented in this study to other domains, we limit our discussion to face-based and medical imaging datasets. After filtering through a total of 100 datasets and discarding datasets that are decommissioned, small in size (less than 100 images), and whose data could not be downloaded/accessed/requested, we were left with 60 datasets. These 60 datasets are used for the analysis and quantification of the responsible rubric. We use 52 face-based biometric datasets (Table VII), and eight chest Xray based medical datasets (Table VIII). For face-based datasets, we filtered through over 120 datasets removing datasets that had been decommissioned, older than 2010, and whose data was inaccessible. For chest Xray datasets, we similarly surveyed through about 20 datasets before obtaining the eight analyzed in this work. We quantify the datasets across the dimensions of fairness, privacy, and regulatory compliance. Using the specified quantification methodology, we obtain a 3-tuple containing scores across the three dimensions. Fig. 
4: The summary of fairness, privacy, and regulatory compliance scores through histogram visualization for the datasets we surveyed. (Left) The maximum value of the fairness score that can be obtained is 5, but it is observed that the fairness scores do not exceed a value of 3. (Middle) While most datasets in our study preserve privacy in terms of not leaking location or medical information, very few provide perfect privacy preservation. (Right) Most datasets comply with no regulatory norm or only one. We can observe from this plot that most datasets provide a low fairness score and perform poorly on the regulatory compliance metric. Fig. 5: Cluster analysis based on the 3-tuple quantification of fairness, privacy, and regulatory compliance for (a-b) only face-based datasets and (c-d) jointly with medical datasets. (a, c) The 3-D scatter plot of the different datasets across the three axes with the _FPR dataset_ plotted with perfect fairness, privacy preservation, and regulatory compliance. (b, d) The scatter plot after performing DBSCAN clustering with \(eps=1\). We observe that the FB Fairness Dataset and the UTKFace dataset lie the closest to the _FPR dataset_. Analysis across the three different dimensions has been obtained through Fig. 4 where the distribution of scores has been plotted. **Fairness in Datasets:** The fairness of datasets is calculated based on Eqn. 4. Representative and balanced datasets have been shown to provide fairer performance across different demographic subgroups [76]. The fairness metric described in this work provides a maximum value of 5, with five being the fairest. The average value for the fairness score obtained for the datasets comes out to be 0.96 \(\pm\) 0.64, signifying that, on average, the fairness score of a dataset ranges from 0.32 to 1.6. The detailed results are provided in Tables III and IV for biometric and medical datasets, respectively. The UTKFace dataset is observed to be the fairest, with a score of 2.71 among the datasets listed here, providing maximum representation. It should be noted that with a maximum score of 5, the UTKFace dataset achieves slightly more than half that score. Interestingly, the average fairness score for the eight medical datasets was 1.34 \(\pm\) 0.17 while the same score for biometric datasets came out to be 0.90 \(\pm\) 0.67. **Privacy Preservation in Datasets:** The privacy preserved in datasets is computed based on the presence of privacy-compromising information in the annotations, such as names of subjects and the presence of critical objects such as credit cards. A \(P\) indicating the privacy preservation capacity and \(PL\) indicating privacy leakage of the dataset are calculated. The distribution of \(P\) for privacy quantification is presented in Fig. 4. The best value of \(P\) is 6. We observe that the DroneSURF dataset does not contain any private information, which makes it perfectly privacy-preserving. The medical datasets in the study de-identify their subjects but naturally leak information about medical conditions, while some further provide sensitive information such as location. **Regulatory Compliance in Datasets:** With modern IT laws in place, the regulatory compliance of datasets is quantified based on institutional approval of the dataset, subject's consent to the data collection, facility for expungement/correction of the subject's data from the dataset. Based on these criteria, the compliance scores are calculated with a maximum value of three. 
The distribution of scores is provided in Fig. 4. On average, a regulatory score value of 0.58 is obtained. We observe that the FB Fairness Dataset (Casual Conversations) satisfies all regulatory compliances, thereby obtaining the maximum regulatory score, whereas most datasets provide a score of 0 or 1. **Fairness-Privacy Paradox in Datasets:** Many face-based biometric datasets provide sensitive attribute information. This leads to a _fairness-privacy paradox_ where the presence of these annotations enables fairness quantification but leads to privacy leakage. One way to remedy the situation is by providing population statistics in the published dataset papers instead of sensitive attribute labels for each sample. However, current fairness algorithms are evaluated through sensitive attribute annotations in the dataset, and their absence can hinder the fairness evaluation process. In differential privacy-based solutions, it has been observed that the performance degradation is unequal across different subgroups [144], highlighting the need for labels for fairness analysis. The _fairness-privacy paradox_ remains an open problem for datasets containing sensitive attribute information such as biometrics and medical imaging. With ongoing discussion regarding concerns for privacy and fairness, regulations can sometimes provide conflicting guidance on privacy laws and proposed AI laws, giving researchers and industry a reason to approach this paradox with caution in dataset development. Recent work in face recognition is exploring models trained using synthetically generated datasets[145, 146, 147]. However, the training of powerful generative models utilizes large face datasets. Some diffusion-based models have also been shown to replicate the training data during generation. **Holistic View of Responsibility in Datasets:** When the aforementioned factors are studied in conjunction, we obtain a three-dimensional representation of the datasets. The 3-tuple provides insight into how responsible a dataset may be considered for downstream training. To observe the behavior of the 3-tuple visually, we plotted a 3-D scatter plot for the face datasets along with a hypothetical _FPR dataset_ (Fig. 5(a)). The hypothetical FPR dataset has a perfect fairness, privacy, and regulatory score. After applying the DBSCAN algorithm with \(eps=1\) (the maximum distance between two points to be considered as a part of one cluster), we observe five clusters with two outliers. The FB Fairness Dataset and the UTKFace dataset come out to be outliers with a Euclidean distance of 3.59 and 3.20 units from the _FPR dataset_. When compared to the other clusters, we observe that FB Fairness Dataset and the UTKFace dataset lie the closest to the _FPR dataset_. Other cluster centers lie at a distance of 4.56, 4.79, 5.11, 5.20, and 5.33 units from the _FPR dataset_, with the clusters containing 4, 7, 3, 32, and 4 points, respectively. The next closest cluster is formed by the LAOFIW, 10k US Adult Faces Database, CAFE Database, and IISICIFD datasets with average scores of 0.67, 5, and 2 for fairness, privacy, and regulatory compliance, respectively. Similar observations can be made when the scatter plot includes medical datasets along with the face datasets(Fig. 5(c-d)). The numerical results are tabulated in Tables III and IV. A weighted average of the three scores is calculated by dividing each score by its maximum value and then taking an average that provides a value in the range of 0 to 1 (Table III). 
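The weighted average described above, and the distance to the hypothetical _FPR dataset_ used in the cluster analysis, can be written down in a few lines. The sketch below is a minimal Python illustration assuming the maximum scores defined earlier (5 for fairness, 6 for privacy, 3 for regulatory compliance); the example 3-tuples are placeholders, not values from Tables III and IV, and the `min_samples` setting for DBSCAN is an assumption of this sketch.

```python
import numpy as np
from sklearn.cluster import DBSCAN

MAX_SCORES = np.array([5.0, 6.0, 3.0])   # fairness, privacy, regulatory maxima
FPR_IDEAL = MAX_SCORES                   # hypothetical FPR dataset: (5, 6, 3)

def responsibility(scores):
    """Weighted average: divide each score by its maximum, then average (0..1)."""
    return float(np.mean(np.asarray(scores) / MAX_SCORES))

def distance_to_fpr(scores):
    """Euclidean distance of a dataset's 3-tuple to the ideal FPR point."""
    return float(np.linalg.norm(np.asarray(scores) - FPR_IDEAL))

# Placeholder (fairness, privacy, compliance) tuples for two hypothetical datasets.
tuples = {"dataset_a": (2.7, 4.0, 1.0), "dataset_b": (0.8, 5.0, 0.0)}
for name, t in tuples.items():
    print(name, round(responsibility(t), 2), round(distance_to_fpr(t), 2))

# With the full table of 3-tuples, a grouping in the spirit of Fig. 5 can be
# obtained with DBSCAN using eps = 1, as in the text.
labels = DBSCAN(eps=1, min_samples=2).fit_predict(np.array(list(tuples.values())))
print("cluster labels:", labels.tolist())
```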
By utilizing this average, we observe that the top three responsible datasets come out to be the FB Fairness dataset (Casual Conversations), the Indian Institute of Science Indian Face Dataset (IISCIFD), and the UTKFace dataset. A high regulatory compliance score plays an important role in the overall responsibility score of FB Fairness and IISCIFD datasets. In contrast, a high fairness score imparts UTKFace a high responsible rubric value. To summarize the observations made over the existing face datasets, we find that- * Most of the existing datasets suffer on all the three axes of _fairness_, _privacy_ and _regulatory compliance_ as per the proposed metric. For example, the UTKFace dataset is among the fairest datasets but performs poorly on regulatory compliance. On the other hand, the LFWA dataset lacks on all three fronts- fairness, privacy preservation, and regulatory compliance. * While many works claim fairness as the primary focus in their datasets, these datasets provide poor fairness scores on evaluation. One such example is the DiveFace dataset. The fairness quantification of datasets using our framework shows that being fair is a major concern with 91% of the existing datasets obtaining a fairness score of two or less out of five. * A vast number of large-scale datasets in Computer Vision are web-curated without any institutional approval. These datasets are often released under various CC-BY licenses even when these datasets do not have subject consent. We found that these datasets also fares low on the fairness front since the annotations are not always reliable, posing major risks to overall data responsibility. * Following regulatory norms effectively improves the responsibility rubric for a given dataset; however, most datasets are not compliant based on the available information with 89% datasets having a compliance score of 0 or 1. * When comparing fairness, privacy, and regulatory scores, it is clear that the privacy scores are higher in general. It is worth noting that privacy standards and constraints are already defined and have existed for a few years now [14], and datasets are possibly collected with these regulations in mind. This further indicates a need for fairness and regulatory constraints that promote data collection with higher fairness and regulatory standards. **Recommendations:** Based on the observations of our framework on a large number of datasets, we provide certain recommendations to aid better dataset collection in the future. * _Institutional Approval, Ethics Statement, and Subject Consent:_ Datasets involving human subjects should receive approval from the institutional review board (such as IRB in the United States). Future regulations may require consent from subjects to be obtained explicitly for the dataset and its intended use. * _Facility for Expungement/Correction of Subject's Data:_ Datasets should provide a facility to contact the dataset owners to remove and/or correct information concerning the subject. This is necessary to be compliant with data privacy laws such as GDPR. Some existing datasets already provide the facility for expungement in their datasets such as the FB Fairness Dataset, IJB-C and the UTKFace datasets. * _Fairness and Privacy:_ Datasets should be collected from a diverse population, and distribution across sensitive attributes should be provided while being privacy-preserving. The proposed fairness and privacy scores can aid in quantifying a dataset's diversity and privacy preservation. 
* _Datasheet:_ Datasets should curate and provide a datasheet containing information regarding the objectives, intended use, funding agency, the demographic distribution of subjects/images, licensing information, and limitations of the dataset. By specifying intended use, the data can be restricted for processing outside of intended use under the GDPR. An excellent resource for the construction of datasheets is provided by Gebru et al. [38]. We propose modifications in the datasheet by Gebru et al. by adding questions concerning fairness, privacy, and regulatory compliance in datasets (Refer Tables V and VI). **Limitations:** The formulation for quantification in this work considers a dataset fair based on the distribution of its labels. However, we do not account for the diversity of the data such as the presence of duplicate images for particular subgroups. Further, we do not comment on equity vs equality in the distribution of images. We note that it may be desirable to have unequal distribution between groups (e.g., when one group is harder to process than others and requires more data for the model to reach equal performance across groups) for some applications. Further, the current formulation for fairness, privacy, and regulatory scores is provided for datasets constituting individuals. While object-based datasets may also suffer from fairness issues, current data regulations are designed in accordance with the impact on human individuals. We leave analysis on object-based datasets for future work. Finally, we would like to note that the recommendations and datasheets introduced in this work are intended to establish the highest standards which can be challenging to achieve given the capabilities of current technologies. These recommendations are meant to serve as a "north star" and reaching them requires deliberate research effort. The fairness-privacy paradox remains an open problem in the community. Similarly, removing instances of data from already trained models requires _unlearning_ techniques, which while being actively explored, are far from being perfect. ## V Conclusion While the vast majority of the existing literature focuses on the design of trustworthy machine learning algorithms, in this work, we offer a fresh perspective for evaluating reliability through a discussion of responsible datasets with respect to fairness, privacy, and regulatory compliance. A detailed analysis is performed for face-based and chest Xray image datasets. We further provide recommendations for the design of'responsible ML datasets.' With governments around the world regularizing data protection laws, the method for the creation of datasets in the scientific community requires revision, and it is our assertion that the proposed quantitative measures, qualitative datasheets, and recommendations can stimulate the creation of responsible datasets which can lead to building responsible AI systems.
2305.11898
Neural information coding for efficient spike-based image denoising
In recent years, Deep Convolutional Neural Networks (DCNNs) have outreached the performance of classical algorithms for image restoration tasks. However most of these methods are not suited for computational efficiency and are therefore too expensive to be executed on embedded and mobile devices. In this work we investigate Spiking Neural Networks (SNNs) for Gaussian denoising, with the goal of approaching the performance of conventional DCNN while reducing the computational load. We propose a formal analysis of the information conversion processing carried out by the Leaky Integrate and Fire (LIF) neurons and we compare its performance with the classical rate-coding mechanism. The neural coding schemes are then evaluated through experiments in terms of denoising performance and computation efficiency for a state-of-the-art deep convolutional neural network. Our results show that SNNs with LIF neurons can provide competitive denoising performance but at a reduced computational cost.
Andrea Castagnetti, Alain Pegatoquet, Benoît Miramond
2023-05-15T09:05:32Z
http://arxiv.org/abs/2305.11898v1
# Neural Information Coding for Efficient Spike-Based Image Denoising ###### Abstract In recent years, Deep Convolutional Neural Networks (DCNNs) have outreached the performance of classical algorithms for image restoration tasks. However most of these methods are not suited for computational efficiency and are therefore too expensive to be executed on embedded and mobile devices. In this work we investigate Spiking Neural Networks (SNNs) for Gaussian denoising, with the goal of approaching the performance of conventional DCNN while reducing the computational load. We propose a formal analysis of the information conversion processing carried out by the Leaky Integrate and Fire (LIF) neurons and we compare its performance with the classical rate-coding mechanism. The neural coding schemes are then evaluated through experiments in terms of denoising performance and computation efficiency for a state-of-the-art deep convolutional neural network. Our results show that SNNs with LIF neurons can provide competitive denoising performance but at a reduced computational cost. Andrea Castagnetti, Alain Pegatoquet, Benoit Miramond+ Universite Cote d'Azur, CNRS, LEAT [email protected] Image denoising, Spiking Neural Networks, Neural information coding, Neuromorphic computing. Footnote †: This work has been supported by the French governement through 31A Côte d’Azur institute, reference ANR-19-P3IA-0002 ## 1 Introduction and Related Work Smartphone cameras, because of their reduced size and high pixel count, are intrinsically more susceptible to noise than conventional digital cameras. Image denoising algorithms are therefore intensively used in smartphones to recover image quality by reducing the amount of noise of the raw image. Image denoising performance have increased during the last few years and recent methods based on Deep Convolutional Neural Networks (DCNNs) have provided very high scores [1] to the point of outreaching classical spatial and patch-based algorithms [2]. However, deploying AI-based algorithms on embedded devices poses many problems. The limited amount of memory available, power consumption and thermal dissipation are indeed critical for embedded battery powered platforms. The field of neuromorphic engineering, especially SNNs, is emerging as a new paradigm for the design of low-power and real-time information processing hardware [3]. The spike information coding used by SNNs enables sparse and event-based computation through the network. The combination of these properties may lead to more energy efficient hardware implementations of neural networks, allowing state-of-the-art AI algorithms to be executed on mobile platforms with a reduced power budget [4]. However, to achieve these energy gains while simultaneously reaching the level of performance of DCNNs, SNNs must be able to encode analog data with high precision using very compact codes, i.e. spike trains. In recent years, several training and conversion methods have been proposed to improve the accuracy of SNNs on large-scale machine learning tasks. To take advantage of better performance provided by supervised learning, several methods have been developed to convert ANNs, trained using standard schemes like backpropagation, into SNNs for event-driven inference [5][6]. The ANN-SNN conversion is based on the idea that firing rates of spiking neurons should match the activations of analog neurons. 
Rate-based conversion methods have achieved significant results over the last few years, thus reducing the accuracy gap with ANNs [7]. However, rate-based conversion methods have a major drawback since they require a large amount of timesteps to precisely match the activations of analog neurons. A new approach, called surrogate gradient learning, has been proposed to train SNNs directly in the spike domain using standard supervised learning algorithms [8]. Recent studies reported competitive performance on a series of static and dynamic datasets using surrogate gradient training [9]. In this paper, we will extend these previous works and study the trade off between accuracy and efficiency of SNNs for the specific and uncovered case of image denoising. This task is challenging for two reasons. First, as denoising is a regression task, the network has to predict a continuous value (i.e. the noise amplitude) for each pixel of the image. Moreover, state of the art results have been obtained with very deep networks (17 layers or more). In Section 2 we study two spike coding approaches and we formalize the trade-off between performance and activation sparsity in SNN. In Section 3 we propose, for the first time, an image denoising solution based on SNN. The network trained directly in the spike domain provides a level of performance close to the state of the art CNN based solution. In the last section, we conclude the paper and we discuss future work. ## 2 Neural coding Neural coding schemes convert input pixels into spikes that are transmitted to spiking neurons for information processing. Specifically, we are interested in the two complementary neural coding procedures called _encoding_ and _decoding_ that map analog values into train of spikes and its inverse. Two types of encoding and decoding schemes are studied: the coding with information conversion and the rate-coding. The following section presents an analysis of the neural conversion with LIF neurons. The comparison between the two coding scheme will be discussed in Sec. 2.2 ### Neural coding with LIF neurons In this first coding scheme, that we call _Neural information conversion_, a LIF neuron, located in the first layer (encoding) of the network is fed with a constant input \(x\) through \(T\) timesteps. We are interested in finding the encoding function that defines the relation between \(x\) and \(z(t)\), the spiking output. Let us first recall the equations that govern the dynamic of a LIF neuron in the discrete case [10]: \[V[n]=V[n-1]+\frac{1}{\tau}(-(V[n-1]-V_{reset})+x[n]) \tag{1}\] Whenever \(V[n]\geq V_{th}\), where \(V_{th}\) is the threshold voltage, the neuron emits a binary spike. The spike train representation of the analog input \(x\), is thus encoded in the following function: \[z(t)=\sum_{j=1}^{T}\delta(t-t_{j}) \tag{2}\] Where \(\delta(t)\) is the Dirac delta function and \(t_{j}\) are the spike-times indexed by \(j\). In the following we also consider that the membrane potential is completely discharged after a spike emission, \(V_{reset}=0\). Let us find the value of \(x\) that makes the neuron fires at each timestep, thus producing a constant firing rate of 1. The conditions that produce this firing pattern are shown below: \[\begin{cases}V[n-1]=0\\ V[n]\geq V_{th}\\ x[n]=x,\forall\,n\end{cases} \tag{3}\] Since a spike has to be generated at the current timestep \(n\), the membrane potential is greater or equal to \(V_{th}\). 
Moreover, with a firing rate of 1, the neuron resets its membrane potential at each timestep, after the spike emission. Therefore the membrane potential at the previous timestep, \(V[n-1]\), equals 0. Substituting conditions 3 into Eq. 1, we obtain: \[x\geq V_{th}.\tau \tag{4}\] When the input \(x\) is greater than or equal to \(V_{th}.\tau\), the LIF neuron fires a spike at each timestep. Following the same reasoning, let us find the values of \(x\) that produce a firing rate of 0.5. In such a case, the neuron periodically alternates between two states: [charge, fire&reset, charge, fire&reset,...]. The conditions that lead to this behaviour are shown below: \[\begin{cases}V[n-1]=x/\tau\\ V[n]\geq V_{th}\\ x[n]=x,\forall\,n\end{cases} \tag{5}\] Substituting conditions 5 into Eq. 1, we obtain: \[V[n]=V_{th}=x/\tau+\frac{1}{\tau}(x-x/\tau) \tag{6}\] The values of \(x\) that produce a firing rate of 0.5 are defined by: \[\frac{V_{th}}{2/\tau-1/\tau^{2}}\leq x\leq V_{th}.\tau \tag{7}\] The same approach can be used to determine the production conditions of the other firing rates of a LIF model, as depicted in Fig. 1.

Before proceeding with the analysis, it is interesting to focus on the spike patterns generated by a LIF neuron in more detail. We may wonder, for example, which spiking patterns a LIF neuron can produce and how many different firing rates are possible. Let us start with a simple example where the neuron codes information over \(T=8\) timesteps. Note that a sequence that leads to the generation of a spike (in the case of a constant input) must have the following format: [charge] during \(k\) timesteps, followed by [fire&reset]. Table 1 shows the spiking patterns generated by a LIF neuron when simulated for \(T=8\) timesteps. We can observe that the number of output codes does not depend on the input \(x\) but only on the value of \(T\) (simulation timesteps). In fact, only \(T+1\) output codes can be generated by a LIF neuron when stimulated with a constant input.

\begin{table}
\begin{tabular}{c c c c c c c c|c|c}
\hline \hline
\multicolumn{8}{c|}{Timesteps (\(1\ldots 8\))} & \(f_{r}\) & \(k\) \\
\hline
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1.0 & 0 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0.5 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0.25 & 2 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0.25 & 3 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0.125 & 4 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0.125 & 5 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0.125 & 6 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0.125 & 7 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 8 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Output codes of a LIF neuron stimulated with a constant value. \(T\) represents the number of timesteps (here \(T=8\)). \(f_{r}\) is the firing rate and \(k\) is the number of timesteps used for the charge phase before the first spike. The value 1 means that a spike has been generated on the corresponding timestep.

We have so far characterized the _encoding_ function, the relation between the analog input \(x\) and the spike pattern, \(z(t)\), generated at the output of the neuron. The inverse process, called _decoding_, aims at mapping the inverse function, that is, the reconstruction of the analog value (\(\hat{x}\)) from a spiking input.
To do so, we use the firing-rate of the neuron as the information carrying variable and express the decoded analog output as follows, where \(f_{r}\) denotes the firing rate: \[\hat{x}=f_{r}=\frac{1}{T}\sum^{T}z(t) \tag{8}\] The output codes obtained by simulating a LIF neuron are plotted in Fig. 1 as a function of the input value \(x\). As it can be observed, the conversion operated by a LIF neuron is highly non-uniform as it provides more quantization steps at amplitudes near \(V_{th}\) than at higher amplitudes. However, the quantization step sizes decrease while approaching \(V_{th}\). At the opposite, codes that carry a high \(f_{r}\) have large quantization step sizes. As an example and as shown in Fig. 1, a \(f_{r}\) of 0.5 will be generated by the neuron when \(x\in[1.33,2.0]\).

### Comparison between neural conversion and rate coding

To assess the performance of the neural conversion scheme, we use a set of 12 natural images [11] (Set12), to measure the accuracy of the quantizer. The Peak-Signal-to-Noise ratio (PSNR) is used as quality criterion. The pixel intensities of the test images are normalized in the interval \([1.0,2.0]\) to match the conversion range of the LIF neuron shown in Fig. 1. The normalized pixel intensities are fed (without noise) into a LIF neuron membrane for \(T\) timesteps. The spikes generated by each neuron are then collected and an estimate of each pixel value is computed using equation 8. We compare the neural coding of LIF neurons with the rate coding, a well known scheme that has been extensively used in the SNN community for coding dense information in the spike domain [12][13][14]. In the rate coding scheme we assume that spike trains are independent realizations of Poisson processes with rates \(r_{i}\), where the pixel intensity \(x_{i}\) is the firing probability, normalized between [0,1], at each timestep. Eq. 8 is also used to decode the spiking output. The average PSNR on Set12 is shown on the left side of Fig. 2. As we can observe, the PSNR increases quickly for the LIF coding scheme and saturates at \(T\sim 10\). Adding more timesteps to the conversion does not improve the image reconstruction. From the previous analysis, presented in Sec. 2.1, we know that increasing \(T\) adds more quantization intervals and therefore also increases the number of output codes that can be generated by a LIF neuron. However, as it can be observed from Fig. 1, the sizes of newly added quantized intervals decrease fast and vanish when \(x\) approaches \(V_{th}\). As a result, the non-uniform quantization scheme that emerges from the LIF neuron does not allow decreasing the distortion between the original and the quantized signals by adding more quantization bits, i.e. increasing \(T\). On the other hand, the rate conversion scheme does not set any limits on the accuracy since the PSNR increases with a logarithmic shape as a function of \(T\). Let us now study a property of great interest for a neural information coding scheme: the activity of the spiking neurons. This property is key for reducing computation costs, thus the energy consumption in networks of spiking neurons.

Figure 1: Firing-rate as a function of the input \(x\) for \(T=8\) (\(V_{th}=1.0,\tau=2.0\)). The thresholds for \(f_{r}=1.0\) and \(f_{r}=0.5\) and \(f_{r}=0.25\) are also shown.

Figure 2: PSNR (left) and \(\theta\) (right) as a function of the number of timesteps for the LIF neural conversion and rate coding schemes on the Set12 dataset. Each point of the curves represents an average over the number of pixels of all the dataset images. Here \(V_{th}=1\) and \(\tau=2\) for the LIF neurons.
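A compact way to check the conversion behaviour analyzed above is to simulate it directly. The Python sketch below implements the discrete LIF update of Eq. 1 for a constant input, collects the binary spike train of Eq. 2, and decodes it with the firing rate of Eq. 8, using the same settings as in Fig. 1 (\(V_{th}=1\), \(\tau=2\), \(V_{reset}=0\), \(T=8\)); sweeping \(x\) over \([1.0,2.0]\) reproduces the staircase of output codes.

```python
import numpy as np

def lif_encode(x, T=8, v_th=1.0, tau=2.0, v_reset=0.0):
    """Simulate Eq. 1 for a constant input x; return the binary spike train z (Eq. 2)."""
    v = 0.0
    z = np.zeros(T, dtype=int)
    for n in range(T):
        v = v + (1.0 / tau) * (-(v - v_reset) + x)   # membrane update
        if v >= v_th:                                # threshold crossing: fire ...
            z[n] = 1
            v = v_reset                              # ... and reset the membrane
    return z

def rate_decode(z):
    """Eq. 8: reconstruct x_hat as the firing rate over the T timesteps."""
    return z.sum() / len(z)

# Sweep constant inputs over the conversion range [1.0, 2.0] used for the images.
for x in np.linspace(1.0, 2.0, 6):
    z = lif_encode(x)
    print(f"x = {x:.2f}  spikes = {z.tolist()}  f_r = {rate_decode(z):.3f}")
```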
The activity of a neural network is defined as the average number of spikes generated by each neuron during \(T\) timesteps. Referring to the previous image quantization example, the activity is defined as follows: \[\theta=\frac{\sum_{i=0}^{n}\sum_{j=0}^{m}z_{i,j}(t)}{n\times m} \tag{9}\] Where \((n,m)\) is the size of the input image. Summing over \(z\), which is a \(T\times n\times m\) binary matrix, results in the total number of spikes generated by all the LIF neurons. The activity, \(\theta\), of the rate coding and LIF conversion schemes can be seen on the right side of Fig. 2. This figure shows that, as in any rate conversion scheme, the number of spikes increases almost linearly with \(T\). However, as we can observe from the PSNR curve shown in Fig. 2, the amount of information carried by each new spike in the neural conversion scheme saturates above \(T\sim 10\). The process of coding dense information into streams of spikes is key for SNNs and has been pointed out as one of the main reasons for the current performance gap between SNNs and ANNs. In the next section we investigate how these rules and properties show up in larger and more complex networks of spiking neurons.

## 3 Image denoising with spiking neurons

Our study of image denoising with spiking neurons is based on the DnCNN network proposed in [1]. We focus the study on Gaussian denoising with a known noise level. The considered network is composed of 17 convolutional layers. Activation functions are replaced with LIF neurons (\(V_{th}=1\), \(\tau=2\)). The input of the network is a noisy observation \(y=x+v\), where \(v\) is additive white Gaussian noise with zero mean and standard deviation \(\sigma\). As proposed in [1], we follow a residual learning formulation to train a mapping \(R(y)\sim v\) and recover the clean image from \(x=y-R(y)\). The averaged mean square error between the desired and the estimated residuals is used as the loss function. Training data for Gaussian gray-level image denoising are generated using the same method proposed in [1]. A dataset composed of 12 images (not included in the train set) is used for testing. Surrogate gradient learning [8] is used to train the SNNs. Denoising results and neuron activity are shown in Fig. 3. As we can observe, the network performance (PSNR) follows the same trend observed for information coding in Fig. 2. The LIF conversion scheme can provide competitive denoising performance with few timesteps, but PSNR saturates when \(T>10\). Rate coding could theoretically achieve the same PSNR as DnCNN, but at the cost of hundreds of timesteps. Fig. 4 illustrates the visual results of the coding methods on the C.man image. As can be seen, with only 7 timesteps the LIF conversion method provides better images compared with rate coding. The latter scheme would require a large number of timesteps to encode analog values with the precision needed for the denoising task.

## 4 Conclusion

In this paper we have presented an analysis, based on information coding, for SNNs and its application to image denoising. Our analysis showed that information coding at the neuron level can explain the performance at the network level. As future work, we aim at using our approach to guide the design of spiking neural models.
Our objective is to encode information with both low latency and high precision for further hardware neuromorphic implementation. Figure 4: Denoising results of the image “C.man” with noise level 25. Denoised images are shown in Fig. c and d for LIF conversion and rate coding, both with \(T=7\). Figure 3: Denoising PSNR (left) and \(\theta\) (right) as a function of the number of conversion timesteps for the LIF neural conversion scheme and the rate coding. The dotted line represent the performance of DnCNN for a noise level \(\sigma=25\).
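For completeness, the following Python sketch illustrates the two quantities used to evaluate the networks above: the activity \(\theta\) of Eq. 9 computed from a binary spike tensor, and the PSNR of a residual reconstruction \(\hat{x}=y-R(y)\). The spike tensor, the noise realization, and the stand-in residual estimate are random placeholders; a trained spiking DnCNN would provide \(R(y)\).

```python
import numpy as np

def activity(z):
    """Eq. 9: average number of spikes per neuron; z is a (T, n, m) binary tensor."""
    _, n, m = z.shape
    return z.sum() / (n * m)

def psnr(clean, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a clean image and its estimate."""
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
x = rng.random((64, 64))                    # stand-in clean image in [0, 1]
v = rng.normal(0.0, 25 / 255, x.shape)      # AWGN with sigma = 25 (8-bit scale)
y = x + v                                   # noisy observation

residual = 0.8 * v                          # placeholder for the network output R(y)
x_hat = y - residual                        # residual formulation: x_hat = y - R(y)
print(f"PSNR of x_hat: {psnr(x, x_hat):.2f} dB")

z = (rng.random((8, 64, 64)) < 0.3).astype(int)   # placeholder spike tensor, T = 8
print(f"activity theta: {activity(z):.2f} spikes per neuron")
```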
2306.08899
A comparative study for block chain applications in the MANET
MANET- Mobile Ad-hoc Networks are famous for their infrastructure-less arrangement for communication. In this network, nodes are self-organized and can act as router. They are battery operated and self-organizing. Block chain is a new concept from 2008 and researchers are trying the possible application of Block chain in many sectors including MANETs. This paper surveys the existing researches done in applying block chain in a MANET environment. Block chain is mainly used in MANETs for improving security while routing packets from one node to another. Some researchers have proposed trust models using block chain. This paper reviews some of the existing approaches where block chain is used in MANETs for routing the packets, creating trust models, and dealing with network partitioning problem and scalability problem. This paper acts as a review paper to study on block chain applications in MANET
Sangheethaa S
2023-06-15T07:06:19Z
http://arxiv.org/abs/2306.08899v1
# A Comparative Study for Block Chain Applications in The Manet

###### Abstract

Mobile Ad-hoc Networks (MANETs) are known for their infrastructure-less arrangement for communication. In such a network, nodes are self-organizing, battery operated, and can act as routers. Blockchain is a concept introduced in 2008, and researchers are exploring its possible applications in many sectors, including MANETs. This paper surveys existing research on applying blockchain in a MANET environment. Blockchain is mainly used in MANETs to improve security while routing packets from one node to another, and some researchers have proposed blockchain-based trust models. This paper reviews existing approaches in which blockchain is used in MANETs for routing packets, building trust models, and dealing with the network partitioning and scalability problems, and thus serves as a review of blockchain applications in MANETs.

Keywords: Blockchain, MANET, Routing, Trust models

## 1 Introduction

Mobile Ad hoc Networks (MANETs) [1] are a type of wireless ad hoc network that enables mobile devices to communicate with each other without relying on a fixed infrastructure or centralized control. In a MANET, each device acts as a node that can send, receive, and forward data packets to other nodes in the network. The nodes in a MANET are laptops, smartphones, or tablets equipped with a wireless interface for communication. These devices can form a network by establishing direct communication links with nearby nodes and by relaying data packets to nodes that are out of range. MANETs are particularly useful in situations where a fixed infrastructure is unavailable or unreliable, such as emergency and disaster response scenarios, military operations, and remote areas with no existing communication infrastructure. They are also useful in environments where devices need to communicate with each other without relying on centralized control, such as vehicular networks and Internet of Things (IoT) applications. However, MANETs face several challenges, such as limited bandwidth, battery life, and security threats. The limited bandwidth and battery life of mobile devices can affect the performance and reliability of the network, while the lack of centralized control and the dynamic nature of the network make it vulnerable to security threats such as eavesdropping, denial-of-service attacks, and packet drops by malicious nodes. Various research efforts have addressed these issues through routing, congestion control, and security mechanisms. This paper reviews existing research on applying blockchain in the MANET environment. The paper is organized as follows. Section 2 introduces blockchain, Section 3 surveys existing review and comparative studies in this area, Sections 4 to 9 review the research papers to date that combine blockchain and MANETs, and Section 10 concludes. Table 1 summarizes the research works discussed in this paper.

Table 1: Comparison of research works

| Paper | Use of blockchain for | Remarks |
| --- | --- | --- |
| [2] | Trust control | Use of PBFT is introduced. |
| [3] | Trust management | Introduces a new consensus model called the Delegated Proof of Trust mechanism. |
| [4] | Routing | Introduces an approach to calculate the reputation of a node and share it with others over the AODV protocol. |
| [5] | Network partitioning problem | Uses a DAG (Directed Acyclic Graph) structure in a permissioned blockchain. |
| [6] | Routing | Enhances the OLSR protocol to use reputation based on a blockchain ledger. |
| [7] | Growth problem | Uses a special genesis block to control growth and reduce disk space usage. |

## 2 Blockchain Background

Blockchain is a distributed ledger technology that enables secure, transparent, and tamper-resistant record-keeping of transactions or data. It was originally introduced in 2008 as the foundational technology for Bitcoin [2], a decentralized digital currency system, but it is now applied in various fields where the aim is to eliminate a third party. At its core, a blockchain is a digital ledger that records a series of transactions or data blocks in a chronological and immutable manner. Each block contains a hash of the previous block, forming a chain of blocks, hence the name "blockchain". Once a block is added to the chain, it cannot be modified or deleted without invalidating all subsequent blocks, which makes the ledger resistant to tampering and fraud. Blockchain technology [3] is based on a distributed network of nodes, where each node maintains a copy of the ledger and participates in the validation of new transactions or blocks. Transactions are validated by the nodes through a consensus mechanism, which ensures that all nodes agree on the validity of the transactions before they are added to the blockchain. Figure 1 shows the principles of blockchain technology. The consensus mechanisms used in public blockchains fall into three categories: Proof of Work, Proof of Stake, and Delegated Proof of Stake; details of these mechanisms can be found in research works such as [4]. Blockchain rests on complex cryptographic algorithms. The decentralized and transparent nature of blockchain technology offers several advantages [5], including increased security, transparency, and efficiency. It eliminates the need for intermediaries or central authorities, reducing costs and improving trust between parties. It also enables new forms of peer-to-peer transactions, smart contracts, and decentralized applications (DApps). Blockchain has found its place in supply chain management, logistics, healthcare data privacy, verification of the authenticity of educational credentials, and more.

## 3 Literature Review

Research has been done on the application of blockchain in MANETs, VANETs, and the newer class of networks called FANETs. These works focus on how blockchain technology can be applied to improve performance or security in mobile ad hoc networks, vehicular networks, or the Internet of Things. The authors of [6] give a comprehensive survey of the application of blockchain in vehicular networks: they provide a detailed review of blockchain-based VANETs, the security constraints to be considered, the open challenges, and the simulation tools that could be used to test blockchain in a VANET environment. In [7], the authors carry out a systematic comparison of applications of blockchain in vehicular networks. [8] gives a detailed comparison of approaches for incentive-based data forwarding in MANETs and approaches that use blockchain for data forwarding.
The authors of [9] give research directions and guidelines for using blockchain technologies in IoT, MANETs, and VANETs. The present paper reviews applications of blockchain in the MANET environment, specifically related to security, trust management, and scalability issues.

Figure 1: Principles of Blockchain (drawn using draw.io)

## 4 Using Blockchain Technology for MANET Security

The paper [10] explores the use of blockchain technology for trust control among the nodes of a MANET. The authors explain MANET security challenges and blockchain technology, and they also discuss the limitations of blockchain when applied to MANETs. They examine the disadvantages of using Proof of Work and Proof of Stake in a MANET environment: Proof of Work is computationally intensive, while Proof of Stake is challenging in a MANET because of its highly dynamic nature. Some blockchain technologies, such as Hyperledger, use PBFT (Practical Byzantine Fault Tolerance). This approach requires two out of three nodes to agree. In a MANET, scaling may be an issue due to the constant breaking of links, so PBFT may not be effective. The authors also analyzed applying blockchain-based concepts for trust management, which is discussed in the next section.

## 5 Blockchain-based Lightweight Trust Management in Mobile Ad-Hoc Networks

Blockchain-based lightweight trust management is a technique used in Mobile Ad-Hoc Networks (MANETs) [11] to establish trust between nodes in a decentralized and distributed manner. In MANETs, trust management is a crucial factor in ensuring secure communication among the nodes. Traditional centralized trust management systems are not suitable for MANETs, as they require a centralized authority to manage trust, which is not practical in a decentralized network; a decentralized approach is therefore needed. Blockchain-Based Lightweight Trust Management (BLTM) combines trust management techniques with blockchain technology to establish a secure and transparent trust model among nodes and thereby improve the security and reliability of communication. The BLTM protocol for MANETs consists of four phases, as shown in Figure 2: trust evaluation, blockchain-based consensus, block generation, and block maintenance. In the trust evaluation phase, each node evaluates the trustworthiness of its neighboring nodes based on parameters such as the packet forwarding rate, response time, and packet drop rate, and then assigns a trust value to each neighboring node based on this evaluation. In the blockchain-based consensus phase, the node shares its local blockchain with the neighboring nodes to reach a consensus on the trust values assigned to each node. The consensus mechanism used in BLTM is a lightweight mechanism that reduces the computational overhead and ensures that consensus is reached in a timely manner. It is named the Delegated Proof of Trust (DPoT) mechanism, works with the OLSR protocol, and was developed by adopting the DCFM scheme; DPoT uses validator and delegator nodes to achieve consensus. In the block generation phase, to create a block in a blockchain system it is necessary to determine what information will be included in the block and how it will be configured by the delegate node.
Once the transactions in the pool are collected into a block, the blockchain system creates a hash value from the transaction data using the SHA-256 algorithm and appends it to the block. The hash of the previous block is also included as data in the current block to link the blocks together, creating a chain. The block hash is required to follow a specific format, such as a hash signature that starts with 10 consecutive zeros. In a secure Mobile Ad-Hoc Network (MANET), a blockchain used in this way helps ensure trust between nodes. Each block in the blockchain contains a set of transaction data and metadata, which includes a timestamp, transaction hash, delegate ID, and nonce. When a transaction is hashed, the transaction generator ID and the related transaction values are included, along with the delegate ID. This helps to ensure that block transactions are trustworthy and cannot be repudiated by any of the participating nodes. The first block in the blockchain, known as the "genesis block", is created with an empty list of transactions when the network is initially formed. (A minimal code sketch of this hash linking is given after Section 7 below.)

## 6 Reputation-based Routing in MANETs using Blockchain

Maqsood Ahamed et al. proposed a reputation-based routing protocol that uses blockchain technology in MANETs [12]. The protocol is implemented over the AODV [14] protocol and redefines the AODV cost calculation: a new link cost between nodes is computed from a reputation score, and the most reputed path is selected for routing in order to avoid malicious nodes on the route. Reputation is maintained in the blockchain and validated by a subset of the network nodes called miners. A reputation value is assigned to each node based on its behavior when participating in network operations. A miner monitors the transactions made by the nodes in its vicinity and classifies them as good or bad; these transactions are combined to form a block. The authors claim that the use of blockchain in routing has two advantages: (1) it provides an immutable record of the behavior of the nodes, and (2) it acts as a gauge to validate the credibility of the miners by increasing the difficulty level [12]. However, the paper does not elaborate on the consensus mechanism followed for agreeing upon a block, and the need for such an immutable record in a MANET environment is not justified.

## 7 Blockgraph

Blockgraph [13] is a blockchain-based framework designed for Mobile Ad hoc Networks (MANETs). The goal of Blockgraph is to use a DAG (Directed Acyclic Graph) based structure to avoid the network partitioning problem. The Blockgraph framework consists of three components: the consensus system, the block management system, and the group management system [13]. The consensus system is a vote-based consensus mechanism, which is possible because Blockgraph is a permissioned network. It has two subsystems: leader election and log replication. Every network partition gets a new leader, all nodes in the network are assumed to be trusted, and the block is propagated to all peers by the leader node. The block management system takes care of the local block structure, including the creation, validation, and ordering of blocks and the recovery of missing blocks. The group management system runs in every node and detects any change in the topology; groups are re-formed as a result of topology changes. The main objective of this paper is to use Blockgraph to solve the issue of network partitioning by using a permissioned blockchain.
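To make the hash linking and trust transactions described in Sections 5-7 concrete, the following is a minimal, self-contained Python sketch. It is an illustration only, not code from any of the surveyed papers: the field names (`observer`, `subject`, `trust_score`) and the two-zero difficulty prefix are hypothetical choices.

```python
import hashlib
import json
import time

# The scheme reviewed above requires 10 leading zeros; 2 keeps this toy example fast.
DIFFICULTY_PREFIX = "00"

def block_hash(block: dict) -> str:
    # SHA-256 over the serialised block content (which includes the previous block's hash).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index: int, transactions: list, previous_hash: str) -> dict:
    block = {
        "index": index,
        "timestamp": time.time(),
        "transactions": transactions,      # e.g. trust observations about neighbouring nodes
        "previous_hash": previous_hash,
        "nonce": 0,
    }
    # Try nonces until the hash matches the required prefix.
    while not block_hash(block).startswith(DIFFICULTY_PREFIX):
        block["nonce"] += 1
    return {**block, "hash": block_hash(block)}

# Genesis block: created with an empty transaction list when the network is formed.
chain = [make_block(0, [], previous_hash="0" * 64)]

# A later block carrying hypothetical trust observations made by a delegate node.
observations = [
    {"observer": "node_A", "subject": "node_B", "trust_score": 0.92},
    {"observer": "node_A", "subject": "node_C", "trust_score": 0.35},
]
chain.append(make_block(1, observations, previous_hash=chain[-1]["hash"]))

# The link that makes tampering detectable: each block stores the previous block's hash.
assert chain[1]["previous_hash"] == chain[0]["hash"]
print([b["hash"][:12] for b in chain])
```

Because every block embeds the hash of its predecessor, changing any earlier transaction changes that block's hash and invalidates all later blocks, which is the immutability property the trust and reputation schemes above rely on.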
Figure 2: Phases of the BLTM Protocol

## 8 Blockchain Technology to Enhance Security in MANETs

In [14], a framework is proposed for using blockchain technology to enhance security in MANETs. The approach modifies the existing Optimized Link State Routing (OLSR) protocol in MANETs to include blockchain and reputation. OLSR uses Multipoint Relays (MPRs) as relay nodes in routing; here, the same nodes are used to share the blockchain with other MPR nodes. The blockchain managed by the MPRs is called the MPR blockchain. It is used to calculate the credibility of other nodes while selecting a route, thus avoiding malicious nodes.

## 9 Framework for Supporting Connectivity of VANET/MANET

The paper [15] proposes a framework for supporting the connectivity of VANET/MANET network nodes and elastic, software-configurable security services using a blockchain with a floating genesis block. A hindrance to using blockchains in MANETs/VANETs is the need to store the blockchain, which consumes disk space as the chain grows [14]. The authors propose a solution to this hindrance based on fixing blocks: a fixing block stores the details of the initial status of the system, so it can act as the genesis block for the next chain. These floating genesis blocks are digitally signed by trusted nodes to prevent other cyber-attacks. The article also gives a comparison table of existing approaches to solving the growth problem of blockchains.

## 10 Conclusion

Blockchain models use complex cryptographic mechanisms for verification and trust management, but MANET devices are battery operated and cannot afford the computationally intensive operations that blockchain technology relies on to secure its operations. When blockchain is used in a MANET environment, there is therefore a tradeoff between security level, performance, and complexity. This paper discussed the existing approaches to applying blockchain technology in MANETs, especially for routing, trust management, and scalability issues, analyzed the solutions given by various researchers, and reviewed the use of blockchain in MANETs. Future work will be to propose a framework for using blockchain in a MANET environment that takes these tradeoffs into account.
2302.05686
A High-dimensional Convergence Theorem for U-statistics with Applications to Kernel-based Testing
We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with sample size $n$. We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and asymmetric distribution. Our bounds are valid for any finite $n$ and $d$, independent of individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, MMD and KSD, whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with $d$ and the bandwidth.
Kevin H. Huang, Xing Liu, Andrew B. Duncan, Axel Gandy
2023-02-11T12:49:46Z
http://arxiv.org/abs/2302.05686v3
# A High-dimensional Convergence Theorem for U-statistics ###### Abstract We prove a convergence theorem for U-statistics of degree two, where the data dimension \(d\) is allowed to scale with sample size \(n\). We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and asymmetric distribution. Our bounds are valid for any finite \(n\) and \(d\), independent of individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, MMD and KSD, whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with \(d\) and the bandwidth. ## 1 Introduction We consider a one-dimensional U-statistic of degree two built on \(n\) i.i.d. data points in \(\mathbb{R}^{d}\). Numerous estimators can be formulated as a U-statistic: Modern applications include high-dimensional change-point detection (Wang et al., 2022), sensitivity analysis of algorithms (Gamboa et al., 2022) and convergence guarantees for random forests (Peng et al., 2022). The asymptotic theory of U-statistics is well-established in the classical setting, where \(d\) is fixed and small relative to \(n\) (e.g. Chapter 5 of Serfling (1980)). Classical theory shows that the large-sample asymptotic of a U-statistic depends on its martingale structure and moments: For U-statistics of degree two, this reduces to the notion of _degeneracy_, i.e. whether the variance of a certain conditional mean is zero. Non-degenerate U-statistics are shown to have a Gaussian limit, whereas degenerate ones converge to an infinite sum of weighted chi-squares. However, these results fail to apply to the modern context of high-dimensional data, where \(d\) is of a comparable size to \(n\). The key issue is that the moment terms, which determine degeneracy, may scale with \(d\). Existing efforts on high-dimensional results either focus on U-statistics of a growing degree (Song et al., 2019; Chen and Kato, 2019) and of growing output dimension (Chen, 2018) or rely on very specific data structures (Chen and Qin, 2010; Yan and Zhang, 2022). In particular, these articles focus on a comparison to some Gaussian limit in high dimensions, and the effect of moments on a departure from Gaussianity has largely been ignored. The practical motivation for our work stems from distribution tests, which typically employ U-statistics as a test statistic. In the machine learning community, it has been empirically observed that the power of kernel-based distribution tests can deteriorate in high dimensions, depending on hyperparameter choices and the class of alternatives (Reddi et al., 2015; Ramdas et al., 2015). A theoretical analysis in the most general case has not been possible, due to the lack of a general convergence result for high-dimensional U-statistics. In the statistics community, there are similar interests in analysing U-statistics used in mean testing of high-dimensional data (e.g. Chen and Qin (2010); Wang et al. (2015)). All existing results, to our knowledge, are limited by very specific data assumptions and a focus on obtaining Gaussian limits. 
In this paper, we prove a general convergence theorem for U-statistics of degree two, which holds in the high-dimensional setting and under very mild assumptions on the data. We observe a high-dimensional analogue of the classical behaviour: Depending on a moment ratio, the limiting distribution of U-statistics can take either the non-degenerate Gaussian limit, the degenerate limit or an intermediate distribution. Crucially, this happens _regardless of_ the statistic's degeneracy, as defined in the classical sense. We provide error bounds that are finite-sample valid and _dimension-independent_ under a mild assumption. In the context of kernel-based distribution tests, we show that our results hold for _Maximum Mean Discrepancy_ (MMD) and for _(Langevin) Kernelized Stein Discrepancy_ (KSD) under some natural conditions. We investigate several examples under Gaussian mean-shift - a setting purposely chosen to be as simple as possible to obtain good intuitions, while already capturing a rich amount of complex behaviours. Our theory correctly predicts the high-dimensional behaviour of the test power with a wider variance than classical results and, perhaps surprisingly, potential asymmetry (see Fig. 1 for one such example). Our results enable us to characterise such behaviours based on the size of \(d\) and hyperparameter choices. ### Overview of results Given some i.i.d. data \(\{\mathbf{X}_{i}\}_{i=1}^{n}\) drawn from a distribution \(R\) on \(\mathbb{R}^{d}\) and a symmetric measurable function \(u:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), the goal is to estimate the quantity \(D\coloneqq\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})]\). The U-statistic provides an unbiased estimator, defined as \[D_{n}\ \coloneqq\ \tfrac{1}{n(n-1)}\sum\nolimits_{1\leq i\neq j\leq n}u(\mathbf{X}_{i},\mathbf{X}_{j}). \tag{1}\] Our main result is Theorem 2. Loosely speaking, it says that as \(n,d\to\infty\), the statistic \(D_{n}\) converges in distribution to a quadratic form of Gaussians: \[D_{n}\ \xrightarrow{d}\ W+Z+D\, \tag{2}\] where \(W\) is some infinite sum of weighted and centred chi-squares and \(Z\) is some Gaussian.

Figure 1: Behaviour of \(\mathbb{P}(X>t)\) for \(X=D_{n}\), a non-degenerate U-statistic, versus \(X\) being different theoretical limits. _Left._ KSD with RBF kernel, \(n=50\) and \(d=2000\). _Right._ MMD with linear kernel, \(n=50\) and \(d=1000\). The left plot shows that \(\mathbb{P}(D_{n}>t)\) _disagrees_ with the non-degenerate limit from known classical results but aligns with the degenerate limit from ours (moment-matched by a Gamma variable – discussed in Section 3.2). The right plot is when the limit predicted by our result can be computed exactly as a shifted-and-rescaled chi-square and shows asymmetry, which confirms a departure from Gaussianity. See the last paragraph of Section 4.3 and Appendix B for simulation details.

Define \(\sigma_{\mathrm{full}}^{2}\coloneqq\text{Var}[u(\mathbf{X}_{1},\mathbf{X}_{2})]\), \(\sigma_{\mathrm{cond}}^{2}\coloneqq\text{Var}\big{[}\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})|\mathbf{X}_{2}]\big{]}\) and the moment ratio \(\rho_{d}\coloneqq\sigma_{\mathrm{full}}/\sigma_{\mathrm{cond}}\), and recall that the classical notion of degeneracy is defined by \(\sigma_{\mathrm{cond}}=0\). We next observe that in (2), \(W+D\) is closely related to the classical degenerate limit, whereas \(Z+D\) gives exactly the classical non-degenerate limit. It turns out that, up to a mild assumption, the type of asymptotic distribution of \(D_{n}\) is _completely determined_ by the ratio \(\rho_{d}\). This is reminiscent of the classical result, where the notion of degeneracy, i.e. whether \(\sigma_{\mathrm{cond}}=0\), determines the limit of \(D_{n}\).
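As a concrete illustration of the estimator in (1), here is a minimal NumPy sketch that evaluates the degree-two U-statistic \(D_{n}\) for a user-supplied symmetric function \(u\). The particular choice of \(u\) and of the data below (a dot-product summand on Gaussian data) is a hypothetical example for illustration, not one taken from the paper.

```python
import numpy as np

def u_statistic(X: np.ndarray, u) -> float:
    """D_n = (1 / (n(n-1))) * sum_{i != j} u(X_i, X_j), as in (1)."""
    n = X.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += u(X[i], X[j])
    return total / (n * (n - 1))

# Hypothetical example: u(x, y) = x^T y, so D = E[u(X_1, X_2)] = ||E[X_1]||_2^2.
rng = np.random.default_rng(0)
X = rng.normal(loc=0.1, scale=1.0, size=(50, 200))  # n = 50 points in d = 200 dimensions
print(u_statistic(X, lambda x, y: float(x @ y)))    # should be close to 200 * 0.1^2 = 2
```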
The difference in high dimensions is that \(\sigma_{\mathrm{full}}\) and \(\sigma_{\mathrm{cond}}\) may scale _differently_ with \(d\). Even if \(\sigma_{\mathrm{cond}}\neq 0\), \(\rho_{d}\) can grow to infinity as \(d\) grows, causing a non-degenerate \(D_{n}\) to behave like a degenerate U-statistic. We show that, depending on \(\rho_{d}\), (2) becomes \[D_{n}\ \xrightarrow{d}W+D\quad\text{for }\rho_{d}=\omega(n^{1/2})\qquad \quad\text{and}\qquad D_{n}\ \xrightarrow{d}Z+D\quad\text{for }\rho_{d}=o(n^{1/2})\.\] The second result is the classical Berry-Esseen bound for U-statistics, while the first result is new. It recovers the classical degenerate limit as a special case but also applies to very general U-statistics in high dimensions regardless of degeneracy. The paper is organised as follows. Section 2 provides definitions and a sketch-of-intuition on the role of moment terms in the limiting behaviour of \(D_{n}\). Section 3 presents the main results along with a proof overview in Section 3.3. Section 4.2 shows that these results apply to MMD and KSD under some natural conditions and Section 4.3 studies the Gaussian mean-shift case in detail. ### Related literature _Convergence results for U-statistics._ Existing high-dimensional results focus either on a different setting or on showing asymptotic normality under very specific assumptions on data; some references are provided at the start of this section. The results that resemble our work more closely are finite-sample bounds for classical degenerate U-statistics. Those works focus on providing bounds under conditions on specific eigenvalues of a spectral decomposition of \(D_{n}\), and we defer a list of references to Remark 1. Among them, Yanushkevichiene (2012) provides a rate \(O(n^{-1/12})\) under perhaps the least stringent assumption on eigenvalues, but the error is still pre-multiplied by the inverse square-root of the largest eigenvalue. These eigenvalues are intractable and yet depend on \(d\) through the data distribution, which make them hard to apply to high-dimensional settings. In the _classical_ setting where \(d\) is fixed, a recent work by Bhattacharya et al. (2022) proves a Gaussian-quadratic-form limit similar to ours for a random quadratic polynomial, which includes a simple U-statistic as a special case. However, their results are asymptotic and in particular do not identify a parameter that leads to the phase transition. Our finite-sample results require a very different proof technique and show how a moment ratio governs the transition. _High-dimensional power analysis for MMD and KSD._ Some recent work has investigated the asymptotic behaviour of \(D_{n}\) for MMD. Yan and Zhang (2022) prove a convergence result under a specific data model and kernel choice, so that \(u(\mathbf{x},\mathbf{y})=g(\|\mathbf{x}-\mathbf{y}\|_{2})\) for some function \(g:\mathbb{R}\to\mathbb{R}\) and \(\|\,\bullet\,\|_{2}\) being the vector norm. The dimension-independence of \(g\) enables a Taylor expansion argument reminiscent of delta method and therefore gives a Gaussian limit. Such structures are not assumed in our work. A related work of Gao and Shao (2021) provides a finite-sample bound under more general conditions. The results show asymptotic normality of a studentised version of \(D_{n}\) rather than \(D_{n}\) itself, and the error bound is only valid if a moment ratio, analogous to excess kurtosis, vanishes with \(d\) (see their Theorem 13). 
Interestingly, this effect is also observed in our results for much more general settings: In Appendix A, we discuss when the infinite sum of chi-squares is guaranteed to be _non-Gaussian_, and one situation is precisely when the excess kurtosis does _not_ vanish. Another recent line of work (Kim and Ramdas, 2020; Shekhar et al., 2022) focuses on a studentised \(D_{n}\) that is modified to exclude half of the terms. They show dimension-agnostic normality results at the cost of not using the full U-statistic \(D_{n}\). ## 2 Setup and motivation We use the asymptotic notations \(o,O,\Theta,\omega,\Omega\) defined in the usual way (see e.g. Chapter 3 of Cormen et al. (2009)) for the limit \(n\to\infty\), where the dimension is allowed to depend on \(n\); we make the \(n\)-dependence explicit in the dimension \(d_{n}\) whenever such asymptotics are considered. ### Moment terms in high dimensions Consider a U-statistic \(D_{n}\) as defined in (1) with respect to \((R,u)\) with mean \(D=\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})]\). For \(\nu\geq 1\), denote the \(L_{\nu}\) norms by \(\|\,\bullet\,\|_{L_{\nu}}\coloneqq\mathbb{E}[|\,\bullet\,|^{\nu}]^{1/\nu}\). The \(\nu\)-th central moment of \(D_{n}\) is bounded from above and below in terms of two types of moment terms (see Lemma 33 in the appendix): \[M_{\mathrm{cond};\nu}\coloneqq\big{\|}\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})|\mathbf{X}_{2}]-\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})]\big{\|}_{L_{\nu}}\,,\qquad M_{\mathrm{full};\nu}\coloneqq\big{\|}u(\mathbf{X}_{1},\mathbf{X}_{2})-\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})]\big{\|}_{L_{\nu}}.\] In the special case \(\nu=2\), the definitions from Section 1.1 imply \(\sigma_{\mathrm{cond}}=M_{\mathrm{cond};2}\), \(\sigma_{\mathrm{full}}=M_{\mathrm{full};2}\) and \(\rho_{d}=\sigma_{\mathrm{full}}\,/\,\sigma_{\mathrm{cond}}\). The fact that these moments may scale with \(d\) has a significant effect on convergence results: For example, bounds of the form \(\frac{\mathrm{moment}}{f(n)}\) for some increasing function \(f\) of \(n\) are no longer guaranteed to be small. This is yet another effect of the "curse of dimensionality". For U-statistics, the classical Berry-Esseen result (see e.g. Theorem 10.3 of Chen et al. (2011)) says that, if \(\sigma_{\mathrm{cond}}>0\), then for a normal random variable \(Z\sim\mathcal{N}(D,4n^{-1}\sigma_{\mathrm{cond}}^{2})\) and \(\nu\in(2,3]\), we have \[\sup_{t\in\mathbb{R}}\Big{|}\mathbb{P}\Big{(}\tfrac{\sqrt{n}}{\sigma_{\mathrm{cond}}}D_{n}<t\Big{)}-\mathbb{P}\Big{(}\tfrac{\sqrt{n}}{\sigma_{\mathrm{cond}}}Z<t\Big{)}\Big{|}\ \leq\ \tfrac{6.1M_{\mathrm{cond};\nu}^{\nu}}{n^{(\nu-2)/2}\sigma_{\mathrm{cond}}^{\nu}}+\tfrac{(1+\sqrt{2})\rho_{d}}{2(n-1)^{1/2}}. \tag{3}\] Indeed, the error bound in the classical Berry-Esseen result is an increasing function of \(n^{-1/2}\rho_{d}=\sigma_{\mathrm{full}}/(n^{1/2}\sigma_{\mathrm{cond}})\), which is not guaranteed to be small as \(d\) grows. The ratio \(M_{\mathrm{cond};\nu}/\sigma_{\mathrm{cond}}\) also appears in classical error bounds. However, we do _not_ focus on how this ratio scales, since it appears in Berry-Esseen bounds even for sample averages.
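The moment terms above are expectations and can therefore be approximated by Monte Carlo for any given \(u\) and sampling distribution. Below is a minimal sketch (NumPy; the Gaussian data and dot-product \(u\) are hypothetical choices, not the paper's) that estimates \(\sigma_{\mathrm{full}}\), \(\sigma_{\mathrm{cond}}\) and the ratio \(\rho_{d}\); the conditional mean \(\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})\,|\,\mathbf{X}_{2}]\) is itself estimated by an inner Monte Carlo average.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 200
mu = np.full(d, 0.1)
sample = lambda m: rng.normal(size=(m, d)) + mu   # hypothetical data distribution: N(mu, I_d)
u = lambda x, y: x @ y                            # hypothetical summand u(x, y) = x^T y

# sigma_full: standard deviation of u(X_1, X_2) over independent pairs.
X1, X2 = sample(5000), sample(5000)
vals = np.array([u(x1, x2) for x1, x2 in zip(X1, X2)])
sigma_full = vals.std()

# sigma_cond: standard deviation of the conditional mean E[u(X_1, X_2) | X_2],
# estimated by an inner Monte Carlo average over fresh copies of X_1.
X_inner = sample(2000)
cond_means = np.array([(X_inner @ x2).mean() for x2 in sample(1000)])  # X_inner @ x2 gives u(x1, x2) row-wise
sigma_cond = cond_means.std()

rho_d = sigma_full / sigma_cond
print(f"sigma_full = {sigma_full:.2f}, sigma_cond = {sigma_cond:.2f}, rho_d = {rho_d:.2f}")
```

In this hypothetical setting \(\sigma_{\mathrm{full}}\) grows with \(d\) while \(\sigma_{\mathrm{cond}}\) stays of order one, so the estimated \(\rho_{d}\) grows with the dimension, which is the scaling behaviour the discussion above is concerned with.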
Error bounds in our main theorem will depend on similar ratios, and for our theorem to imply a convergence theorem, the following assumption is required: **Assumption 1**.: _There exists some \(\nu\in(2,3]\) and some constant \(C<\infty\) such that for all \(n\) and \(d\), we have the uniform bounds \(\frac{M_{\mathrm{full};\nu}}{\sigma_{\mathrm{full}}}\leq C\) and \(\frac{M_{\mathrm{cond};\nu}}{\sigma_{\mathrm{cond}}}\leq C\)._ ### Sketch of intuition We motivate our results by noting that the variance of \(D_{n}\) defined in (1) satisfies \[\text{Var}[D_{n}] =O\Big{(}\frac{\mathbb{E}[(u(\mathbf{X}_{1},\mathbf{X}_{2})-D)(u( \mathbf{X}_{1},\mathbf{X}_{3})-D)]}{n}+\frac{\mathbb{E}[(u(\mathbf{X}_{1}, \mathbf{X}_{2})-D)(u(\mathbf{X}_{1},\mathbf{X}_{2})-D)]}{n(n-1)}\Big{)}\] \[=O\big{(}\frac{\sigma_{\mathrm{cond}}^{2}}{n}+\frac{\sigma_{ \mathrm{full}}^{2}}{n(n-1)}\big{)}\.\] To study the asymptotic distribution of \(D_{n}\), we need to understand how its asymptotic variance behaves as \(n\) and \(d\) grow. Suppose we are in the classical _non-degenerate_ setting, where \(d\) is fixed and \(\sigma_{\mathrm{cond}}>0\). The dominating term in \(\mbox{Var}[D_{n}]\) is \(O(n^{-1}\sigma_{\mathrm{cond}}^{2})\). The contribution of the \(\sigma_{\mathrm{full}}^{2}\) term is small, i.e. the effect of the variance of each individual summand \(u(\mathbf{X}_{1},\mathbf{X}_{2})\) is negligible. In fact, we can approximate \(D_{n}\) by replacing each of the second argument in the summand by an independent copy \(\mathbf{X}^{\prime}_{j}\) of \(\mathbf{X}_{j}\) and applying CLT for an empirical average: \[D_{n}\ =\ \frac{1}{n(n-1)}\sum\nolimits_{1\leq i\neq j\leq n}u( \mathbf{X}_{i},\mathbf{X}_{j}) \approx\frac{1}{n}\sum\nolimits_{i=1}^{n}\Big{(}\frac{1}{n-1} \sum\nolimits_{j=1}^{n}u(\mathbf{X}_{i},\mathbf{X}^{\prime}_{j})\Big{)}\] \[\approx\frac{1}{n}\sum\nolimits_{i=1}^{n}\mathbb{E}[u(\mathbf{X }_{i},\mathbf{X}^{\prime}_{j})|\mathbf{X}_{i}]\ \approx\ \mathcal{N}\big{(}D\,,\ \frac{4\sigma_{\mathrm{cond}}^{2}}{n}\big{)}\.\] This argument underpins results on CLT for non-degenerate U-statistics. In the classical degenerate setting, however, \(d\) is still fixed but \(\sigma_{\mathrm{cond}}=0\), and the above argument fails to apply. Instead, one considers a spectral decomposition \(u(\mathbf{x},\mathbf{y})=\sum_{k=1}^{\infty}\lambda_{k}\phi_{k}(\mathbf{x}) \phi_{k}(\mathbf{y})\) for some eigenvalues \(\{\lambda_{k}\}_{k=1}^{\infty}\) and eigenfunctions \(\{\phi_{k}\}_{k=1}^{\infty}\), and compares the distribution of \(D_{n}\) to a weighted sum of chi-squares: \[D_{n} =\frac{1}{n(n-1)}\sum\nolimits_{1\leq i\neq j\leq n}\sum\nolimits_ {k=1}^{\infty}\lambda_{k}\phi_{k}(\mathbf{X}_{i})\phi_{k}(\mathbf{X}_{j})\] \[\approx\sum\nolimits_{k=1}^{\infty}\lambda_{k}\Big{(}\frac{1}{n} \sum\nolimits_{i=1}^{n}\phi_{k}(\mathbf{X}_{i})\Big{)}\Big{(}\frac{1}{n}\sum \nolimits_{j=1}^{n}\phi_{k}(\mathbf{X}_{j})\Big{)}\] \[\approx\frac{1}{n}\sum\nolimits_{k=1}^{\infty}\lambda_{k}\left( \sqrt{\mbox{Var}[\phi_{k}(\mathbf{X}_{1})]}\,\xi_{k}+\mathbb{E}[\phi_{k}( \mathbf{X}_{1})]\right)^{2}\,,\] where \(\xi_{k}\)'s are i.i.d. standard normals. The limiting distributions in both settings enable one to construct consistent confidence intervals for \(D_{n}\) and study \(\mathbb{P}(D_{n}>t)\). The key takeaway is that the asymptotic distribution of \(D_{n}\) depends on the relative sizes of \(\sigma_{\mathrm{cond}}^{2}\) and \((n-1)^{-1}\sigma_{\mathrm{full}}^{2}\). 
This comparison reduces to degeneracy when \(d\) is fixed, but is no longer so when \(d\) grows. In the high-dimensional setting, \(\sigma_{\mathrm{cond}}\) and \(\sigma_{\mathrm{full}}\) can scale with \(d\) at _different orders_, making it possible for the ratio \(\rho_{d}\) to vary with \(d\). In particular, a non-degenerate U-statistic with \(\sigma_{\mathrm{cond}}>0\) may still satisfy \(\rho_{d}=\omega(n^{1/2})\), i.e. \((n-1)^{-1}\sigma_{\mathrm{full}}^{2}/\sigma_{\mathrm{cond}}^{2}\to\infty\) as \(n\) and \(d\) grow. In this case, the classical argument for a non-degenerate Gaussian limit would fail and a degenerate limit would dominate. This is exactly what we observe in the practical applications in Section 4.3, and motivates the need for results that explicitly addresses the high-dimensional setting. ## 3 Main results The main result presented in this section is a finite-sample bound that compares \(D_{n}\) to a quadratic form of infinitely many Gaussians. The limiting distribution is a sum of the non-degenerate limit and a variant of the degenerate limit, and subject to Assumption 1, the error bound is _independent_ of \(\rho_{d}\). In the case \(\rho_{d}=o(n^{1/2})\), the non-degenerate limit dominates and our result agrees with the Gaussian limit given by a Berry-Esseen theorem for U-statistics. However when dimension is high such that \(\rho_{d}=\omega(n^{1/2})\), the degenerate limit dominates and implies a _larger asymptotic variance_. We also discuss how to obtain consistent distribution bounds that reflect the effect of a large dimension \(d\) on the original statistic \(D_{n}\). Our results rest on a functional decomposition assumption. For a sequence of \(\mathbb{R}^{d}\to\mathbb{R}\) functions \(\{\phi_{k}\}_{k=1}^{\infty}\) and a sequence of real values \(\{\lambda_{k}\}_{k=1}^{\infty}\), we define the \(L_{\nu}\) approximation error for \(\nu\geq 1\) and a given \(K\in\mathbb{N}\) as \[\varepsilon_{K;\nu}\ \coloneqq\ \big{\|}\sum_{k=1}^{K}\lambda_{k}\phi_{k}( \mathbf{X}_{1})\phi_{k}(\mathbf{X}_{2})-u(\mathbf{X}_{1},\mathbf{X}_{2})\big{\|} _{L_{\nu}}\.\] **Assumption 2**.: _There exists some \(\nu\in(2,3]\) such that, for any fixed \(n\) and \(d\), as \(K\to\infty\), the \(L_{\nu}\) approximation error \(\varepsilon_{K;\nu}\to 0\) for some choice of \(\{\phi_{k}\}_{k=1}^{\infty}\) and \(\{\lambda_{k}\}_{k=1}^{\infty}\)._ **Remark 1**.: (i) If Assumption 2 holds for some \(\nu>3\), it certainly holds for \(\nu=3\). We restrict our focus to \(\nu\in(2,3]\) for simplicity. (ii) Assumption 2 always holds for \(\nu=2\) by the spectral decomposition of an operator on \(L_{2}(\mathbb{R}^{d},R)\). For degenerate U-statistics with \(d\) fixed, the corresponding orthonormal eigenbasis of functions and eigenvalues are used to prove asymptotic results (see Section 5.5.2 of Serfling (1980)) and finite-sample bounds (Bentkus and Gotze, 1999, Gotze and Tikhomirov, 2005, Yanushkevichene, 2012). In fact, these finite-sample bounds are dependent on the specific \(\lambda_{k}\)'s, making the results hard to apply. Instead, we forgo orthonormality at the cost of a convergence slightly stronger than \(L_{2}\). This allows for a much more flexible choice of \(\{\phi_{k},\lambda_{k}\}_{k=1}^{\infty}\) and is particularly well-suited for a kernel-based setting; see Remark 16 for a discussion. Before stating the results, we introduce some more notations. 
For every \(K\in\mathbb{N}\), we define a diagonal matrix of the first \(K\) "eigenvalues" and a concatenation of the first \(K\) "eigenfunctions" by \[\Lambda^{K}\ \coloneqq\ \text{diag}\{\lambda_{1},\dots,\lambda_{K}\}\ \in\mathbb{R}^{K\times K}\, \qquad\phi^{K}(x)\ \coloneqq\ (\phi_{1}(x),\dots,\phi_{K}(x))^{\top}\ \in\mathbb{R}^{K}. \tag{4}\] We denote the mean and variance of \(\phi^{K}(\mathbf{X}_{1})\) by \(\mu^{K}\coloneqq\mathbb{E}[\phi^{K}(\mathbf{X}_{1})]\) and \(\Sigma^{K}\coloneqq\text{Cov}[\phi^{K}(\mathbf{X}_{1})]\). ### Result for the general case Let \(\eta_{i}^{K}\), with \(i,K\in\mathbb{N}\), be i.i.d. standard Gaussian vectors in \(\mathbb{R}^{K}\). In the general case, the limiting distribution is given in terms of a quadratic form of Gaussians, defined by \[U_{n}^{K}\coloneqq\frac{1}{n(n-1)}\sum\nolimits_{1\leq i\neq j\leq n}(\eta_{i }^{K})^{\top}(\Sigma^{K})^{1/2}\Lambda^{K}(\Sigma^{K})^{1/2}\eta_{j}^{K}+ \frac{2}{n}\sum\nolimits_{i=1}^{n}(\mu^{K})^{\top}\Lambda^{K}(\Sigma^{K})^{1/2 }\eta_{i}^{K}+D.\] We also denote the dominating moment terms by \[\sigma_{\max}\ \coloneqq\ \max\{\sigma_{\mathrm{full}},(n-1)^{1/2}\sigma_{ \mathrm{cond}}\}\,\quad M_{\max;\nu}\ \coloneqq\ \max\{M_{\mathrm{full};\nu},(n-1)^{1/2}M_{\mathrm{cond};\nu}\}\.\] We are ready to state our main result - a finite-sample error bound that compares \(D_{n}\) to the limiting distribution of \(U_{n}^{K}\), where the error is given in terms of \(n\) and the moment terms. **Theorem 2**.: _There exists a constant \(C>0\) such that, for all \(u\), \(R\), \(d\), \(n\) and \(K\), if \(\nu\in(2,3]\) satisfies Assumption 2, then the following holds:_ \[\sup_{t\in\mathbb{R}}\Big{|}\mathbb{P}\Big{(}\frac{\sqrt{n(n-1)}} {\sigma_{\max}}D_{n}>t\Big{)}-\lim_{K\to\infty}\mathbb{P}\Big{(}\frac{\sqrt{n(n -1)}}{\sigma_{\max}}U_{n}^{K}>t\Big{)}\Big{|}\\ \leq C\,n^{-\frac{\nu-2}{4\nu+2}}\Big{(}\frac{(M_{\mathrm{full}; \nu})^{\nu}}{\sigma_{\max}^{\nu}}+\frac{((n-1)^{1/2}\,M_{\mathrm{cond};\nu})^{ \nu}}{\sigma_{\max}^{\nu}}\Big{)}^{\frac{1}{2\nu+1}}\ \leq\ 2^{\frac{\nu}{2\nu+1}}C\,n^{-\frac{\nu-2}{4\nu+2}} \Big{(}\frac{M_{\max;\nu}}{\sigma_{\max}}\Big{)}^{\frac{\nu}{2\nu+1}}\.\] **Remark 3**.: If \(\nu=3\), the RHS is given by \(2^{3/7}Cn^{-\frac{1}{14}}\big{(}\frac{M_{\max;3}}{\sigma_{\max}}\big{)}^{6/7}\). If Assumption 1 holds for \(\nu\), the RHS can be replaced by \(C^{\prime}n^{-\frac{\nu-2}{4\nu+2}}\) for some constant \(C^{\prime}\) and is dimension-independent. Theorem 2 immediately implies a convergence theorem: **Corollary 4**.: _Let the dimension \(d_{n}\) depend on \(n\). Suppose Assumptions 1 and 2 hold for some \(\nu>2\) and the sequential distribution limit \(\bar{U}=\lim_{n\to\infty}\lim_{K\to\infty}\frac{\sqrt{n(n-1)}}{\sigma_{\max}}(U_{ n}^{K}-D)\) exists. Then_ \[\frac{\sqrt{n(n-1)}}{\sigma_{\max}}(D_{n}-D)\xrightarrow{d}\bar{U} \qquad\qquad\qquad\qquad\text{as }\ n\to\infty\.\] \(U_{n}^{K}\) is a quadratic form of Gaussians, which does not admit a closed-form c.d.f. in general and whose limiting behaviour depends heavily on \(\lambda_{k}\) and \(\phi_{k}\). Nevertheless, the presence of Gaussianity still allows us to obtain crude bounds that reflect how dimension \(d\) affects its distribution. By combining such bounds with Theorem 2, we can bound the c.d.f. of the original U-statistic \(D_{n}\). 
**Proposition 5**.: _There exists constants \(C_{1},C_{2},C_{3}>0\) such that, for all \(u\), \(R\), \(d\), \(n\) and \(K\), if \(\nu\in(2,3]\) satisfies Assumption 2, then for all \(\epsilon>0\),_ \[\mathbb{P}(|D_{n}-D|>\epsilon) \ \geq\ 1-C_{1}\Big{(}\frac{\sqrt{n(n-1)}}{\sigma_{\max}}\Big{)}^ {1/2}\epsilon^{1/2}-C_{2}\,n^{-\frac{\nu-2}{4\nu+2}}\Big{(}\frac{M_{\max;\nu}} {\sigma_{\max}}\Big{)}^{\frac{\nu}{2\nu+1}}\,\] \[\mathbb{P}(|D_{n}-D|>\epsilon) \ \leq\ C_{3}\epsilon^{-2}\Big{(}\frac{\sigma_{\max}}{\sqrt{n(n-1)}} \Big{)}^{2}\.\] **Remark 6**.: The second bound is a concentration inequality directly available via Markov's inequality, whereas the first bound is an anti-concentration result. Anti-concentration results are generally available only for random variables from known distribution families, and we obtain such a result by comparing \(D_{n}\) to \(U_{n}^{K}\). The error bounds are free of any dependence on \(K\) and specific choices of \(\phi_{k}\) and \(\lambda_{k}\). The trailing error term involving \(M_{\max;\nu}/\sigma_{\max}\) is inherited from Theorem 2 and is negligible, whereas the other error term is directly related to the inverse of the Markov error term. Proposition 5 provide two-sided bounds on how likely it is for \(D_{n}\) to be far from \(D\). The next corollary provides a more explicit statement. **Corollary 7**.: _Let the dimension \(d_{n}\) depend on \(n\) and fix \(\epsilon>0\). Suppose Assumptions 1 and 2 hold for some \(\nu\in(2,3]\). As \(n\to\infty\), we have that \(\mathbb{P}(|D_{n}-D|>\epsilon)\to 1\) if \(\sigma_{\max}=\omega(n)\) and \(\mathbb{P}(|D_{n}-D|>\epsilon)\to 0\) if \(\sigma_{\max}=o(n)\)._ Another way of formulating the bounds in Proposition 5 is the following: Similar to the intuition for a Gaussian, when \(n\) is large (with \(d_{n}\) depending on \(n\)), the distribution of \(D_{n}\) is not only concentrated in an interval around \(D\) with width being a multiple of \(\frac{\sigma_{\max}}{n}\), but also "well spread-out" within the interval. The probability mass gets concentrated around \(D\) when \(\sigma_{\max}=o(n)\), but spreads out along the whole real line when \(\sigma_{\max}=\omega(n)\); the latter only happens in a high dimensional regime. To have a more precise understanding of the limiting behaviour of \(D_{n}\), we need a better knowledge of \(U_{n}^{K}\). By a closer examination of \(U_{n}^{K}\), we see that it is a sum of three terms: A sum of weighted chi-squares with variance of the order \(n^{-1}(n-1)^{-1}\sigma_{\text{full}}^{2}\), a Gaussian with variance of the order \(n^{-1}\sigma_{\text{cond}}^{2}\), and a constant \(D\). The first term closely resembles the limit for degenerate U-statistics when \(d\) is fixed, while the second term corresponds exactly to the Gaussian limit for non-degenerate U-statistics. It turns out that, unless we are at the boundary case where \(\rho_{d}=\Theta(n^{1/2})\), we can always approximate \(U_{n}^{K}\) by ignoring either the first or the second term. Ignoring the first term gives exactly the Gaussian limit, where a well-established result has already been provided in (3). Ignoring the second term gives an infinite sum of weighted chi-squares, which is discussed next. ### The case \(\rho_{d}=\omega(n^{1/2})\) Let \(\{\xi_{k}\}_{k=1}^{\infty}\) be a sequence of i.i.d. standard Gaussians in 1d, and for \(K\in\mathbb{N}\), let \(\{\tau_{k;d}\}_{k=1}^{K}\) be the eigenvalues of \((\Sigma^{K})^{1/2}\Lambda^{K}(\Sigma^{K})^{1/2}\). 
The limiting distribution we consider is given in terms of \[W_{n}^{K}\ :=\ \frac{1}{\sqrt{n(n-1)}}\sum_{k=1}^{K}\tau_{k;d}(\xi_{k}^{2}-1)+D. \tag{5}\] The next result adapts Theorem 2 by replacing \(U_{n}^{K}\) with \(W_{n}^{K}\): **Proposition 8**.: _There exists a constant \(C>0\) such that, for all \(u\), \(R\), \(d\), \(n\) and \(K\), if \(\nu\in(2,3]\) satisfies Assumption 2, then the following holds:_ \[\sup_{t\in\mathbb{R}} \Big{|}\mathbb{P}\Big{(}\frac{\sqrt{n(n-1)}}{\sigma_{\rm full}}D _{n}>t\Big{)}-\lim_{K\to\infty}\mathbb{P}\Big{(}\frac{\sqrt{n(n-1)}}{\sigma_{ \rm full}}W_{n}^{K}>t\Big{)}\Big{|}\] \[\leq\ C\Big{(}\frac{1}{(n-1)^{1/5}}+\Big{(}\frac{\sqrt{n-1}\, \sigma_{\rm cond}}{\sigma_{\rm full}}\Big{)}^{2/5}+\,n^{-\frac{\nu-2}{4\nu+2}} \Big{(}\frac{(M_{\rm full;\nu})^{\nu}}{\sigma_{\rm full}^{\nu}}+\frac{((n-1)^{ 1/2}M_{\rm cond;\nu})^{\nu}}{\sigma_{\rm full}^{\nu}}\Big{)}^{\frac{1}{2\nu+1} }\Big{)}\.\] **Remark 9**.: In the case \(\nu=3\), the error term above becomes \[C\Big{(}\frac{1}{(n-1)^{1/5}}+\Big{(}\frac{\sqrt{n-1}\,\sigma_{\rm cond}}{ \sigma_{\rm full}}\Big{)}^{2/5}+\,n^{-\frac{1}{14}}\Big{(}\frac{(M_{\rm full;3 })^{3}}{\sigma_{\rm full}^{3}}+\frac{\big{(}(n-1)^{1/2}M_{\rm cond;3}\big{)}^{ 3}}{\sigma_{\rm full}^{3}}\Big{)}^{\frac{1}{7}}\Big{)}\.\] In the case when Assumption 1 holds for \(\nu\), the error term is \(\Theta\big{(}\big{(}\frac{n-1}{\rho_{d}^{2}}\big{)}^{1/5}+n^{-\frac{\nu-2}{4 \nu+2}}\big{)}\). **Remark 10**.: Proposition 8 agrees with the classical results for degenerate U-statistics. In those results, \(\{\phi_{k}\}_{k=1}^{\infty}\) are chosen such that they are orthonormal in \(L_{2}(\mathbb{R}^{d},R)\) and \(\mathbb{E}[\phi_{k}(\mathbf{X}_{1})]=0\). This corresponds to \(\Sigma^{K}\) being a diagonal matrix and the expression for \(\tau_{k;d}\) can be simplified. We seek to obtain a better understanding of the limiting distribution of \(D_{n}\) in the case \(\rho_{d}=\omega(n^{1/2})\). Write \(W_{n}\coloneqq\lim_{K\to\infty}W_{n}^{K}\) as the distributional limit of \(W_{n}^{K}\) as \(K\to\infty\). Provided that \(W_{n}\) exists, Proposition 8 gives the convergence of \(D_{n}\) to \(W_{n}\) in the Kolmogorov metric. The next lemma guarantees the existence of \(W_{n}\). **Proposition 11**.: _Fix \(n,d\). If Assumption 2 holds for some \(\nu\geq 2\) and \(|D|,\sigma_{\rm full}<\infty\), \(W_{n}\) exists._ The \(r\)-th moment of \(W_{n}^{K}\) as \(K\to\infty\) can be computed easily when Assumption 2 holds for \(\nu\geq r\). In particular, these limiting moments depend only on moments of the original function \(u\) and _not_ on specific values of the intractable weights \(\tau_{k;d}\). Lemma 36 in the appendix shows that \(\mathbb{E}[W_{n}^{K}]=D\) for every \(k\in\mathbb{N}\) and \[\lim_{K\to\infty}\text{Var}[W_{n}^{K}]=\frac{2}{n(n-1)}\sigma_{ \rm full}^{2}\,\qquad\lim_{K\to\infty}\mathbb{E}\big{[}(W_{n}^{K}-D)^{3}\big{]}=\frac{8 \mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})u(\mathbf{X}_{2},\mathbf{X}_{3})u( \mathbf{X}_{3},\mathbf{X}_{1})]}{n^{3/2}(n-1)^{3/2}}\,\] \[\lim_{K\to\infty}\mathbb{E}\big{[}(W_{n}^{K}-D)^{4}\big{]}\ =\ \frac{12(4\mathbb{E}[u(\mathbf{X}_{1},\mathbf{X}_{2})u(\mathbf{X}_{2}, \mathbf{X}_{3})u(\mathbf{X}_{3},\mathbf{X}_{4})u(\mathbf{X}_{4},\mathbf{X}_{1} )]+\sigma_{\rm full}^{4})}{n^{2}(n-1)^{2}}\.\] provided that Assumption 2 holds for \(\nu\geq 2\), \(\nu\geq 3\) and \(\nu\geq 4\) respectively. 
If we additionally assume that the moments of \(W_{n}\) can be computed as these limiting moments of \(W_{n}^{K}\), we can obtain \(\text{Var}[W_{n}]\) in terms of \(\sigma_{\rm full}\). The asymptotic variance of \(D_{n}\) predicted by Proposition 8 is then _larger_ than what one would predict by naively using the variance in the Gaussian CLT limit. Although we now have a new result that deals with the case \(\rho_{d}=\omega(n^{1/2})\), understanding the distribution of \(W_{n}\) remains difficult: As commented by Buckley and Eagleson (1988), the limit of a weighted sum of chi-squares is generally expected to be either a Gaussian or a weighted sum of chi-squares, the choice of which is determined by the intractable weights \(\tau_{k;d}\). In Appendix A, we include a detailed discussion of approaches for testing the Gaussianity of \(W_{n}\), all of which offer limited help for the general case. On the other hand, \(W_{n}\) is definitely not Gaussian under a naive condition: **Lemma 12**.: _Suppose there exists a finite \(K_{*}\) such that \(\lambda_{k}=0\) for all \(k>K_{*}\). Then \(W_{n}=W_{n}^{K_{*}}\), which is a weighted sum of chi-squares._ A weighted sum of chi-squares again does not admit a closed-form distribution function. Fortunately, in the case when \(\tau_{k;d}\geq 0\) for all \(k\), many numerical approximation schemes are available and used widely. These methods generally rely on matching the moments of \(W_{n}\), which can be computed easily due to Proposition 11. The simplest example is the Welch-Satterthwaite method, which approximates the distribution of \(W_{n}\) by a gamma distribution with the same mean and variance. We refer readers to Bodenham and Adams (2016) and Duchesne and De Micheaux (2010) for a review of other moment-matching methods. ### Proof overview The proof for Theorem 2 consists of three main steps: 1. **"Spectral" approximation.** We first use Assumption 2 to replace \(u(\mathbf{X}_{i},\mathbf{X}_{j})\) with the truncated sum \(\sum_{k=1}^{K}\lambda_{k}\phi_{k}(\mathbf{X}_{i})\phi_{k}(\mathbf{X}_{j})\), which gives a truncation error that vanishes as \(K\to\infty\); 2. **Gaussian approximation.** The truncated sum is a simple quadratic form of i.i.d. vectors in \(\mathbb{R}^{d}\), each of which can be approximated by a Gaussian vector. This is done by following Chatterjee (2006)'s adaptation of Lindeberg's telescoping sum argument. Similar proof ideas have been used to develop new convergence results in statistics and machine learning; examples include empirical risk (Montanari and Saeed, 2022) and bootstrap for non-asymptotically normal estimators (Austern and Syrgkanis, 2020). This step introduces errors in terms of moment terms of \(U_{n}^{K}\), which are then related to those of \(D_{n}\); 3. **Bound the distribution of \(U_{n}^{K}\).** Step 2 introduces errors in terms of the distribution of \(U_{n}^{K}\), a quadratic form of Gaussians, over a short interval. These errors are then controlled by the distribution bounds from Carbery and Wright (2001). The proof for Proposition 8 is similar, except that we use an additional Markov-type argument to remove the linear sum from \(U_{n}^{K}\) and obtain the limit in terms of \(W_{n}^{K}\). ## 4 Kernel-based testing in high dimensions Given two probability measures \(P\) and \(Q\) on \(\mathbb{R}^{d}\), we consider the problem of testing \(H_{0}:P=Q\) against \(H_{1}:P\neq Q\) through some measure of discrepancy between \(P\) and \(Q\).
We focus on _Maximum Mean Discrepancy_ (MMD) and _(Langevin) Kernelized Stein Discrepancy_ (KSD), two kernel-based methods that use a U-statistic \(D_{n}\) as the test statistic. It is well-known that \(\sigma_{\mathrm{cond}}=0\) under \(H_{0}\) and the limit of \(D_{n}\) is a weighted sum of chi-squares (see Gretton et al. (2012) for MMD and Liu et al. (2016) for KSD). Instead, we are interested in quantifying the power of \(D_{n}\) given as \(\mathbb{P}_{H_{1}}(D_{n}>t)\). The test threshold \(t\) is often chosen adaptively in practice, but we assume \(t\) to be fixed for simplicity of analysis. The results in Section 3 offer two key insights into this problem: 1. \(D_{n}\) may have different limiting distributions depending on \(\rho_{d}\). In the non-Gaussian case, the confidence interval and thereby the distribution curve can be wider than what a Berry-Esseen bound predicts, and there may be potential asymmetry; 2. We can completely characterise the high-dimensional behaviour of the power in terms of \(\rho_{d}\), which in turn depends on the hyperparameters and the set of alternatives considered. In this section, we first show that our results naturally apply to MMD and KSD. We then investigate their high-dimensional behaviours in an example of Gaussian mean-shift under simple kernels. Throughout, \(\|\bullet\|_{2}\) denotes the vector Euclidean norm, which is not to be confused with \(\|\bullet\|_{L_{2}}\). ### Notations We follow the kernel definition from Steinwart and Scovel (2012) as below: **Definition 13**.: A function \(\kappa:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) is called a _kernel_ on \(\mathbb{R}^{d}\) if there exists a Hilbert space \((\mathcal{H},\langle\,\bullet\,,\,\bullet\,\rangle_{\mathcal{H}})\) and a map \(\Phi:\mathbb{R}^{d}\to\mathcal{H}\) such that \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\langle\Phi(\mathbf{x}),\Phi(\mathbf{x}^{\prime})\rangle_{\mathcal{H}}\) for all \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{d}\). We give the minimal definitions of MMD and KSD, and refer interested readers to Gretton et al. (2012) and Gorham and Mackey (2017) for further reading. Throughout, we let \(\{\mathbf{Y}_{j}\}_{j=1}^{n}\) be i.i.d. samples from \(P\) and \(\{\mathbf{X}_{i}\}_{i=1}^{n}\) be i.i.d. samples from \(Q\). We also write \(\mathbf{Z}_{i}\coloneqq(\mathbf{X}_{i},\mathbf{Y}_{i})\) and assume that \(\kappa\) is measurable. MMD with respect to \(\kappa\) is defined by \[D^{\mathrm{MMD}}(Q,P)\;\coloneqq\;\mathbb{E}_{\mathbf{Y},\mathbf{Y}^{\prime}\sim P}[\kappa(\mathbf{Y},\mathbf{Y}^{\prime})]-2\mathbb{E}_{\mathbf{Y}\sim P,\mathbf{X}\sim Q}[\kappa(\mathbf{Y},\mathbf{X})]+\mathbb{E}_{\mathbf{X},\mathbf{X}^{\prime}\sim Q}[\kappa(\mathbf{X},\mathbf{X}^{\prime})]\.\] A popular unbiased estimator for \(D^{\mathrm{MMD}}\) is exactly a U-statistic with balanced sample size: \[D^{\mathrm{MMD}}_{n}\;\coloneqq\;\frac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}u^{\mathrm{MMD}}(\mathbf{Z}_{i},\mathbf{Z}_{j})\,\] where the summand is given by \(u^{\mathrm{MMD}}\big{(}(\mathbf{x},\mathbf{y}),(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\big{)}\coloneqq\kappa(\mathbf{x},\mathbf{x}^{\prime})+\kappa(\mathbf{y},\mathbf{y}^{\prime})-\kappa(\mathbf{x},\mathbf{y})-\kappa(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\). To define KSD, we assume that \(\kappa\) is continuously differentiable with respect to both arguments, and \(P\) admits a continuously differentiable, positive Lebesgue density \(p\). The following formulation of KSD is due to Theorem 2.1 of Chwialkowski et al.
(2016): \[D^{\mathrm{KSD}}(Q,P)\;\coloneqq\;\mathbb{E}_{\mathbf{X},\mathbf{X}^{\prime} \sim Q}[u^{\mathrm{KSD}}_{P}(\mathbf{X},\mathbf{X}^{\prime})]\,\] where we assume \(\mathbb{E}_{\mathbf{X}\sim Q}[u^{\mathrm{KSD}}_{P}(\mathbf{X},\mathbf{X})]<\infty\) and the function \(u^{\mathrm{KSD}}_{P}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) is given by \[u^{\mathrm{KSD}}_{P}(\mathbf{x},\mathbf{x}^{\prime}) =\big{(}\nabla\log p(\mathbf{x})\big{)}^{\top}\big{(}\nabla\log p (\mathbf{x}^{\prime})\big{)}\kappa(\mathbf{x},\mathbf{x}^{\prime})\;+\; \big{(}\nabla\log p(\mathbf{x})\big{)}^{\top}\nabla_{2}\kappa(\mathbf{x}, \mathbf{x}^{\prime})\] \[\quad+\big{(}\nabla\log p(\mathbf{x}^{\prime})\big{)}^{\top} \nabla_{1}\kappa(\mathbf{x},\mathbf{x}^{\prime})\;+\;\text{Tr}(\nabla_{1} \nabla_{2}\kappa(\mathbf{x},\mathbf{x}^{\prime}))\.\] \(\nabla_{1}\) and \(\nabla_{2}\) are the differential operators with respect to the first and second arguments of \(\kappa\) respectively. The estimator is again a U-statistic, given by \(D^{\mathrm{KSD}}_{n}\coloneqq\frac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}u^{ \mathrm{KSD}}_{P}(\mathbf{X}_{i},\mathbf{X}_{j})\). ### General results We show that a kernel structure allows Assumption 2 to be fulfilled under some natural conditions. Let \(\mathbf{V}_{1},\mathbf{V}_{2}\overset{i.i.d.}{\sim}R\) for some probability measure \(R\) on \(\mathbb{R}^{b}\) and \(\kappa^{*}\) be a measurable kernel on \(\mathbb{R}^{b}\). A sequence of functions \(\{\phi_{k}\}_{k=1}^{\infty}\) in \(L_{2}(\mathbb{R}^{b},R)\) and a sequence of non-negative values \(\{\lambda_{k}\}_{k=1}^{\infty}\) with \(\lim_{k\to\infty}\lambda_{k}=0\) is called a _weak Mercer representation_ if \[\big{|}\sum_{k=1}^{K}\lambda_{k}\phi_{k}(\mathbf{V}_{1})\phi_{k}(\mathbf{V}_{2 })-\kappa^{*}(\mathbf{V}_{1},\mathbf{V}_{2})\big{|}\to 0\quad\text{almost surely}\qquad\text{ as }K\to\infty\.\] Steinwart and Scovel (2012) show that such a representation exists if \(\mathbb{E}[\kappa^{*}(\mathbf{V}_{1},\mathbf{V}_{1})]<\infty\), whose result is summarised in Lemma 40 in the appendix. To deduce from this the \(L_{\nu}\) convergence of Assumption 2, we need the following assumptions on the kernel \(\kappa^{*}\): **Assumption 3**.: _Fix \(\nu>2\). Assume \(\mathbb{E}[\kappa^{*}(\mathbf{V}_{1},\mathbf{V}_{1})]<\infty\) and let \(\{\lambda_{k}\}_{k=1}^{\infty}\) and \(\{\phi_{k}\}_{k=1}^{\infty}\) be a weak Mercer representation of \(\kappa^{*}\) under \(R\). Also assume that for some \(\nu^{*}>\nu\), \(\|\kappa^{*}(\mathbf{Z}_{1},\mathbf{Z}_{2})\|_{L_{\nu^{*}}}<\infty\) and \(\sup_{K\geq 1}\|\sum_{k=1}^{K}\lambda_{k}\phi_{k}(\mathbf{Z}_{1})\phi_{k}( \mathbf{Z}_{2})\|_{L_{\nu^{*}}}<\infty\,.\)_ For MMD, we can use the weak Mercer representation of \(u^{\mathrm{MMD}}\) to show that our results apply: **Lemma 14**.: \(u^{\mathrm{MMD}}\) _defines a kernel on \(\mathbb{R}^{2d}\). Moreover, if Assumption 3 holds for \(\kappa^{*}=u^{\mathrm{MMD}}\) under \(P\otimes Q\) for some \(\nu>2\), then Assumption 2 holds for \(\min\{\nu,3\}\) with \(u=u^{\mathrm{MMD}}\) and \(R=P\otimes Q\)._ In the case of KSD, we use the representation of \(\kappa\) directly. We require some additional assumptions for the score function \(\nabla\log p(\mathbf{x})\) to be well-behaved and the differential operation on \(\kappa\) to behave well under the representation. **Assumption 4**.: _Fix \(n\), \(d\) and \(\nu>2\). 
Assume that Assumption 3 holds with \(\nu\) for \(\kappa\) under \(Q\), with \(\{\lambda_{k}\}_{k=1}^{\infty}\) and \(\{\phi_{k}\}_{k=1}^{\infty}\) as the weak Mercer representation of \(\kappa\) under \(Q\) and \(\nu^{*}\) being defined as in Assumption 3. Further assume that (i) \(\|\|\nabla\log p(\mathbf{X}_{1})\|_{2}\|_{L_{2\nu^{**}}}<\infty\) for \(\nu^{**}=\frac{\nu(\nu+\nu^{*})}{\nu^{*}-\nu}\); (ii) \(\sup_{k\in\mathbb{N}}\|\phi_{k}(\mathbf{X}_{1})\|_{L_{2\nu}}<\infty\); (iii) \(\phi_{k}\)'s are differentiable with \(\sup_{k\in\mathbb{N}}\|\|\nabla\phi_{k}(\mathbf{X}_{1})\|_{2}\|_{L_{\nu}}<\infty\); (iv) As \(K\to\infty\), we have the convergence \(\big{\|}\big{\|}\sum_{k=1}^{K}\lambda_{k}(\nabla\phi_{k}(\mathbf{X}_{1}))\phi_{k}(\mathbf{X}_{2})-\nabla_{1}\kappa(\mathbf{X}_{1},\mathbf{X}_{2})\big{\|}_{2}\big{\|}_{L_{2\nu}}\to 0\) as well as the convergence \(\big{\|}\sum_{k=1}^{K}\lambda_{k}(\nabla\phi_{k}(\mathbf{X}_{1}))^{\top}(\nabla\phi_{k}(\mathbf{X}_{2}))-\text{Tr}(\nabla_{1}\nabla_{2}\kappa(\mathbf{X}_{1},\mathbf{X}_{2}))\big{\|}_{L_{\nu}}\to 0\). We can now form a decomposition of \(u_{P}^{\mathrm{KSD}}\). Given \(\{\lambda_{k}\}_{k=1}^{\infty}\) and \(\{\phi_{k}\}_{k=1}^{\infty}\) from Assumption 4 and any fixed \(d\in\mathbb{N}\), define the sequences \(\{\alpha_{k}\}_{k=1}^{\infty}\) and \(\{\psi_{k}\}_{k=1}^{\infty}\) as, for \(1\leq l\leq d\) and \(k^{\prime}\in\mathbb{N}\), \[\alpha_{(k^{\prime}-1)d+l}\ \coloneqq\lambda_{k^{\prime}}\qquad\text{ and }\qquad\psi_{(k^{\prime}-1)d+l}(\mathbf{x})\ \coloneqq(\partial_{x_{l}}\log p(\mathbf{x}))\phi_{k^{\prime}}(\mathbf{x})+\partial_{x_{l}}\phi_{k^{\prime}}(\mathbf{x}). \tag{6}\] **Lemma 15**.: _If Assumption 4 holds for some \(\nu>2\), then Assumption 2 holds for \(\min\{\nu,3\}\) with \(u=u_{P}^{\mathrm{KSD}}\), \(R=Q\), \(\lambda_{k}=\alpha_{k}\) and \(\phi_{k}=\psi_{k}\)._ **Remark 16**.: The benefits of formulating our results in terms of Assumption 2 are now clear: By forgoing orthonormality, we can choose a functional decomposition e.g. in terms of the Mercer representation of a kernel, which is already widely considered in this literature. The non-negative eigenvalues from Lemma 40 also allow the moment-matching methods discussed in Section 3.2 to be considered. In fact, a Mercer representation is not even necessary: In Appendix B.1, we construct a simple decomposition for the setup in Section 4.3 such that Assumption 2 can be verified easily. ### Gaussian mean-shift examples We study KSD and MMD under Gaussian mean-shift, where \(P=\mathcal{N}(0,\Sigma)\) and \(Q=\mathcal{N}(\mu,\Sigma)\) with mean \(\mu\in\mathbb{R}^{d}\) and covariance \(\Sigma\in\mathbb{R}^{d\times d}\) to be specified. Two simple kernels are considered. RBF kernel. We consider the RBF kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\exp(-\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}^{2}/(2\gamma))\), where \(\gamma=\gamma(d)\) is a bandwidth potentially depending on \(d\). A common strategy to choose \(\gamma\) is the _median heuristic_: \[\gamma_{\text{med}}\ \coloneqq\ \text{Median}\left\{\|\mathbf{V}-\mathbf{V}^{\prime}\|_{2}^{2}:\mathbf{V},\mathbf{V}^{\prime}\in\mathcal{V}\,,\ \mathbf{V}\neq\mathbf{V}^{\prime}\right\}\,\] where the samples \(\mathcal{V}=\{\mathbf{X}_{i}\}_{i=1}^{n}\) for KSD and \(\mathcal{V}=\{\mathbf{X}_{i}\}_{i=1}^{n}\cup\{\mathbf{Y}_{i}\}_{i=1}^{n}\) for MMD. We include a further discussion of this setup and applicability of our assumptions in Appendix B.
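For illustration, the following is a minimal NumPy sketch of the median heuristic defined above, together with the MMD U-statistic built from the summand \(u^{\mathrm{MMD}}\) displayed in Section 4.1, using an RBF kernel. The Gaussian mean-shift data below are a hypothetical instance of the setup, not the paper's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 50, 100
mu = np.zeros(d)
mu[0] = 2.0
Y = rng.normal(size=(n, d))            # i.i.d. samples from P = N(0, I_d)
X = rng.normal(size=(n, d)) + mu       # i.i.d. samples from Q = N(mu, I_d)

# Median heuristic: median of pairwise squared distances over the pooled sample (for MMD).
pooled = np.vstack([X, Y])
sq = np.sum(pooled**2, 1)[:, None] + np.sum(pooled**2, 1)[None, :] - 2 * pooled @ pooled.T
gamma_med = np.median(sq[np.triu_indices_from(sq, k=1)])

def u_mmd(zi, zj, gamma):
    """Summand u^MMD((x,y),(x',y')) as displayed in Section 4.1, with an RBF kernel."""
    (xi, yi), (xj, yj) = zi, zj
    k = lambda a, b: np.exp(-np.sum((a - b) ** 2) / (2 * gamma))
    return k(xi, xj) + k(yi, yj) - k(xi, yi) - k(xj, yj)

Z = list(zip(X, Y))
D_n_mmd = sum(u_mmd(Z[i], Z[j], gamma_med)
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
print(f"gamma_med = {gamma_med:.1f}, MMD U-statistic = {D_n_mmd:.4f}")
```

In this hypothetical run, the pooled pairwise squared distances concentrate around \(2d\), so the median-heuristic bandwidth scales roughly linearly in \(d\), consistent with the \(\gamma_{\text{med}}=\Theta(d)\) scaling discussed below.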
We focus on \(\Sigma=I_{d}\), where the \(d\)-dependence of the moment ratio \(\rho_{d}\) can be explicitly studied for both KSD and MMD. Importantly, we give bounds in terms of the bandwidth \(\gamma\) and the scale of mean shift \(\|\mu\|_{2}^{2}\), which reveal their effects on \(\rho_{d}\) and thereby on the behaviour of the test power. The assumptions on \(\gamma\) and \(\|\mu\|_{2}^{2}\) in both propositions are for simplicity rather than necessity. **Proposition 17** (KSD-RBF moment ratio).: _Assume \(\gamma=\omega(1)\) and \(\|\mu\|_{2}^{2}=\Omega(1)\). Under the Gaussian mean-shift setup with \(\Sigma=I_{d}\), the KSD U-statistic satisfies that_ 1. _If_ \(\gamma=o(d^{1/2})\)_, then_ \(\rho_{d}=\exp\big{(}\frac{3d}{4\gamma^{2}}+o\big{(}\frac{d}{\gamma^{2}}\big{)}\big{)}\,\Theta\Big{(}\frac{d}{\gamma\|\mu\|_{2}^{2}}+\frac{d^{1/2}}{\gamma^{1/2}\|\mu\|_{2}}+1\Big{)}\) _;_ 2. _If_ \(\gamma=\omega(d^{1/2})\)_, then_ \(\rho_{d}=\Theta\Big{(}\frac{d^{1/2}(1+\gamma^{-1}d^{1/2}\|\mu\|_{2})}{\|\mu\|_{2}\,(1+\gamma^{-1}d^{1/2}\|\mu\|_{2})}+1\Big{)}\) _;_ 3. _If_ \(\gamma=\Theta(d^{1/2})\)_, then_ \(\rho_{d}=\Theta\Big{(}\frac{d^{1/2}}{\|\mu\|_{2}^{2}}+\frac{d^{1/4}}{\|\mu\|_{2}}+1\Big{)}\) _._ **Proposition 18** (MMD-RBF moment ratio).: _Consider the Gaussian mean-shift setup with \(\Sigma=I_{d}\) and assume \(\gamma=\omega(1)\) and \(\|\mu\|_{2}^{2}=\Omega(1)\). For the MMD U-statistic, if \(\gamma=o(\|\mu\|_{2}^{2})\) and \(\gamma=o(d^{1/2})\), then \(\rho_{d}=\Theta\big{(}\exp\big{(}\frac{3d}{4\gamma^{2}}+o\big{(}\frac{d}{\gamma^{2}}\big{)}\big{)}\big{)}\). If instead \(\gamma=\omega(\|\mu\|_{2}^{2})\), then_ 1. _For_ \(\gamma=o(d^{1/2})\)_, we have_ \(\rho_{d}=\Theta\Big{(}\frac{\gamma}{\|\mu\|_{2}^{2}}\exp\Big{(}\frac{3d}{4\gamma^{2}}+o\big{(}\frac{d}{\gamma^{2}}\big{)}\Big{)}\Big{)}\) _;_ 2. _For_ \(\gamma=\omega(d^{1/2})\)_, we have_ \(\rho_{d}=\Theta\Big{(}\frac{\|\mu\|_{2}+d^{1/2}}{\|\mu\|_{2}+\gamma^{-1}d^{1/2}\|\mu\|_{2}^{2}}\Big{)}\) _;_ 3. _For_ \(\gamma=\Theta(d^{1/2})\)_, we have_ \(\rho_{d}=O\Big{(}\frac{d^{1/2}}{\|\mu\|_{2}^{2}}\Big{)}\) _._ The case \(\|\mu\|_{2}=\Omega(\|\Sigma\|_{2})=\Omega(d^{1/2})\) is not very interesting, as it means that the signal-to-noise ratio (SNR) is high and can even increase with \(d\). WLOG we focus on a low SNR setting with \(\|\mu\|_{2}=\Theta(1)\). In this case, it has been shown that the median-heuristic bandwidth scales as \(\gamma_{\text{med}}=\Theta(d)\)[2, 14, 15, 16]. While Propositions 17 and 18 do not directly address the case \(\gamma=\gamma_{\text{med}}\) due to its data dependence, they do show that \(\rho_{d}=\Theta(d^{1/2})\) for both KSD and MMD with a data-_independent_ bandwidth \(\gamma=\Theta(d)\)†. In this case, the asymptotic distributions of \(D_{n}^{\mathrm{KSD}}\) and \(D_{n}^{\mathrm{MMD}}\) are _(i)_ the non-degenerate Gaussian limit predicted by (3) when \(d=o(n)\) and _(ii)_ the degenerate limit from Proposition 8 when \(d=\omega(n)\). Footnote †: In our experiments, the data-independent choice \(\gamma=d\) and the data-dependent \(\gamma=\gamma_{\text{med}}\) yield almost identical plots. Intriguingly, in both results, different regimes arise based on how \(\gamma\) compares with the noise scale \(\|\Sigma\|_{2}=d^{1/2}\). In fact, a phase transition as \(\gamma\) drops from \(\omega(d^{1/2})\) to \(o(d^{1/2})\) has been reported in Ramdas et al. [2015] but with no further comments‡. 
Our results offer one explanation: Such transitions may happen due to a change in the dependence of \(\rho_{d}\) on \(\gamma\), \(\|\mu\|_{2}\) and \(d\). Fig. 4 shows a transition across different limits as \(\gamma\) varies, where the transition occurs at around \(\gamma\sim d^{1/2}\). Footnote ‡: Their bandwidth \(\gamma_{\text{Ramdas}}\) is defined to equal our \(\sqrt{2\gamma}\). Phase transition occurs at \(\gamma_{\text{Ramdas}}=d^{1/4}\) in their Figure 1. While their figure is for MMD with threshold chosen by a permutation test, ours is for KSD with a fixed threshold. Linear kernel.Section 3.2 discussed that the limit of \(D_{n}\) can be non-Gaussian. This is true for MMD with a _linear_ kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{x}^{\top}\mathbf{x}^{\prime}\): It satisfies Lemma 12 with \(K_{*}=d\) and the limit is a shifted-and-rescaled chi-square. Fig. 1 verifies this for some \(\Sigma\neq I_{d}\) by showing an asymmetric distribution curve close to the chi-square limit. We remark that a linear kernel, while not commonly used, is a valid choice here since \(D^{\mathrm{MMD}}=0\) iff \(P=Q\) under our setup (see Appendix B). Simulations.We set \(\mu=(2,0,\ldots,0)^{\top}\in\mathbb{R}^{d}\), \(\Sigma=I_{d}\) and \(\gamma=\gamma_{\rm med}\) for KSD with RBF and MMD with RBF. The exact setup for MMD with linear kernel is described in Appendix B.4. The limits for comparison are the non-degenerate Gaussian limit in Eq. (3) ("Non-degen.") and Gamma / shifted-and-rescaled chi-square ("Degen. Gamma" / "Degen. Chi-square") distributions that match the degenerate limit in Proposition 8 by mean and variance. Fig. 1 plots the distribution curves for KSD with RBF and MMD with linear kernel. Fig. 2 plots the same quantity for MMD with RBF. Fig. 3 and Fig. 4 examine the behaviour of KSD with RBF as \(d\) or \(\gamma\) varies (as a data-independent function of \(d\), similar to Ramdas et al. (2015)). Results involving \(D_{n}\) are averaged over 30 random seeds, and shaded regions are \(95\%\) confidence intervals4. Code for reproducing all experiments can be found at github.com/XingLLiu/u-stat-high-dim.git. Footnote 4: The shaded regions are not visible for \(\mathbb{P}(D_{n}>t)\) in Fig. 1, 2 and 4 as the confidence intervals are very narrow. ## Acknowledgement KHH is supported by the Gatsby Charitable Foundation. XL is supported by the President's PhD Scholarships of Imperial College London and the EPSRC StatML CDT programme EP/S023151/1. ABD is supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1 and EPSRC Grant EP/W006022/1, particularly the "Ecosystems of Digital Twins" theme within those grants & The Alan Turing Institute. We thank Antonin Schrab, Heishiro Kanagawa and Arthur Gretton for their helpful comments.
2305.01447
Multimodal Neural Databases
The rise in loosely-structured data available through text, images, and other modalities has called for new ways of querying them. Multimedia Information Retrieval has filled this gap and has witnessed exciting progress in recent years. Tasks such as search and retrieval of extensive multimedia archives have undergone massive performance improvements, driven to a large extent by recent developments in multimodal deep learning. However, methods in this field remain limited in the kinds of queries they support and, in particular, their inability to answer database-like queries. For this reason, inspired by recent work on neural databases, we propose a new framework, which we name Multimodal Neural Databases (MMNDBs). MMNDBs can answer complex database-like queries that involve reasoning over different input modalities, such as text and images, at scale. In this paper, we present the first architecture able to fulfill this set of requirements and test it with several baselines, showing the limitations of currently available models. The results show the potential of these new techniques to process unstructured data coming from different modalities, paving the way for future research in the area. Code to replicate the experiments will be released at https://github.com/GiovanniTRA/MultimodalNeuralDatabases
Giovanni Trappolini, Andrea Santilli, Emanuele Rodolà, Alon Halevy, Fabrizio Silvestri
2023-05-02T14:27:56Z
http://arxiv.org/abs/2305.01447v1
# Multimodal Neural Databases ###### Abstract. The rise in loosely-structured data available through text, images, and other modalities has called for new ways of querying them. Multimedia Information Retrieval has filled this gap and has witnessed exciting progress in recent years. Tasks such as search and retrieval of extensive multimedia archives have undergone massive performance improvements, driven to a large extent by recent developments in multimodal deep learning. However, methods in this field remain limited in the kinds of queries they support and, in particular, their inability to answer database-like queries. For this reason, inspired by recent work on neural databases, we propose a new framework, which we name Multimodal Neural Databases (MMNDBs). MMNDBs can answer complex database-like queries that involve reasoning over different input modalities, such as text and images, at scale. In this paper, we present the first architecture able to fulfill this set of requirements and test it with several baselines, showing the limitations of currently available models. The results show the potential of these new techniques to process unstructured data coming from different modalities, paving the way for future research in the area. Code to replicate the experiments will be released at [https://github.com/GiovanniTRA/MultimodalNeuralDatabases](https://github.com/GiovanniTRA/MultimodalNeuralDatabases) Keywords: multimedia information retrieval, databases, neural networks ## 1. Introduction The amount and variety of data available have increased dramatically in recent years, and as more devices, such as smart glasses, become widespread, this trend is likely to accelerate. While current devices generate mostly text and image data, smart glasses will likely increase the amount of audio and video data individuals create. With the emergence of generative AI, we will likely see an explosion of valuable generated data. Multimedia Information Retrieval (MMIR) has always attracted the attention of scientists and practitioners in Information Retrieval. MMIR aims to address the challenges of processing queries on multimedia collections. Due to the enormous increase in data availability, MMIR has also seen a surge in interest. The field has explored topics such as retrieval from large image archives, query by image, and retrieval based on face or fingerprint (Borda et al., 2017). However, this paper brings forward a novel and transformative idea: _given the huge impact that the field of AI is having in all areas of technology, we argue that the MMIR field needs to explore systems, which we call multi-modal neural databases (MMNDBs), that can handle more expressive database-like queries_. We illustrate the potential of MMNDBs with an example. Consider the following query over an image archive: _how many images contain musical instruments?_ Assume that the images in the collection are labeled with the objects that are identified in them (e.g., trumpet, avocado, person). Hence, an MMIR system is likely to be able to return images with trumpets, or other musical instruments. However, finding which objects are wind instruments (or a more detailed category) requires an additional reasoning step of a join with a database of instruments. 
Moreover, counting the number of images that satisfy our condition requires reasoning about the size of the answer set, an operation routinely done by database systems but not supported by MMIR systems. Examples can be more complicated, such as finding the most common musical instrument appearing in the photos or considering only photos taken in cities that hosted the Olympic games. As seen from the examples above, one of the critical needs of MMNDBs is the ability to reason about sets. In this perspective paper, we propose to study, design, and build MMNDBs by combining the capabilities of large multimodal models, multi-media information retrieval, and database query processing, as shown in Figure 1. We have been inspired by the work on neural databases (Romie et al., 2017; Wang et al., 2018; Wang et al., 2018) that have garnered interest in the NLP, database, and IR communities. However, we differentiate from that work as we position ourselves as an evolution of the field of MMIR by means of modern and, more recently proposed, multimodal AI technologies. We develop a first principled prototype to show the proposed task's feasibility. We will later stress that this is only one of the possible architectures to solve MMNDBs and that future research will unveil new strategies. At a high level, we build our prototype on the retriever-reasoner-aggregator model. Given a query, the retriever returns a small subset of documents from the database that is relevant to the query. However, typically even that subset is too big to be provided as input to a single reasoner, which is essentially a transformer. Hence, the system runs multiple copies of the reasoner in parallel, each producing a partial result for the query. Finally, the aggregator component of the MMNDB will create the query result from the intermediate ones. For example, if the query counts the number of images that contain people, the intermediate results would be 1 or 0, depending on whether the image contains a person. The aggregator will add up the 1s. MMNDB systems will be designed to handle a wide range of multimedia data, including images, videos, audio, and text. The system will be able to process queries in natural language, allowing users to express their queries intuitively and easily. The system will also be able to extract features from multimedia data and use them to improve the performance of retrieval tasks. This paper describes a first step towards the realization of MMNDBs flexible enough to scaffold future models. We consider queries over collections of images and validate several aspects of our proposed architecture, as seen in Figure 2. We perform a rich set of experiments that show the feasibility and potential of the proposed task across a subset of possible query types. Finally, we discuss possible future research directions stemming from the analysis brought forward in this paper and the introduction of Multimodal Neural Databases. 
Unlike a traditional database, a multimodal database is unstructured in the sense that it does not need to have a schema, or even less, it does not need to have any particular ordering but can be just the unordered and unstructured set of these documents. Multimodal databases arise in several contexts. One existing context today is that of online social media, where users post content of different kinds (text, images, memes, videos, audio). Here, each post is a document in the multimedia database, with the added peculiarity that the database would have to keep track of the graph of friendships between users. Another context that will arise in the near future when smart glasses are prevalent is the record of a user's day. Just by doing simple activities, like getting a coffee in a bar, the glasses will capture (adhering to whatever privacy conventions get adopted) sensory data, pictures (videos) of who is at the bar and what they are eating, audio of the background track playing, and photos of receipts for one's purchases. Ideally, we would like to be able to query these rich, large, and unstructured collections of data the same way we query a database. Going further, unlike a standard database, we would like to use natural language to perform queries instead of a rigid language like SQL. Specifically, given a multimodal database \(D\) and a query \(q\), we would like to be able to perform the following types of query: (a) **Set queries**; set queries are extractive queries that return a list of spans, such as entities, from the facts. (b) **Boolean queries**; that return either True or False as an answer. (c) **Join queries**; which require the combination of two or more documents to produce each answer. We note that unlike traditional databases (or even neural databases), Multimodal databases can produce answers consisting of heterogeneous modalities. For instance, a set query can produce answers that include images, audio, and natural language (and their combination) seamlessly. Designing a Multimodal Neural Database presents several substantial challenges. First, it is crucial that the system is able to reason on the modalities given in input. For instance, if I were to look for images of cats and dogs fighting, I need to recognize both the presence of these animals _and_ that the interactions between the two is indeed that of fighting (a poster of Mike Tyson boxing in the background is not sufficient). Similarly, if the query mentions someone whispering or yelling, the system must understand such subtleties in an audio frame. Recently, deep learning techniques, particularly large deep learning models, have shown excellent reasoning capabilities (Dong et al., 2018). The tasks of Visual Question Answering and multi-hop question answering have reached near human results (Song et al., 2018) for natural language processing, with promising candidates in the multimodal setting as well. However, these models are usually extremely large, with billions of parameters, leading to the next challenge, namely scale. Given a large collection of documents, it is infeasible to run such models on every query-document pair, or even on every document for that matter. Open domain question answering systems (ODQA), developed for answering queries from natural language text, provide a methodology for scaling to larger document collections. ODQA answers a query by first retrieving relevant documents from the document collection and feeding them as context to a transformer along with the query. 
However, transformers can only accept contexts of limited sizes (currently, 512 to 1024 tokens). Even though extending these sizes is a very active area of research, the context size will likely always remain smaller than what is necessary to process the kinds of queries we are striving for. The number of documents that need to be processed for answering database queries can be arbitrarily big, as can the intermediate result sets. In contrast, ODQA systems usually consider queries whose answers are small and can be obtained by feeding just a few documents to the transformer. Furthermore, a multimodal database is an _unordered_ set of documents, so we cannot exploit any locality heuristic to retrieve the relevant documents. Last but definitely not least, there is a challenge of bridging between the different modalities in a multimodal database. To answer queries over multimodal data, one has to process, reason, and combine information coming not only from different documents but also from documents expressed in different modalities. The literature in natural language processing and computer vision has recently paved the way and achieved outstanding results in the field. Multi-modal models have followed, showing excellent results in the task of text-to-image, image-to-text, and text-to-music. However, most multimodal models available today tackle either the text-visual or the text-audio tasks. Combining multiple modalities, while not unexplored (Bogorst et al., 2016), still needs additional research efforts to reach suitable levels to address the task at hand. In particular, to suitably address the task of MMNDB, we would need a "true" multimodal model, which can reason on any possible modality given as input. For further discussion on this and other current limitations/future research directions, we refer to Section 5.

Figure 1. A possible use case for MMNDBs. Imagine walking around the city with smart glasses and collecting information in a multimodal database. In the evening, you could be interested in knowing which are good places to eat that satisfy some criteria. MMNDBs could help make that decision by answering a database-like query posed in natural language (or voice!), combining multiple information sources and modalities.

Figure 2. Schema for our proposed MMNDB prototype. Given a query, documents are first filtered by a retriever module. A reasoner produces intermediate answers that are then processed by an aggregator to produce the final answer.

## 3. A first prototype for MMNDB To demonstrate the feasibility of MMNDBs, this section describes a first prototype of such a system, for a restricted case. We consider databases in which all the documents are images, and queries, which are posed in natural language, can express COUNT, MAX, and IN. However, as we explain below, the architecture for our preliminary system can apply to broader settings as well. Our system takes an input query \(q\) over a database \(D\). It includes three components. The first component is the retriever, which selects a subset of the documents in \(D\) that are relevant to answer the query. The second component is the reasoner, which processes, possibly in parallel, subsets of the retrieved documents. The reasoner provides a partial answer to the query. The third component is an aggregation operator that synthesizes the answers provided by the reasoner to compute the final answer to the query. 
The strength of our architecture is that it enables us to exploit recent advances in multimodal neural models when implementing the retriever and the reasoner. Specifically, these models are able to map multiple modalities into the same embedding space, and therefore reason about the contents of images and text together. For example, these models can identify objects in images and create textual captions that describe the main aspects of the image. Before we explain each of the components, we give an end-to-end overview of how a query is processed in our system. Consider the query "How many people are playing the guitar in a blue t-shirt on a beach". The reasoner considers a single image in \(D\) and uses the latest neural methods to determine whether the image contains a person playing guitar on the beach. However, applying such powerful reasoning on each of the documents in \(D\) is infeasible, so we use a retriever to filter to only a small subset of the images in \(D\), \(P(D,q)\). Multiple instances of the reasoner are then applied in parallel to the retrieved images in \(P(D,q)\) to determine which image satisfies the query. In our example, if an image satisfies the query, the reasoner returns 1 and otherwise 0. The aggregator then counts the number of 1's to return the final answer. We now describe each of the components. **Retriever**. The goal of the retriever is to return a subset \(P(D,q)\) of documents from \(D\) that are relevant to the query \(q\). The main requirement from the retriever is that it be scalable. While the reasoning we expect from the retriever is not at the same granularity as the reasoner, it should weed out the vast majority of irrelevant images. To retrieve documents that are relevant to the query, we encode both the query and the documents in the same latent embedding space. However, as noted earlier, it is important that the embedding of a document _not_ be dependent on the query \(q\), otherwise we would have to compute a new embedding for every document in \(D\) for any given query. Hence, as we describe in Section 4.1, we consider several methods for embedding the documents in \(D\) in a query-independent way. **Reasoner**. An instance of the reasoner takes one of the documents in \(P(D,q)\) as input and returns an intermediate answer to our query, \(A_{p}\). In the example above, the reasoner returns either 1 or 0 depending on whether the image satisfies the conditions in the query. However, the intermediate result may be different. For a query such as "What is the maximum number of people in the images" the reasoner would return, for every image, the number of people in that image. As another example, for the query "what is the most common musical instrument seen in the database", the output of the reasoner would be the list and number of occurrences of each of the instruments it identified in the image. The crucial role of the reasoner is, precisely, to reason about the relationship between the image and the query. In our example, the reasoner needs to determine whether there is a person wearing a blue outfit, that the same person is the one playing the guitar, and that they are physically located on a beach. The reasoner leverages the recent advances in neural models that are able to perform such reasoning by embedding the image and text in the same latent space and generating textual captions of images. 
It is worth noticing, however, that these models compute a dynamic embedding of the query and of the image, that depends on both, i.e., \(F(I|T)\neq F(I)\) and vice versa, where \(I\) is the image, and \(T\) is natural language (could be any two modalities). This has profound computational implications. In fact, to be able to answer the query, one would need to process any possible \(D,q\) pair. Furthermore, since the query is known only at inference time, it is not possible to precompute the embeddings. It is then clearly unfeasible to run the reasoner on the entire database. For this reason, we introduce an additional module in our pipeline, namely the retriever. **Aggregator**. The Aggregator takes as input the query and the set of intermediate outputs from all the instances of the reasoners and produces the answer to the query. Conceptually, this component of the system is the simplest because the intermediate results need to be aggregated depending on the semantics of the query. In our example, the aggregator would count the number of images for which 1 was returned. For the query counting the total number of people, the aggregator would sum the intermediate results returned from the reasoners. ## 4. Experiments This section describes the experiments we performed to validate the promise of our prototype. We begin by describing the experimental settings. ### Experimental setup In this section, we outline the experimental setup utilized to verify the validity of our approach. _Dataset._ Our experiments use the MS-COCO dataset (Common Object in Context) (Kang et al., 2017), which is the single most popular benchmark dataset in computer vision. We use the latest version made available by the authors. The COCO dataset contains approximately 123K labeled images. Each image is associated with 5 captions and is annotated with the objects that are identified in it. The objects are drawn from a collection of 1.5M object instances across 80 object categories. The dataset is divided into train and eval subsets, containing 118K and 5K images, respectively. We use the train set to train/fine-tune our methods while we report our results on the eval set. _Queries._ We use the MS-COCO dataset to build our queries. For the COUNT query type, we may ask a query of the type "How many [object] are in the database?", where object can be any of the object category contained in the COCO dataset. Similarly, for the MAX query type, we may be interested in the image of the dataset with most frequent annotation of a particular kind. Finally, for the In query, we are interested in images whose annotations satisfy certain conditions. _Models._ We now describe the neural models we used throughout our experiment. For the Reasoner, we employ OFA [32]. OFA is a deep learning model trained on a wide variety of multimodal (text and image) tasks, ranging from image captioning to image generation, showing great results on unseen tasks as well. OFA is open-source (code and weights) and is currently one of the best-performing multimodal models. We test four different versions of OFA, namely medium, base, large, and huge, with the largest featuring close to 1B parameters. OFA is a transformer-based model that builds a joint representation of the input, namely text and visual, that is used to generate a textual response. We stress again the fact that, given adequate computational resources, this module of the pipeline is highly parallelizable, hence capable of producing intermediate answers in the span of a few seconds. 
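To make the reasoner/aggregator interplay concrete, here is a minimal sketch (ours, not the authors' implementation) of how intermediate answers could be produced in parallel and aggregated for a COUNT query; `vqa_answer` is a hypothetical callable standing in for an OFA-style captioning/VQA model, and no specific OFA API is assumed.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable

def make_reasoner(vqa_answer: Callable[[object, str], str]) -> Callable[[object, str], int]:
    """Wrap a hypothetical VQA model (image, question) -> short text answer, e.g. '2' or 'no',
    into a reasoner that produces a numeric intermediate answer A_p for COUNT-style queries."""
    def reason(image, query: str) -> int:
        answer = vqa_answer(image, f"{query} Answer with a number.")
        try:
            return int(answer.strip())
        except ValueError:
            return 0  # indecisive answers such as 'many' are treated as 0 in this sketch
    return reason

def answer_count_query(retrieved_images: Iterable, query: str, reason) -> int:
    """Apply the reasoner in parallel over the retrieved set P(D, q) and aggregate by summing.
    For a boolean per-image condition the intermediate answers are 0/1, so the sum counts the 1s."""
    with ThreadPoolExecutor() as pool:
        intermediate = list(pool.map(lambda img: reason(img, query), retrieved_images))
    return sum(intermediate)
```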
For the Retriever, we employed the CLIP model [22]. These models are trained in an unsupervised, contrastive manner by matching captions and images. They take either text or images and align them in a shared latent space that can be used for later inferences and to measure their distance, with similar image-caption pairs being close together. We test on 8 different versions of CLIP, namely RN50, RN101, RN50x4, RN50x16, RN50x64, ViT-B/32, ViT-L/14, ViT-L/14@366px. CLIP's salient feature is that the created embeddings are static, meaning they do not depend on the query. This allows us to pre-compute the embeddings for all images beforehand, meaning that only the embedding for the query has to be computed at inference time. Once the embeddings are computed, a strategy is needed to select which documents are considered relevant (and passed to the reasoner) and which ones are not. To do this, we craft three strategies: (i) _TopK_: in this case, we compute the dot product between the embeddings of the documents and the query, we sort them, and we select the TopK documents. (ii) _Threshold_: we compute the cosine similarities between the embeddings of the text and the images, and we return all the documents for which the cosine similarity is greater than a certain threshold \(\tau\) that depends on the particular CLIP model we are using, lying in a range between 0.15 and 0.4. (iii) _Neural Selector_: here, we train a small neural network that, given the \(q\) and \(D\) embeddings, returns a binary outcome that indicates whether the document is relevant for the query or not and whether it should be returned. The actual number of parameters depends on the CLIP version employed, but it is always in the order of thousands. It is worth noticing that, while it is still much more scalable with respect to the large 1B parameters models the reasoner employs, this strategy requires a "dynamic" processing; namely, the decision on which documents to select relies on a neural model evaluating all q, D pairs. In a practical system, it is possible to circumvent some of the issues above by borrowing techniques from the literature on online aggregation literature [12]. In practice, we can sort the embedding of the images according to the dot product they have with the query. We then process them in batches of predetermined sizes \(w\). We stop once a specific tolerance criterion is met, namely when no more than \(c\) documents are predicted as relevant by the model. This leads us to our fourth strategy, which we call Mixed. As the name suggests, we mix two of the strategies already introduced, Neural Selector and TopK. Specifically, we take the set union of the TopK (With a small K) and Neural Selector documents to retrieve and to be passed onto the Reasoner. 
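As an illustration of the selection strategies above, the sketch below (our own; variable names and the `neural_selector` callable are hypothetical) assumes the CLIP image embeddings have been precomputed and L2-normalised, so that dot products coincide with cosine similarities.

```python
import numpy as np

def top_k(query_emb, doc_embs, k=100):
    """Strategy (i): indices of the k documents with the largest similarity to the query."""
    scores = doc_embs @ query_emb
    return np.argsort(-scores)[:k]

def threshold(query_emb, doc_embs, tau=0.25):
    """Strategy (ii): indices whose cosine similarity exceeds a model-dependent threshold tau."""
    scores = doc_embs @ query_emb
    return np.nonzero(scores > tau)[0]

def mixed(query_emb, doc_embs, neural_selector, k=20):
    """Strategy (iv): union of a small Top-K set and the documents flagged as relevant by the
    small binary classifier of strategy (iii), passed in here as a callable returning a boolean mask."""
    flagged = np.nonzero(neural_selector(query_emb, doc_embs))[0]
    return np.union1d(top_k(query_emb, doc_embs, k), flagged)
```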
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model** & \(\mu\)**F1** & \(\mu\)**Recall** & \(\mu\)**Precision** & **F1** & **Recall** & **Precision** \\ \hline RN50 & 0.315 \(\pm\) 0.002 & 0.819 \(\pm\) 0.003 & 0.195 \(\pm\) 0.002 & 0.320 \(\pm\) 0.018 & 0.731 \(\pm\) 0.035 & 0.302 \(\pm\) 0.026 \\ RN50x4 & 0.424 \(\pm\) 0.002 & 0.794 \(\pm\) 0.003 & 0.290 \(\pm\) 0.002 & 0.447 \(\pm\) 0.022 & 0.717 \(\pm\) 0.031 & 0.419 \(\pm\) 0.027 \\ RN50x16 & **0.440 \(\pm\) 0.002** & 0.791 \(\pm\) 0.003 & **0.305 \(\pm\) 0.002** & **0.478 \(\pm\) 0.023** & 0.710 \(\pm\) 0.029 & **0.457 \(\pm\) 0.028** \\ RN50x64 & 0.331 \(\pm\) 0.002 & 0.837 \(\pm\) 0.003 & 0.206 \(\pm\) 0.002 & 0.384 \(\pm\) 0.019 & 0.759 \(\pm\) 0.034 & 0.343 \(\pm\) 0.025 \\ RN101 & 0.344 \(\pm\) 0.002 & 0.873 \(\pm\) 0.003 & 0.214 \(\pm\) 0.002 & 0.388 \(\pm\) 0.021 & 0.809 \(\pm\) 0.028 & 0.317 \(\pm\) 0.024 \\ ViT-B/32 & 0.378 \(\pm\) 0.002 & 0.876 \(\pm\) 0.003 & 0.241 \(\pm\) 0.002 & 0.395 \(\pm\) 0.018 & 0.813 \(\pm\) 0.022 & 0.298 \(\pm\) 0.019 \\ ViT-L/14 & 0.324 \(\pm\) 0.002 & 0.931 \(\pm\) 0.002 & 0.196 \(\pm\) 0.001 & 0.329 \(\pm\) 0.015 & 0.894 \(\pm\) 0.018 & 0.219 \(\pm\) 0.013 \\ ViT-L/14@336px & 0.337 \(\pm\) 0.002 & **0.932 \(\pm\) 0.002** & 0.205 \(\pm\) 0.002 & 0.347 \(\pm\) 0.016 & **0.905 \(\pm\) 0.015** & 0.228 \(\pm\) 0.014 \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison of different Retriever models under the “Mixed” retrieval strategy. While CLIP’s versions featuring resnets as a backbone have higher F1 and precision scores, ViT-based models achieve higher recall. We opt for the latter, as it allows the Reasoner module to receive as much relevant information as possible, ultimately reducing the final pipeline error. ### Results In this section, we present the experimental evidence to support the ideas presented in this paper. First, we will show results that test the performance of single architecture components. Following that, we proceed to evaluate the entirety of our pipeline. Results for all metrics are reported together with their standard error. We start by evaluating our retriever strategy. We argue that, for our pipeline, a good retriever should have a high level of recall since every relevant document that is failed to be retrieved will produce an error that will propagate to the subsequent components and onto the final response. For this reason, we explicitly express a preference for models and strategies obtaining a high recall. We tested each of the 8 CLIP model versions on each of the 4 crafted strategies. For the sake of space efficiency, we only show results for the various models in the chosen final setting - mixed strategy - and the comparison between different strategies using the best model - ViT-L/14@366px. In Table 1, you can see the performance of the various models in the Mixed Strategy setting. The first thing we can notice is that while there is a shift in scale between \(\mu\) and macro metrics, at least for precision and recall, the ranking between different models does not really change. Furthermore, While ViT-L/14@366px is the best model neither with respect to F1 nor precision, it is the best model when considering a recall. In fact, it consistently beat other models in that regard, with the exception of its twin ViT-L/14, with which the difference in terms of performance is minimal. Since the difference in the number of parameters and general complexity is almost unnoticeable, too, we saw no reason not to proceed with the former. 
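For clarity on the two scales reported in the tables, the following sketch shows one standard way of computing micro- (here written μ) versus macro-averaged precision, recall, and F1 from per-query counts; this reflects our reading of the μ/macro distinction and is not code from the paper.

```python
import numpy as np

def micro_macro_prf(tp, fp, fn):
    """tp, fp, fn: arrays of per-query counts of true positives, false positives, false negatives.
    Micro-averaging pools the counts over queries; macro-averaging averages per-query scores."""
    tp, fp, fn = (np.asarray(a, dtype=float) for a in (tp, fp, fn))
    # Micro: precision/recall/F1 from pooled counts
    p_mi = tp.sum() / max(tp.sum() + fp.sum(), 1.0)
    r_mi = tp.sum() / max(tp.sum() + fn.sum(), 1.0)
    f_mi = 2 * p_mi * r_mi / max(p_mi + r_mi, 1e-12)
    # Macro: per-query scores, then averaged
    p_q = tp / np.maximum(tp + fp, 1.0)
    r_q = tp / np.maximum(tp + fn, 1.0)
    f_q = 2 * p_q * r_q / np.maximum(p_q + r_q, 1e-12)
    return {"micro": (p_mi, r_mi, f_mi), "macro": (p_q.mean(), r_q.mean(), f_q.mean())}
```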
In Table 3, we report results for the 4 retrieval strategies we tested. Once again, while Threshold offers the best precision, the Neural Selector, particularly the Mixed Strategy, offers the best overall results with comparable F1 and much higher Recall. In Table 4, we show the difference in performance between the various OFA versions we tested. We only show results for the COUNT query type for the sake of not being repetitive since the difference between these models transfers across tasks. In this case, unlike the retriever, we see significant differences in results between the model versions tested. Larger models clearly outperform smaller ones by a wide margin. Moreover, OFA-huge outperforms OFA-large in terms of total error and \(\Delta\) error, while the latter achieves higher accuracy. We choose OFA-large for two reasons: (i) we favor accuracy over the other two metrics, and (ii) it has half the parameters with respect to the huge version (0.5B vs. 1B). We also report on a finetuned version of OFA-large (OFA-large FT), obtained by finetuning OFA-large on the train set for 10 epochs with a learning rate of \(5e-5\) on the same task. Finetuning the OFA model significantly boosts its performance on the MMNDB task. The metrics tracked, though broken down into their components, are the same as in the test whose results are reported in Table 2. Here, we test both the reasoner capabilities and the full pipeline. We perform our testing under 4 different scenarios, considering both a stock model and a finetuned one, and reporting on 10 different metrics. We use the PerfectIR setting as a baseline. In this setting, the set of documents retrieved \(D_{r}\) is the set of documents that are actually relevant, taken directly from the ground truth. This, of course, is an ideal setting in which we assume a perfect retriever, and it acts as a sort of upper bound for our method. Full Pipeline instead refers to our actual setting, in which our mixed strategy retriever passes the set of retrieved documents. The metrics we collect are of two kinds: those with the word total as antecedent refer to the whole pipeline; the others, without the word total in them, are meant as a test on the intermediate answers \(A_{p}\) produced by the reasoner. In particular, by accuracy, we mean the percentage of intermediate answers \(A_{p}\) that are exactly equal to their ground truth value. This is then averaged over all queries. We then further divide this computation into two disjoint sets, namely, accuracy for true positives (TP), documents in \(D_{r}\) that are actually relevant, and accuracy on false positives (FP), documents in \(D_{r}\) that should not have been retrieved. Please note that in the case of PerfectIR, the set of FP documents is empty by definition. Since the task at hand is that of the query type COUNT, we are also interested in knowing how close an intermediate answer is to the ground truth value. We track this with the metric \(\Delta\) error. 
Here, similarly to the \begin{table} \begin{tabular}{l|c c c c|c c c|c c c} \hline \hline & \multicolumn{4}{c|}{**Total Error \(\downarrow\)**} & \multicolumn{4}{c|}{\(\Delta\) Error \(\downarrow\)} & \multicolumn{4}{c}{**Accuracy \(\uparrow\)**} \\ \hline **Stock** & Error & Error TP & Error FP & Error FN & \(\Delta\) Error & \(\Delta\) Error TP & \(\Delta\) Error FP & Accuracy & Accuracy TP & Accuracy FP \\ \hline Perfect IR & \(\mathbf{0.46\pm 0.07}\) & \(0.46\pm 0.07\) & N/A & N/A & \(4.64\pm 1.94\) & \(4.64\pm 1.91\) & N/A & \(0.60\pm 0.02\) & \(0.60\pm 0.02\) & N/A \\ Noisy IR & \(0.77\pm 0.16\) & \(0.46\pm 0.07\) & \(\mathbf{0.31\pm 0.15}\) & N/A & \(\mathbf{2.66\pm 1.04}\) & \(4.64\pm 1.91\) & \(\mathbf{0.31\pm 0.02}\) & \(\mathbf{0.81\pm 0.01}\) & \(0.60\pm 0.02\) & \(0.92\pm 0.01\) \\ Dmg. IR & \(1.24\pm 0.32\) & \(0.46\pm 0.07\) & \(0.78\pm 0.32\) & N/A & \(4.22\pm 1.41\) & \(4.64\pm 1.91\) & \(2.31\pm 1.15\) & \(0.70\pm 0.02\) & \(0.60\pm 0.02\) & \(0.76\pm 0.02\) \\ Full & \(1.27\pm 0.17\) & \(\mathbf{0.42\pm 0.07}\) & \(0.76\pm 0.13\) & \(0.09\pm 0.02\) & \(3.33\pm 1.16\) & \(4.83\pm 2.03\) & \(1.96\pm 0.80\) & \(0.73\pm 0.02\) & \(\mathbf{0.61\pm 0.02}\) & \(0.75\pm 0.02\) \\ \hline **FImodel** & \multicolumn{4}{c}{} \\ \hline Perfect IR & \(\mathbf{0.14\pm 0.01}\) & \(0.14\pm 0.01\) & N/A & N/A & \(1.46\pm 0.10\) & \(1.46\pm 0.10\) & N/A & \(0.67\pm 0.02\) & \(0.67\pm 0.02\) & N/A \\ Noisy IR & \(0.22\pm 0.01\) & \(0.14\pm 0.01\) & \(0.08\pm 0.01\) & N/A & \(\mathbf{0.90\pm 0.06}\) & \(1.46\pm 0.10\) & \(0.43\pm 0.05\) & \(\mathbf{0.86\pm 0.01}\) & \(0.67\pm 0.02\) & \(0.93\pm 0.01\) \\ Ding. IR & \(0.54\pm 0.05\) & \(0.14\pm 0.01\) & \(0.40\pm 0.05\) & N/A & \(1.25\pm 0.08\) & \(1.46\pm 0.10\) & \(1.04\pm 0.09\) & \(0.73\pm 0.01\) & \(0.67\pm 0.02\) & \(0.73\pm 0.02\) \\ Full & \(0.99\pm 0.06\) & \(\mathbf{0.11\pm 0.01}\) & \(0.79\pm 0.06\) & \(0.09\pm 0.02\) & \(1.10\pm 0.07\) & \(\mathbf{1.42\pm 0.10}\) & \(0.99\pm 0.07\) & \(0.72\pm 0.01\) & \(\mathbf{0.69\pm 0.02}\) & \(0.72\pm 0.02\) \\ \hline \hline \end{tabular} \end{table} Table 2. Results performance on the query type count. The PerfectIR setting acts as an ideal upper bound, showing the full potential of the MANDB framework. The Full Pipeline (Full), on the other hand, shows excellent accuracy and \(\Delta\) error but a total error that, while being good, is not at the level of PerfectIR. We empirically show that this is not caused by noise introduced by the retriever module, as indicated by the excellent results achieved in the NoisyIR setting. Instead, this is caused by damaging documents picked up by the retriever that trick the reasoner resulting in a large False Positives error and, ultimately, a large total error. accuracy metric, we register the mean absolute deviation between the intermediate answer \(A_{\rho_{i}}\) and the ground truth, averaged over all queries. Once again, we spun this off into its two components, namely TP and FP. Under these two metrics, we can see that the Full Pipeline results are competitive, if not better, with the PerfectIR version. Upon further inspection, we can also deduct the cause. In fact, in Full Pipeline, false positive documents are added to the computations. Many of these documents are actually easier to deal with since they do not contain the object of interest and can produce an intermediate answer of 0, raising both the accuracy and the \(\Delta\) error of the Full Pipeline version. 
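For concreteness, the per-query quantities just defined can be computed as in the short sketch below (our notation, not the authors' evaluation code); `intermediate` holds the reasoner outputs \(A_p\) over the retrieved set \(D_r\), `truth` the ground-truth counts for those same documents (0 for false positives), and `true_total` the ground-truth answer of the COUNT query over the whole database.

```python
import numpy as np

def count_query_metrics(intermediate, truth, true_total):
    """Per-query COUNT metrics following the definitions in the text; the paper reports these
    averaged over all queries (with the total error additionally normalised by cardinality)."""
    intermediate = np.asarray(intermediate, dtype=float)
    truth = np.asarray(truth, dtype=float)
    accuracy = float(np.mean(intermediate == truth))            # exact matches of A_p
    delta_error = float(np.mean(np.abs(intermediate - truth)))  # mean absolute deviation of A_p
    total_error = abs(float(intermediate.sum()) - true_total)   # error of the aggregated count
    return {"accuracy": accuracy, "delta_error": delta_error, "total_error": total_error}
```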
In our experimenting, we also noticed that the stock model was struggling to produce useful intermediate results in some instances. For instance, the model would produce indecisive answers like "many" and "few". Using some prompt engineering, explicitly asking the model to "Answer with a number" alleviated the problem but did not totally eradicate it. For this reason, as mentioned earlier, we produced a finetuned version of the reasoner, which improves the accuracy score and dramatically reduces the \(\Delta\) error. Finally, we report results on the total error metric. Under this metric, we consider the final outcome of the pipeline \(o\), and we compute its absolute deviation from the ground truth, averaged over all queries, and normalized by cardinality. The PerfectIR version achieves excellent results for this task, fully demonstrating the feasibility of the task we propose in this paper. Full Pipeline, while achieving good scores, lags behind the PerfectIR setting. To further investigate this difference in performance, we divide the total error into its components. Once again, TP refers to documents correctly retrieved, FP to documents wrongly retrieved, and false negatives (FN) to documents that should have been retrieved but have not (These last two components are null in the case of PerfectIR by definition). We notice how the total error TP is actually comparable between the two versions, slightly lower in the case of Full Pipeline since a few of the more challenging documents are not retrieved. Upon further inspection, we notice that the total error FN is almost negligible, meaning that the gap in total error is not caused by documents not being retrieved. From the experimental evidence, it is clear that this gap is actually caused by false positives, documents that should not have been retrieved, but they were, nonetheless. To further investigate this phenomenon, we devise an additional setting called NoisyIR. In this setting, we assume \(D_{r}\) is composed, as in PerfectIR, of the set of relevant documents to which we add, however, some non-relevant documents (300) taken at random. We notice that the NoisyIR setting performs only slightly worse than \begin{table} \begin{tabular}{l c c c} \hline \hline **Model** & **Total Error \(\downarrow\)** & \(\Delta\) Error \(\downarrow\) & **Accuracy \(\uparrow\)** \\ \hline OFA-base & 0.831 \(\pm\) 0.024 & 2.876 \(\pm\) 0.217 & 0.094 \(\pm\) 0.014 \\ OFA-medium & 0.871 \(\pm\) 0.013 & 2.869 \(\pm\) 0.180 & 0.074 \(\pm\) 0.005 \\ OFA-large & 0.460 \(\pm\) 0.073 & 4.645 \(\pm\) 1.944 & **0.597 \(\pm\) 0.022** \\ OFA-huge & **0.392 \(\pm\) 0.025** & 2.363 \(\pm\) 0.179 & 0.533 \(\pm\) 0.023 \\ \hline OFA-large FT & **0.138 \(\pm\) 0.011** & **1.455 \(\pm\) 0.100** & **0.668 \(\pm\) 0.018** \\ \hline \hline \end{tabular} \end{table} Table 4. We test different neural models to be used as the building block for the reasoner on the PerfectIR setting. Smaller models clearly fail to compete with their larger counterparts. OFA-huge achieves a smaller total and \(\Delta\) error, while OFA-large has higher accuracy. We choose the latter as we favor accuracy over the other metrics and because it has half the amount of parameters. We also report on a finetuned version that significantly improves over the stock versions. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Selection Strategy** & \(\mu\)**F1** & \(\mu\)**Recall** & \(\mu\)**Precision** & **F1** & **Recall** & **Precision** \\ \hline Top-K & 0.211 \(\pm\) 0.001 & 0.683 \(\pm\) 0.004 & 0.125 \(\pm\) 0.001 & 0.201 \(\pm\) 0.009 & 0.852 \(\pm\) 0.018 & 0.125 \(\pm\) 0.009 \\ Threshold & **0.351 \(\pm\) 0.003** & 0.226 \(\pm\) 0.003 & **0.791 \(\pm\) 0.006** & **0.445 \(\pm\) 0.029** & 0.377 \(\pm\) 0.030 & **0.776 \(\pm\) 0.022** \\ Neural & 0.337 \(\pm\) 0.002 & **0.932 \(\pm\) 0.002** & 0.205 \(\pm\) 0.002 & 0.343 \(\pm\) 0.016 & 0.898 \(\pm\) 0.018 & 0.235 \(\pm\) 0.016 \\ Mixed & 0.337 \(\pm\) 0.002 & **0.932 \(\pm\) 0.002** & 0.205 \(\pm\) 0.002 & 0.347 \(\pm\) 0.016 & **0.905 \(\pm\) 0.015** & 0.228 \(\pm\) 0.014 \\ \hline \hline \end{tabular} \end{table} Table 3. Comparison among different retrieval strategies. The Threshold strategy achieves higher F1 and precision scores, while the “Mixed” strategy has a higher recall. Once again, we opt for the strategy that achieves higher recall, namely Mixed, as it allows the Reasoner module to receive as much relevant information as possible, ultimately reducing the final pipeline error. \begin{table} \begin{tabular}{l c c c} \hline \hline **Stock** & **Total Error \(\downarrow\)** & \(\Delta\) Error \(\downarrow\)** & **Accuracy \(\uparrow\)** \\ \hline Perfect IR & \(2.845\pm 1.759\) & \(29.263\pm 17.598\) & 0.188 \(\pm\) 0.044 \\ Noisy IR & \(4.576\pm 2.486\) & \(41.438\pm 21.343\) & 0.200 \(\pm\) 0.045 \\ Dmg. IR & \(4.258\pm 2.035\) & \(53.325\pm 23.933\) & 0.188 \(\pm\) 0.044 \\ Full & \(4.280\pm 2.014\) & \(53.063\pm 24.027\) & **0.213 \(\pm\) 0.046** \\ \hline \hline **FTmodel** & & & \\ \hline Perfect IR & \(\textbf{0.229}\pm\textbf{0.035}\) & \(1.813\pm 0.271\) & \(\textbf{0.575}\pm\textbf{0.056}\) \\ Noisy IR & \(\textbf{0.229}\pm\textbf{0.035}\) & \(\textbf{1.800}\pm\textbf{0.273}\) & 0.550 \(\pm\) 0.055 \\ Dmg. IR & \(0.303\pm 0.060\) & \(2.100\pm 0.320\) & 0.525 \(\pm\) 0.056 \\ Full & \(0.317\pm 0.056\) & \(2.263\pm 0.342\) & 0.563 \(\pm\) 0.055 \\ \hline \hline \end{tabular} \end{table} Table 5. Results for the query type MAX. It can be immediately noticed how much the finetuning process improves the performance of the MAX query type. In particular, we notice that finetuned models are less prone to produce indecisive intermediate answers such as “many” and “a lot”, which are highly relevant to this query. We also notice how close the Full Pipeline setting is to PerfectIR compared to other queries. We argue this is due to the reduced impact of damaging documents, i.e., it is unlikely that a damaging document will be a likely candidate for MAX. the PerfectIR setting, showing that our model is actually robust to noise. Following this experiment, we devised a new setting, identical to NoisyIR, but in which the negative documents are not taken at random anymore. In fact, we take the non-relevant document whose CLIP embedding with the query is the highest. We call this setting DamagingIR. Results clearly show that these documents are able to "trick" the reasoner into generating wrong intermediate answers, causing a large FP error and ultimately a more significant total error resulting in a performance difference between the PerfectIR version and the Full Pipeline one. DamagingIR has already been observed by (Zhou et al., 2017) and, to the best of the authors' knowledge, has not been yet fully addressed. 
At the end of this Section, we provide a more complete commentary on this issue. In Table 6, we show results for the IN query type. This query answers questions of the type "In how many pictures there are [object]?". We consider two metrics in this scenario that mirror the ones defined for the COUNT setting. First, we consider accuracy, that is, the percentage of time the intermediate results \(A_{p_{i}}\) are exactly equal to their respective ground truths. The total error indicates the absolute deviation of the total number of documents found satisfying the condition from its ground truth, later averaged over all queries and normalized by cardinality. We can immediately notice that the finetuned version of the reasoner generally performs better with respect to its stock counterpart. We also notice the positive results obtained by the Full Pipeline, even though they are lower than the near-perfect PerfectIR. Once again, even more clearly than before, we can attribute this reduction in performance to DamagingIR, that is, to false positive documents that manage to "trick" the model into thinking that there is an object in the image when there really is not, as evidenced by the drop in performance observed under this regime. Finally, we report results for the MAX query type, which returns the document with the most instances of a particular object in the collection. We test on the same 4 scenarios and report on three metrics. \(\Delta\) and total error mirror those of the previous settings, while total accuracy is the percentage of queries in which the correct document is found. This is the scenario that shows the most significant difference between the stock reasoner and its finetuned version. We attribute this gap to an issue cited earlier, in which for pictures with high instances of a particular object, the model would produce indecisive answers like "many", a problem that the finetuned model does not feature. Furthermore, we notice that the difference between the PerfectIR and the Full Pipeline version is rather small. This stems from the fact that, unlike in the two other scenarios, false positive documents are unlikely to be appealing candidates for the MAX type of query, failing to impact the final outcome. We also register that, even when the model is not able to retrieve the correct max document, the picture found has a comparable number of instances, as indicated by the total error. Overall, the results are very promising and fully show the potential for Multimodal Neural Databases. We managed to build an effective and efficient retrieval system with a high recall. The reasoner module, and the pipeline as a whole, show good performance, with low error and high accuracy, coupled with resistance to noise. However, like other systems in IR, it is vulnerable to DamagingIR, as shown by the increased error caused by false positives. We argue that by tackling this issue we can further increase the performance of MMNDBs and bring it close to the optimum. ## 5. Future Research Directions The introduction of Multimodal Neural Databases paves the way toward new and exciting research directions; in this section, we proceed to discuss some of the more interesting ones. In this paper, we have shown the feasibility of the proposed task but have yet to explore many open problems. First and foremost, a key feature in any database system is the ability to update its information. In a typical database system, one would expect to be able to remove, add, or modify the information as they wish. 
This is not straightforward under our current paradigm and needs more research effort. On this line, it would be crucial to account for the importance of time in databases. I could ask the database questions like "What is the place I visited the most between 1 pm and 3 pm this year?" Furthermore, we have restricted ourselves to only two modalities, and in particular, a database made strictly of images. Expanding the available modalities is a clear path with obvious benefits. Additionally, we could consider not only documents but documents and their meta-data. To provide an example, whenever we take a picture with our smartphone, we collect a variety of information, such as the location and time, which would definitely be helpful for a database of this kind. To remain in the field of smartphones, recently, video-clip sharing has become very popular among social network users. Asking database-like queries on videos is an open problem that presents many challenges. Above all, it is crucial to be able to identify entities across frames to be able to answer queries effectively. While recognizing an entity (like a person) is generally feasible for text, it is much more complex when considering different modalities. Solving this will be critical for the development of MMNDBs. In our presentation, we stressed the fact that the proposed architecture is not the only possible way of solving this problem. In fact, recently, we have witnessed the power of large foundational models to solve a wide array of tasks, with chatGPT and GPT-x models, in general, leading the way (Kang et al., 2019). We believe that these large foundational models could bring an advance to this field as well. However, this is not straightforward, and some issues should be addressed. These models require a large amount of data to be pre-trained; this begs the question of how one could separate the memory acquired during training from the actual multimodal database to avoid knowledge contamination. By knowledge contamination, we mean the known phenomenon whereby data used during pretraining leaks into generated answers in a completely unrelated context. Knowledge contamination has proved troublesome in many applications, with some systems allegedly revealing private keys or even personal phone numbers.

\begin{table} \begin{tabular}{l l l} \hline \hline **Stock Model** & **Total Error \(\downarrow\)** & **Accuracy \(\uparrow\)** \\ \hline Perfect IR & \(\mathbf{0.131\pm 0.014}\) & \(0.869\pm 0.014\) \\ Noisy IR & \(0.404\pm 0.176\) & \(\mathbf{0.906\pm 0.007}\) \\ Damaging IR & \(0.829\pm 0.357\) & \(0.811\pm 0.013\) \\ Full & \(0.793\pm 0.150\) & \(0.672\pm 0.018\) \\ \hline **FTmodel** & & \\ \hline Perfect IR & \(\mathbf{0.060\pm 0.007}\) & \(0.940\pm 0.007\) \\ Noisy IR & \(0.085\pm 0.007\) & \(\mathbf{0.946\pm 0.004}\) \\ Damaging IR & \(0.436\pm 0.054\) & \(0.838\pm 0.008\) \\ Full & \(0.330\pm 0.015\) & \(0.877\pm 0.007\) \\ \hline \hline \end{tabular} \end{table} Table 6. Results for the query type IN. Once again, we observe a gap in performance for the finetuned models. In particular, the finetuned version produces answers that are much more robust to noise. Moreover, while results are generally satisfactory, we observe an increase in error for the Full Pipeline. We attribute this to damaging documents that trick the reasoner into mispredicting the presence of an object, as evidenced by the high loss for the DamagingIR setting.
Furthermore, true multimodality in these large models remains an open research direction and a major roadblock toward conversational multimodal systems. Finally, we have taken Multimodal Neural Database in its most general setting. However, one might be interested in specific scenarios with more precise guidelines and goals. For instance, there may be cases in which one has a precise idea of which kind of queries are to be expected. In that case, strategies could be crafted to optimize the system. In traditional database systems, for example, indexing or creating views for common queries is a prevalent practice. Creating equivalent procedures for MMNDB is still unexplored. ## 6. Related Work **Multimedia Information Retrieval (MMIR)** Bridging the gap between multimodal unstructured data and structured database systems has always been a central key endeavor in Information Retrieval (Henderson et al., 2017). The former is vastly highly available on the web but challenging to digest and query compared to the latter. Particular focus has been posed on content-based image retrieval (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017) and recently on cross-modal retrieval (Krause et al., 2017; Krause et al., 2017), which have been made possible with the recent advancements in deep learning (Krause et al., 2017). Specifically, there has been an explosion of such approaches for Image-text retrieval (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017). However, these systems are primarily concerned with retrieving relevant documents (e.g., images) based on a given query (e.g., text). In contrast, MMNDBs focus on answering database-like queries on large data collections, which current cross-modal retrieval methods cannot achieve. **Multimodal Neural Models** There has been a recent surge in the development of multimodal neural models that can handle data in different forms, primarily images, and text, for various applications. Usually, this is performed via a single neural multimodal encoder (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017; Krause et al., 2017) or via different encoders for each modality that is jointly aligned via a shared space (Krause et al., 2017; Krause et al., 2017). In MMNDBs, we take advantage of this characteristic by using a separate encoder system as a Retriever to precompute and index visual tokens, thus reducing computation and time at runtime by only using the text encoder to compute the textual embedding of the query. However, directly applying these neural models to the MMNDB task would not be scalable due to the high computational cost. We use them as components in our architecture, building on their successes in other vision-language tasks. **Visual Question Answering (VQA)** Most of these multimodal vision-text models are evaluated on the task of visual question answering (Krause et al., 2017), where the goal is to generate an accurate and semantically coherent response based on a question about an image. Usually, these involve using reasoning and other capacities that are non-trivial, even for current neural architectures. Compared to the task of MMNDBs, VQA is defined on a single image-question pair and requires reasoning over the image to answer the question. 
Closer to the task of MMNDBs are Open-domain Question Answering (OpenQA) (Krause et al., 2017) and its multimodal variant WebQA (Krause et al., 2017), which aim to answer natural language questions over large-scale unstructured textual documents. Compared to the task of MMNDBs, their scope is different and involves multimodal, open-domain question answering, while we focus on efficiently answering database-like queries over a collection of documents in different formats (e.g., images). **Answering Database Queries** There has been substantial effort put into converting queries expressed in natural language into SQL queries for databases with known structure (Krause et al., 2017; Krause et al., 2017; Krause et al., 2017), and there have also been advancements in adapting this approach for semi-structured data and knowledge bases (Krause et al., 2017; Krause et al., 2017). Recently, Thorne et al. (Thorne et al., 2017; Thorne et al., 2017) proposed NeuralDB as a way to perform database queries over a collection of textual documents without the need to translate data or queries into a predefined database schema, using parallel neural techniques instead. Their approach is very effective, but it: (i) requires preprocessing and analysis for the aggregation operator; (ii) is limited to simple queries; and (iii) handles data only in textual format. In this paper, we build on this line of research and tackle the third limitation, extending the originally proposed architecture to multimodal document processing. **Retrieval-augmented models** Recently there has been a surge of interest in the line of research concerning retrieval-augmented neural models (Krause et al., 2017). Most of the current models focus on augmenting language models' capabilities with an external memory or retrieval mechanism that retrieves relevant documents given an input query, reducing the number of parameters and non-factual errors (Thorne et al., 2017). ## 7. Conclusion In this paper, we have proposed to expand the field of Multimedia Information Retrieval through the introduction of Multimodal Neural Databases. MMNDBs promise to answer complex database-like queries that involve reasoning over multiple modalities at scale. We have demonstrated the feasibility and potential of this system by proposing a first principled approach to solve this problem with an architecture composed of three modules - retriever, reasoner, and aggregator - and performing a rich set of experiments. We have discussed potential future research directions that could stem from the system introduced in this paper. MMNDBs set a new research agenda that strives to simultaneously act as a bridge between information retrieval and database systems and reduce the gap between the two. We believe MMNDBs have the potential to substantially advance not only the field of MMIR but the general field of Information Retrieval in its entirety. ## 8. Acknowledgment This work was partially supported by projects FAIR (PE0000013) and SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU and by ERC Starting Grant No. 802554 (SPECGEO) and PRIN 2020 project n.2020TA3K9N "LEGO.AI".
2304.09332
Nonexistence of Solutions to the Coupled Generalized Jang Equation/Zero Divergence System
In [5], Bray and Khuri proposed coupling the generalized Jang equation to several different auxiliary equations. The solutions to these coupled systems would then imply the Penrose inequality. One of these involves coupling the generalized Jang equation to $\overline{div}(\phi q)=0$, as this would guarantee the non-negativity of the scalar curvature in the Jang surface. This coupled system of equations has not received much attention, and we investigate its solvability. We prove that there exists a spherically symmetric initial data set for the Einstein equations for which there do not exist smooth radial solutions to the system having the appropriate asymptotics for application to the Penrose inequality.
Jaroslaw S. Jaracz
2023-04-18T22:39:33Z
http://arxiv.org/abs/2304.09332v1
# Nonexistence of solutions to the coupled generalized Jang equation/zero divergence system ###### Abstract. In [5], Bray and Khuri proposed coupling the generalized Jang equation to several different auxiliary equations. The solutions to these coupled systems would then imply the Penrose inequality. One of these involves coupling the generalized Jang equation to \(\overline{div}(\phi q)=0\), as this would guarantee the non-negativity of the scalar curvature in the Jang surface. This coupled system of equations has not received much attention, and we investigate its solvability. We prove that there exists a spherically symmetric initial data set for the Einstein equations for which there do not exist smooth radial solutions to the system having the appropriate asymptotics for application to the Penrose inequality. ## 1. Introduction and Statement of Results ### The Penrose Conjecture The Penrose inequality has been one of the most famous open conjectures in mathematical general relativity. Conjectured by Roger Penrose in the 1970s using a heuristic argument based on the established viewpoint of gravitational collapse [18], it relates the total mass \(m\) of a spacetime to the surface area \(A\) of a black hole in the spacetime via the inequality \[m\geq\sqrt{\frac{A}{16\pi}}. \tag{1.1}\] A special case, known as the Riemannian Penrose inequality, was proven in the late 1990s for an asymptotically flat initial data set \((M,g)\) by Huisken and Ilmanen using a weak version of the inverse mean curvature flow (IMCF) [12], and independently by Hubert Bray using a conformal flow of metrics [3]. In these cases, the black hole is represented by a minimal surface and the initial data set must have non-negative scalar curvature \(R\geq 0\) (or satisfy some assumptions which imply this condition). The \(m\) in these cases is given by the ADM energy. For the definitions of the ADM energy and ADM mass, see (2.2) and (2.4). The Penrose inequality for a general asymptotically flat initial data set \((M,g,k)\), where \(k\) is the extrinsic curvature, remains an open problem. It had previously been proven in the case of spherical symmetry where \(m\) is given by the ADM energy, assuming the so-called dominant energy condition and where the black hole is mathematically represented by an outermost future or past apparent horizon [11], with a different proof further establishing rigidity for the inequality given in [4] using a Jang equation approach. The inequality has been extended to include charge and angular momentum and proven under certain conditions, but all of these require some kind of non-negativity for the scalar curvature. See for example [8, 14, 16, 17]. A popular mathematically precise formulation of the Penrose conjecture is the following: **Conjecture** (Penrose Inequality).: _Let \((M,g,k)\) be an asymptotically flat initial data set satisfying appropriate fall-off conditions and the dominant energy condition with boundary \(\partial M\) consisting of an outermost apparent horizon, with possibly multiple components. Let \(N\) be any component of the boundary and let \(A=A_{min}(N)\) denote the outermost minimal area enclosure of the component \(N\). Then_ \[E_{ADM}\geq\sqrt{\frac{A}{16\pi}} \tag{1.2}\] _where \(E_{ADM}\) is the ADM energy._ Here apparent horizons play the roles of black holes. There is also a formulation where the ADM energy is replaced by the ADM mass. We mention that under weaker energy conditions the Penrose inequality does not hold.
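As a quick consistency check, recalled here only for orientation and not used in the arguments below, the conjectured bound is saturated by the canonical example of a time-symmetric Schwarzschild slice: for \[g=\left(1-\frac{2m}{r}\right)^{-1}dr^{2}+r^{2}d\Omega^{2},\qquad k=0,\qquad r\geq 2m,\] the boundary \(r=2m\) is an outermost minimal sphere with area \(A=4\pi(2m)^{2}=16\pi m^{2}\) and \(E_{ADM}=m\), so that \(E_{ADM}=\sqrt{A/16\pi}\) and (1.2) holds with equality.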
One of the key ingredients for the Penrose inequality is the black hole area law, which holds under the weaker assumption of the weak energy condition. However, in this setting there exists a counter-example satisfying all the assumptions of the above conjecture, with the dominant energy condition replaced by the weak energy condition, see [15]. ### The Generalized Jang Equation Here and for the remainder of the paper, given some arbitrary function \(f\), we write \[\partial_{x^{i}}f=\partial_{i}f=f_{i}\] for the partial derivatives. Also, if \(f=f(r)\) is a function of a single variable, we write \(f^{\prime}=f_{r}\) when convenient. The Jang equation was proposed by Jang in [13] and was successfully used by Schoen and Yau in [20] to prove the positive energy theorem assuming the dominant energy condition. The idea of the proof was to solve the Jang equation, which would give a hypersurface \(\Sigma\) in the product manifold \((M\times\mathbb{R},g+dt^{2})\) as the graph of a certain function \(f\), and, after a further conformal deformation, this surface would have positive scalar curvature and the same ADM energy as the original data set. One could then apply the positive energy theorem which had been proven in the case of positive scalar curvature [19]. One might hope that, having established the Riemannian Penrose inequality, the same approach could be used. However, it was pointed out that the original Jang equation was not suited for this, for a variety of reasons, but nevertheless it remained a tantalizing idea. Hence, in [5], Bray and Khuri proposed a modification of the Jang equation, presented below. The original Jang equation is just the special case of \(\phi=1\). Given an initial data set \((M,g,k)\), one looks for the hypersurface \(\Sigma\), referred to as the Jang surface, given by the graph \(t=f(x)\) inside the warped product space \((M\times\mathbb{R},g+\phi^{2}dt^{2})\). One then looks for a surface satisfying the equation, called the generalized Jang equation, \[H_{\Sigma}-Tr_{\Sigma}K=0 \tag{1.3}\] where \(H_{\Sigma}\) is the mean curvature of \(\Sigma\) and \(Tr_{\Sigma}K\) denotes the trace of \(K\) over \(\Sigma\). Here \(K\) is a nontrivial extension of the initial data. Letting \(\partial_{x^{i}}=\partial_{i}\) for \(1\leq i\leq 3\) and \(\partial_{x^{4}}=\partial_{t}\), the extension is given by \[\begin{split} K(\partial_{i},\partial_{j})=K(\partial_{j},\partial_ {i})=k(\partial_{j},\partial_{i})\quad\text{for}\quad 1\leq i,j\leq 3\\ K(\partial_{i},\partial_{t})=K(\partial_{t},\partial_{i})=0 \quad\text{for}\quad 1\leq i\leq 3\\ K(\partial_{t},\partial_{t})=\frac{\phi^{2}g(\nabla f,\nabla\phi )}{\sqrt{1+\phi^{2}|\nabla f|^{2}}}\end{split} \tag{1.4}\] where \(x^{i},i=1,2,3\) are local coordinates on \(M\). In local coordinates, the generalized Jang equation takes the form \[\left(g^{ij}-\frac{\phi^{2}f^{i}f^{j}}{1+\phi^{2}|\nabla f|^{2}}\right)\left( \frac{\phi\nabla_{ij}f+\phi_{i}f_{j}+\phi_{j}f_{i}}{\sqrt{1+\phi^{2}|\nabla f |^{2}}}-k_{ij}\right)=0 \tag{1.5}\] where \(f^{j}=g^{ij}f_{i}\). The tangent vectors to \(\Sigma\) are given by \[X_{i}=\partial_{i}+f_{i}\partial_{t}\] and hence the induced metric on \(\Sigma\) is given by \[\bar{g}=g+\phi^{2}df^{2}. \tag{1.6}\] The inverse metric is given by \[\bar{g}^{ij}=g^{ij}-\frac{\phi^{2}f^{i}f^{j}}{1+\phi^{2}|\nabla f|^{2}}.
\tag{1.7}\] The unit normal to \(\Sigma\) is given by \[N=\frac{\nabla f-\phi^{-2}\partial_{t}}{\sqrt{\phi^{-2}+|\nabla f|^{2}_{g}}}\] where \(\nabla\) denotes the covariant derivative with respect to the \(g\) metric. We denote by \(A\) the second fundamental form of \(\Sigma\) in \(M\times\mathbb{R}\) and by \(\overline{div}\) the divergence operator with respect to \(\bar{g}\). We also define the 1-forms \(q\) and \(w\) by \[w_{i}=\frac{\phi f_{i}}{\sqrt{1+\phi^{2}|\nabla f|^{2}}},\quad q_{i}=\frac{ \phi f^{j}}{\sqrt{1+\phi^{2}|\nabla f|^{2}}}\left(A_{ij}-(K|_{\Sigma})_{ij} \right), \tag{1.8}\] where \(K|_{\Sigma}\) is the restriction of \(K\) to \(\Sigma\). One then has the following key formula for the scalar curvature of \((\Sigma,\bar{g})\): \[\overline{R}=2\left(\mu-J(w)\right)+|A-K|_{\Sigma}|^{2}+2|q|^{2}-2\phi^{-1} \overline{div}(\phi q). \tag{1.9}\] This is known as the generalized Schoen-Yau identity, see [4, 5]. In the case where \(\overline{R}\geq 0\), the Riemannian Penrose inequality can be applied to the Jang surface. ### The coupled system and the main theorem Assuming the dominant energy condition, all the terms in (1.9) are non-negative, except possibly the last one. This then naturally leads to the coupled system of equations \[\begin{split}\left(g^{ij}-\frac{\phi^{2}f^{i}f^{j}}{1+\phi^{2}| \nabla f|^{2}}\right)\left(\frac{\phi\nabla_{ij}f+\phi_{i}f_{j}+\phi_{j}f_{i}} {\sqrt{1+\phi^{2}|\nabla f|^{2}}}-k_{ij}\right)&=0\\ \overline{div}(\phi q)&=0.\end{split} \tag{1.10}\] Since \(q\) depends on \(f\) and \(\phi\), this is a system of two equations in those two unknown functions. In addition, even though the second equation turns out to be third order in \(f\), for a fixed smooth \(f\) it can be viewed as a degenerate second order elliptic equation in \(\phi\), which gives some hope that the system might be solvable. As discussed in [9], for application to the Penrose inequality one needs \(\phi>0\) outside of the boundary, with \(\phi\) potentially vanishing at the boundary. However, we have the following theorem. **Theorem 1.1**.: _There exists a smooth, spherically symmetric, asymptotically flat initial data set \((M,g,k)\) satisfying the dominant energy condition and with boundary consisting of a compact outermost apparent horizon for which (1.10) does not possess any smooth radial solutions with \(\phi>0\) outside of the boundary, and with the appropriate asymptotics for application to the Penrose inequality._ Now, this leaves open the highly unlikely possibility that there exist non-radial solutions to (1.10) for the spherically symmetric initial data set given in Theorem 1.1. This is unlikely as symmetries make solving differential equations easier, and hence if (1.10) possessed solutions with the appropriate properties, at least some of them should be radial, which the above shows is impossible. Nevertheless, since the equations are non-linear, proving this is non-trivial. Hence, we formulate this as a conjecture, which we expect to resolve in a future paper. **Conjecture**.: _If a spherically symmetric initial data set \((M,g,k)\) possesses solutions to (1.10) with the appropriate asymptotics for application to the Penrose inequality, at least some of them must be radial. Hence the initial data set of Theorem 1.1 possesses no such solutions, and thus this approach cannot be used to prove the Penrose inequality._ We also mention that (1.10) was recently investigated in the case where \(\overline{div}(\phi q)\) was linearized.
In this case, under some very restrictive assumptions on the initial data, solutions were shown to exist, though of course this doesn't prove the Penrose inequality in general [22]. Our paper grew out of attempting to solve (1.10) in the general case. The aim of Theorem 1.1 and of the above conjecture is to settle whether or not this is a viable approach to the Penrose inequality, so that researchers do not waste precious resources on a hopeless approach. ### A closer look at the divergence term Since \(q\) is a 1-form, the divergence is interpreted in the usual way of first raising the index to obtain a vector and then taking the divergence. In abstract index notation we have \[\overline{div}(\phi q)=\overline{\nabla}_{a}(\phi\bar{g}^{ab}q_{b})\] Recall, the formula for the divergence of a vector field with respect to the metric \(g\) in local coordinates is given by \[div_{g}(V)=\frac{1}{\sqrt{|g|}}\partial_{k}\left(\sqrt{|g|}V^{k}\right)\] so we can rewrite \[\begin{split}\overline{div}(\phi q)&=\frac{1}{\sqrt{ |\bar{g}|}}\partial_{k}\left(\sqrt{|\bar{g}|}\phi\bar{g}^{ki}q_{i}\right)\\ &=\frac{1}{\sqrt{|\bar{g}|}}\partial_{k}\left(\sqrt{|\bar{g}|} \phi\bar{g}^{ki}\frac{\phi f^{j}}{\sqrt{1+\phi^{2}|\nabla f|^{2}}}\left(A_{ij}- (K|_{\Sigma})_{ij}\right)\right).\end{split} \tag{1.11}\] The components of \(A\) are given by \[A_{ij}=\left\langle\widetilde{\nabla}_{X_{i}}N,X_{j}\right\rangle_{\widetilde{ g}}=\frac{\phi\nabla_{ij}f+\phi_{i}f_{j}+\phi_{j}f_{i}+\phi^{2}f^{m}\phi_{m}f_{i}f _{j}}{\sqrt{1+\phi^{2}|\nabla f|^{2}_{g}}}\] where \(\nabla_{ij}f\) are the components of the second covariant derivative of \(f\) with respect to the \(g\) metric. On the other hand, we have \[\left(\left.K\right|_{\Sigma}\right)_{ij}=K(X_{i},X_{j})=k_{ij}+\frac{\phi^{2 }f^{m}\phi_{m}f_{i}f_{j}}{\sqrt{1+\phi^{2}|\nabla f|^{2}_{g}}}\] and so \[A_{ij}-\left(\left.K\right|_{\Sigma}\right)_{ij}=\left\langle\widetilde{\nabla }_{X_{i}}N,X_{j}\right\rangle_{\widetilde{g}}=\frac{\phi\nabla_{ij}f+\phi_{i}f _{j}+\phi_{j}f_{i}}{\sqrt{1+\phi^{2}|\nabla f|^{2}_{g}}}-k_{ij}\] and therefore in local coordinates \[\overline{div}(\phi q)=\frac{1}{\sqrt{|\bar{g}|}}\partial_{k}\left(\frac{ \sqrt{|\bar{g}|}\phi^{2}\bar{g}^{ki}f^{j}\left(\phi\nabla_{ij}f+\phi_{i}f_{j}+ \phi_{j}f_{i}\right)}{1+\phi^{2}|\nabla f|^{2}_{g}}-\frac{\sqrt{|\bar{g}|}\phi ^{2}\bar{g}^{ki}f^{j}k_{ij}}{\sqrt{1+\phi^{2}|\nabla f|^{2}_{g}}}\right). \tag{1.12}\] ## 2. Proof of Theorem 1.1 The initial data set we construct will be asymptotically flat and spherically symmetric. We begin by giving the appropriate definitions, then looking at the generalized Jang equation in spherical symmetry, coupling it to the zero divergence term, and analyzing the resulting ODE. We then show that the solutions to this ODE yield a contradiction. ### Asymptotic Flatness and the ADM Formalism We consider an asymptotically flat initial data set \((M,g,k)\) where \(M\) is a 3-manifold, \(g\) a Riemannian metric, and \(k\) is a symmetric 2-tensor, the extrinsic curvature. See [2, 7] for precise definitions. In each asymptotically flat end \(g\) and \(k\) satisfy certain fall-off conditions such as \[\left|D^{\lambda}(g_{ij}-\delta_{ij})\right|\leq Cr^{-1-|\lambda|},\quad|R| \leq Cr^{-3},\quad|k|\leq Cr^{-2},\quad|Tr_{g}k|\leq Cr^{-2} \tag{2.1}\] for \(|\lambda|\leq 2\) which are standard. 
Here, \(\delta\) is the Euclidean metric, \(r=\sqrt{x^{2}+y^{2}+z^{2}}\) the standard Euclidean radius, \(D^{\lambda}\) is a derivative operator with respect to the Euclidean coordinates, and \(\lambda\) is a multi-index. We have \[|k|^{2}=k_{ij}k^{ij},\quad Tr_{g}k=g^{ij}k_{ij}\] as usual. For an asymptotically flat end, the ADM energy and ADM momentum are defined by \[E_{ADM}=\lim_{r\to\infty}\frac{1}{16\pi}\sum_{i,j}\int_{S_{r}}(g_ {ij,i}-g_{ii,j})\nu^{j}dS_{r} \tag{2.2}\] \[P_{i}=\lim_{r\to\infty}\frac{1}{8\pi}\sum_{j}\int_{S_{r}}\left(k _{ji}\nu^{j}-(Tr_{g}k)\nu_{i}\right)dS_{r} \tag{2.3}\] where \(S_{r}\) are coordinate spheres of radius \(r\) and \(\nu^{j}\) is the outward unit normal [7]. One then defines the ADM mass by \[m_{ADM}=\sqrt{E_{ADM}^{2}-|P|^{2}}. \tag{2.4}\] It is well known [2, 6] that with our fall-off conditions these quantities do not depend on the choice of asymptotically flat coordinates. ### Energy Conditions and Constraint Equations For a full discussion of energy conditions in a \(3+1\) spacetime \((\mathcal{M},\mathfrak{g})\) we refer the reader to [21, 10]. The Einstein constraint equations for an initial data set \((M,g,k)\) are \[\begin{split} 16\pi\mu=R+(Tr_{g}k)^{2}-|k|^{2}\\ 8\pi J_{i}=\nabla^{j}(k_{ij}-(Tr_{g}k)g_{ij})\end{split} \tag{2.5}\] where \(R\) is the scalar curvature, \(\nabla^{j}\) denotes covariant differentiation, \(\mu\) is the mass-energy density, and \(J_{i}\) the components of the momentum density. Then the dominant energy condition takes the form \[\mu\geq|J|_{g}. \tag{2.6}\] ### Null expansions and apparent horizons Given a two dimensional surface \(S\) inside \(M\) one can calculate the future (+) and past (-) null expansion at each point of the surface. These are defined by \[\theta_{\pm}=H_{S}\pm Tr_{S}k\] where \(H_{S}\) indicates the mean curvature of the surface, and \(Tr_{S}k\) indicates the trace of \(k\) restricted to \(S\) calculated with respect to the induced metric. The null expansions measure the convergence and divergence of past and future directed null geodesics. A future or past apparent horizon is defined by \[\theta_{\pm}=H_{S}\pm Tr_{S}k=0\] and this is a popular way of modeling black holes in initial data sets without knowing the full development of the initial data. ### Asymptotically Flat Initial Data Sets in Spherical Symmetry We take our manifold to be \[M=\mathbb{R}^{3}\setminus B_{1}(0)=\{(r,\theta,\phi):r\in[1,\infty),\theta\in(0, \pi),\phi\in(0,2\pi)\} \tag{2.7}\] with general spherically symmetric metric \[g=g_{11}(r)dr^{2}+\rho^{2}(r)d\theta^{2}+\rho^{2}(r)\sin^{2}(\theta)d\phi^{2} \tag{2.8}\] and spherically symmetric extrinsic curvature \[k=g_{11}k_{a}dr^{2}+k_{b}\rho^{2}d\theta^{2}+k_{b}\rho^{2}\sin^{2}(\theta)d\phi ^{2} \tag{2.9}\] where \(g_{11}(r)>0\), \(\rho(r)>0\), and \(k_{a}=k_{a}(r),k_{b}=k_{b}(r)\) are some arbitrary functions of \(r\). We assume the usual fall-off conditions \[\begin{array}{rl}|k(r)|_{g}\leq Cr^{-2},&|Tr_{g}k(r)|\leq Cr^{-3},\quad|(g_{ 11}-1)(r)|+r|g_{11,r}(r)|\leq Cr^{-1},\\ &|\rho(r)-r|+r|\rho_{r}(r)-1|+r^{2}|\rho_{rr}(r)|\leq C\end{array} \tag{2.10}\] for some constant \(C\) which simply say that the initial data set is asymptotically flat. 
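For later reference, we record where the null expansion formula quoted in the next paragraph comes from; this is a routine computation for the metric (2.8) and extrinsic curvature (2.9), included here only as a worked step. On a coordinate sphere \(S_{r}\) the induced metric is \(\rho^{2}(d\theta^{2}+\sin^{2}(\theta)d\phi^{2})\), the restriction of \(k\) is \(k_{b}\rho^{2}(d\theta^{2}+\sin^{2}(\theta)d\phi^{2})\), and the outward unit normal is \(\nu=g_{11}^{-1/2}\partial_{r}\). Hence \[H_{S_{r}}=\frac{2}{\rho}\nu(\rho)=2\sqrt{g^{11}}\frac{\rho_{r}}{\rho},\qquad Tr_{S_{r}}k=\frac{k_{b}\rho^{2}}{\rho^{2}}+\frac{k_{b}\rho^{2}\sin^{2}(\theta)}{\rho^{2}\sin^{2}(\theta)}=2k_{b},\] so that \(\theta_{\pm}=H_{S_{r}}\pm Tr_{S_{r}}k=2\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}\pm k_{b}\right)\).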
It is also easy to calculate \(Tr_{g}k\) in such coordinates and we find \[Tr_{g}k=k_{a}+2k_{b}.\] The null expansions for a sphere \(S_{r}\) of coordinate radius \(r\) are given by \[\theta_{\pm}(S_{r})=\theta_{\pm}(r)=2\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho} \pm k_{b}\right)(r).\] If the boundary \(\partial M=S_{1}\) is an outermost (future or past or both) apparent horizon, then \[\theta_{\pm}(r)>0,\quad r>1 \tag{2.11}\] and \[H_{S_{r}}=\frac{1}{2}\left(\theta_{+}+\theta_{-}\right)=2\sqrt{g^{11}}\frac{ \rho_{r}}{\rho}>0,\quad r>1.\] ### The Generalized Jang Equation in Spherical Symmetry For any \(\phi>0\), defining \[v(r)=\frac{\phi\sqrt{g^{11}}f_{r}}{\sqrt{1+\phi^{2}g^{11}f_{r}^{2}}} \tag{2.12}\] the generalized Jang equation takes the form \[\sqrt{g^{11}}v_{r}+2\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)+( v^{2}-1)k_{a}+\sqrt{g^{11}}v(1-v^{2})\frac{\phi_{r}}{\phi}=0. \tag{2.13}\] See equation (7) in [4]. Using the fact that \(Tr_{g}k=k_{a}+2k_{b}\), we can also write this as \[\sqrt{g^{11}}v_{r}+2\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v+v^{2}k_{a}-Tr_{g}k+ \sqrt{g^{11}}v(1-v^{2})\frac{\phi_{r}}{\phi}=0 \tag{2.14}\] We can calculate a few useful quantities in terms of \(v\). We have \[\bar{g}_{11}=g_{11}+\phi^{2}f_{r}^{2}=g_{11}(1+\phi^{2}g^{11}f_{r}^{2})=g_{11} \left(\frac{1}{1-v^{2}}\right)\] \[g_{22}=\bar{g}_{22}=\rho^{2},\quad g_{33}=\bar{g}_{33}=\rho^{2}\sin^{2}(\theta), \quad g_{ij}=\bar{g}_{ij}=0,\quad i\neq j.\] so \[\bar{g}=g_{11}\left(\frac{1}{1-v^{2}}\right)dr^{2}+\rho^{2}(r)d\theta^{2}+\rho ^{2}(r)\sin^{2}(\theta)d\phi^{2}\] and \[|\bar{g}|=\frac{|g|}{1-v^{2}}. \tag{2.15}\] Notice that for \(v\) to yield a smooth \(f\), one needs \[-1<v(r)<1.\] ### The zero divergence term in spherical symmetry In spherical symmetry one has \(q_{2}=q_{3}=0\) and \[q_{1}=-2\sqrt{g_{11}}\frac{v}{1-v^{2}}\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho }v-k_{b}\right)\] (see Appendix C of [4]) and raising the index using \(\bar{g}\) we have \[q^{1}=\bar{g}^{11}q_{1}=-2\sqrt{g^{11}}v\left(\sqrt{g^{11}}\frac{\rho_{r}}{ \rho}v-k_{b}\right).\] So the zero divergence condition is \[\overline{div}(\phi q)=\frac{1}{\sqrt{|\bar{g}|}}\partial_{r}\left(\sqrt{|\bar {g}|}\phi q^{1}\right)=0\] which we can rewrite as \[\partial_{r}(\sqrt{|\bar{g}|})\phi q^{1}+\sqrt{|\bar{g}|}\phi_{r}q^{1}+\sqrt{| \bar{g}|}\phi q_{r}^{1}=0 \tag{2.16}\] or \[\frac{\phi_{r}}{\phi}=-\frac{q_{r}^{1}}{q^{1}}-\frac{\partial_{r}(\sqrt{|\bar {g}|})}{\sqrt{|\bar{g}|}}. \tag{2.17}\] We remark that we have to be a bit careful here. To apply the argument to the Penrose inequality, we need \(\phi(r)>0\) for \(r>1\). Also, \(|\bar{g}|>0\) automatically for \(-1<v<1\). However, to obtain (2.17), we need to divide by \(q^{1}\). But, (2.16) can hold if \(q_{1}\equiv 0\). We will see how to handle this possibility later on.
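For completeness, (2.12) can be inverted explicitly; we record this as a worked step since it is used implicitly below. Writing \(u=\phi\sqrt{g^{11}}f_{r}\), (2.12) reads \(v=u/\sqrt{1+u^{2}}\), so \[f_{r}=\frac{v}{\phi\sqrt{g^{11}}\sqrt{1-v^{2}}},\] which makes precise why \(-1<v<1\) is required for a smooth \(f\), and shows that \(f\) is recovered from \(v\) by integration once \(\phi\) is known.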
We calculate: \[\frac{q_{r}^{1}}{q^{1}} =\frac{\partial_{r}(\sqrt{g^{11}})}{\sqrt{g^{11}}}+\frac{v_{r}}{v}+ \frac{\partial_{r}\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}{ \left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}\] \[=\frac{\partial_{r}(\sqrt{g^{11}})}{\sqrt{g^{11}}}+\frac{v_{r}}{v} +\frac{\sqrt{g^{11}}\frac{\rho_{r}}{\rho}}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{ \rho}v-k_{b}\right)}v_{r}+\frac{\partial_{r}\left(\sqrt{g^{11}}\frac{\rho_{r}} {\rho}\right)}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}v-\frac{ \partial_{r}\left(k_{b}\right)}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{ b}\right)}\] \[=\left(\frac{1}{v}+\frac{\sqrt{g^{11}}\frac{\rho_{r}}{\rho}}{ \left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}\right)v_{r}+\frac{ \partial_{r}(\sqrt{g^{11}})}{\sqrt{g^{11}}}+\frac{\partial_{r}\left(\sqrt{g^{ 11}}\frac{\rho_{r}}{\rho}\right)}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_ {b}\right)}v-\frac{\partial_{r}\left(k_{b}\right)}{\left(\sqrt{g^{11}}\frac{ \rho_{r}}{\rho}v-k_{b}\right)}.\] Also \[\frac{\partial_{r}(\sqrt{|\bar{g}|})}{\sqrt{|\bar{g}|}} =\frac{\partial_{r}\left((1-v^{2})^{-1/2}\sqrt{|g|}\right)}{(1-v ^{2})^{-1/2}\sqrt{|g|}}=\frac{\partial_{r}\left((1-v^{2})^{-1/2}\sqrt{g_{11}} \rho^{2}\sin(\theta)\right)}{(1-v^{2})^{-1/2}\sqrt{g_{11}}\rho^{2}\sin(\theta)}\] \[=\frac{\partial_{r}\left((1-v^{2})^{-1/2}\right)}{(1-v^{2})^{-1/2 }}+\frac{\partial_{r}(\sqrt{g_{11}})}{\sqrt{g_{11}}}+2\frac{\rho_{r}}{\rho}\] \[=\frac{v}{1-v^{2}}v_{r}+\frac{\partial_{r}(\sqrt{g_{11}})}{\sqrt{ g_{11}}}+2\frac{\rho_{r}}{\rho}.\] The primary terms we want to focus on are the terms containing \(v_{r}\). Thus we can write \[\frac{\phi_{r}}{\phi}=-\left(\frac{v}{1-v^{2}}+\frac{1}{v}+\frac{\sqrt{g^{11}} \frac{\rho_{r}}{\rho}}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right) }\right)v_{r}-F \tag{2.18}\] where \[F=\frac{\partial_{r}\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}\right)}{\left( \sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}v-\frac{\partial_{r}\left(k_ {b}\right)}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}+2\frac{ \rho_{r}}{\rho}.\] where we used the fact that \[\frac{\partial_{r}(\sqrt{g_{11}})}{\sqrt{g_{11}}}+\frac{\partial_{r}(\sqrt{g^{ 11}})}{\sqrt{g^{11}}}=0.\] Substituting (2.18) into (2.14) we obtain \[\sqrt{g^{11}}v_{r}+2\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v+v^{2}k_{a}-Tr_{g}k- \sqrt{g^{11}}v(1-v^{2})\left(\left(\frac{v}{1-v^{2}}+\frac{1}{v}+\frac{\sqrt{g ^{11}}\frac{\rho_{r}}{\rho}}{\left(\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b} \right)}\right)v_{r}+F\right)=0\] or, after a bit of algebra, \[\sqrt{g^{11}}\left((v^{2}-1)\frac{\sqrt{g^{11}}\frac{\rho_{r}}{\rho}v}{\left( \sqrt{g^{11}}\frac{\rho_{r}}{\rho}v-k_{b}\right)}\right)v_{r}+2\sqrt{g^{11}} \frac{\rho_{r}}{\rho}v+v^{2}k_{a}-Tr_{g}k-\sqrt{g^{11}}v(1-v^{2})F=0. \tag{2.19}\] The basic problem with this equation is that if \((v^{2}-1)<0\), as is needed for smooth solutions, the coefficient of \(v_{r}\) has the wrong sign. ### Construction of the spherically symmetric initial data set We want our spherically symmetric initial data set to be asymptotically flat, satisfy the dominant energy condition, and have the boundary consist of an outermost apparent horizon. To do this, we use methods similar to those of [15]. To simplify things, we let \(\rho(r)=r\), so our metric has the form \[g=h(r)dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}(\theta)d\phi^{2} \tag{2.20}\] where we let \(g_{11}=h\) and \[k=hk_{a}dr^{2}+k_{b}r^{2}d\theta^{2}+k_{b}r^{2}\sin^{2}(\theta)d\phi^{2}. 
\tag{2.21}\] In such a metric we can explicitly calculate \(J_{i}\). We find that \(J_{2}=J_{3}=0\) and \[J_{1} =\frac{1}{8\pi}\left(\partial_{r}(k_{a})+2\frac{k_{a}-k_{b}}{r}- \partial_{r}(Tr_{g}k)\right)\] \[=\frac{1}{8\pi}\left(\partial_{r}(k_{a})+2\frac{k_{a}-k_{b}}{r}- \partial_{r}(k_{a}+2k_{b})\right)\] \[=\frac{1}{4\pi}\left(\frac{k_{a}-k_{b}}{r}-\partial_{r}(k_{b}) \right).\] We want \(k_{b}\) to be compactly supported. This is extremely useful because when \(k_{b}=0\) and \(\rho(r)=r\), the equation (2.19) simplifies greatly. Let \(\Phi(r)\) be a smooth cut-off function satisfying: \[\Phi(r)=\begin{cases}1&:\quad 1\leq r\leq 2\\ \text{smooth, decreasing}&:\quad 2<r<3\\ 0&:\quad r\geq 3\end{cases}.\] Define \[k_{a}\coloneqq\frac{1}{6}\left(\frac{5\sin(r)}{r^{5.1}}-\frac{\cos(r)}{r^{4.1 }}\right) \tag{2.22}\] and \[k_{b}\coloneqq-\Phi(r). \tag{2.23}\] From this point forward, when we write \(k_{a},k_{b}\) we mean the functions given by these definitions. Now, this choice of \(k_{a}\) might seem extremely perplexing at first. However, it is chosen so that \[\int(-r^{-0.9}k_{a}(r))dr=\frac{\sin(r)}{6r^{5}}+C \tag{2.24}\] as can be easily checked, which becomes extremely useful in the calculations (2.36) and (2.37). Also \[|k_{a}|\leq\frac{1}{r^{4.1}} \tag{2.25}\] Notice, with these definitions we have \[Tr_{g}k=k_{a}+2k_{b}\] \[Tr_{g}k=k_{a}\quad r\geq 3.\] With the above \(k_{a},k_{b}\) define \[U(r)=\frac{k_{a}-k_{b}}{r}-\partial_{r}(k_{b})\] and notice \(|U(r)|<1/r^{5}\) for \(r\geq 3\). Then \[|J|_{g}^{2}=g^{ij}J_{i}J_{j}=g^{11}J_{1}J_{1}=\frac{1}{h}\frac{1}{16\pi^{2}}U^ {2}\] and so \[|J|_{g}=\frac{1}{4\pi}\frac{1}{\sqrt{h}}|U|.\] Next, take some fixed smooth function \(V(r)>0\) defined for \(r\geq 1\) with \(V(r)\geq|U(r)|\) and \(V(r)=1/r^{4}\) for \(r\geq 4\) which is possible by (2.25). Then \[\frac{1}{4\pi}\frac{1}{\sqrt{h}}V\geq|J|_{g}\] Next we need to find an \(h\) which will satisfy the dominant energy condition. In spherical symmetry we have \[16\pi\mu =R+(Tr_{g}k)^{2}-|k|_{g}^{2}=R+(k_{a}+2k_{b})^{2}-(k_{a}^{2}+2k_{ b}^{2})\] \[=R+4k_{a}k_{b}+2k_{b}^{2}.\] Notice, since \(k_{b}\) is supported on the interval \([1,3]\) so is \(4k_{a}k_{b}+2k_{b}^{2}\). Now, take some fixed smooth function \(W(r)\) defined for \(r\geq 1\) with \(W(r)\geq|4k_{a}k_{b}+2k_{b}^{2}|\) and compactly supported on \([1,4)\). We need \(16\pi\mu\geq 16\pi|J|_{g}\). Therefore we can take: \[R(r)=W(r)+\frac{4V(r)}{\sqrt{h(r)}}\] since then \[16\pi\mu=W(r)+\frac{4V(r)}{\sqrt{h(r)}}+4k_{a}k_{b}+2k_{b}^{2}\geq\frac{4V(r)} {\sqrt{h(r)}}\geq\frac{4|U(r)|}{\sqrt{h(r)}}=16\pi|J|_{g}.\] Now, define \[k_{\epsilon}=\epsilon k\] so that \(k_{a\epsilon}=\epsilon k_{a}\) and \(k_{b\epsilon}=\epsilon k_{b}\). In this case \[Tr_{g}k_{\epsilon}=\epsilon Tr_{g}k,\quad|J_{\epsilon}|_{g}=\epsilon|J|_{g}, \quad 4k_{a\epsilon}k_{b\epsilon}+2k_{b\epsilon}^{2}=\epsilon^{2}(4k_{a}k_{b}+2 k_{b}^{2})\] and so if we had a metric \(g_{\epsilon}\) with the prescribed scalar curvature \[R_{\epsilon}(r)=\epsilon^{2}W(r)+\frac{4\epsilon V(r)}{\sqrt{h(r)}} \tag{2.26}\] then \((M,g_{\epsilon},k_{\epsilon})\) would satisfy the dominant energy condition. For a metric of the form (2.20) the scalar curvature is given by \[R(r)=\frac{2h^{\prime}(r)}{rh^{2}(r)}-\frac{2}{r^{2}h(r)}+\frac{2}{r^{2}}\] (see equation (2.10) in [15] and the equation before (20) in [4]) which we can rearrange to obtain \[h^{\prime}=\frac{h}{r}-\frac{h^{2}}{r}+\frac{1}{2}Rrh^{2}. 
\tag{2.27}\] Substituting (2.26) we obtain \[h^{\prime}=\frac{h}{r}-\frac{h^{2}}{r}+\frac{1}{2}\epsilon^{2}rWh^{2}+2 \epsilon rVh^{3/2}. \tag{2.28}\] It turns out that for sufficiently small \(\epsilon>0\) this ordinary differential equation can be solved for any initial condition \(h(1)>0\). **Proposition 2.1**.: _There exists some \(\epsilon>0\) such that (2.28) has a smooth solution \(h(r)\) for any \(h(1)>0\). Moreover_ \[|h(r)-1|+r|h_{r}(r)|\leq Cr^{-1} \tag{2.29}\] _for some constant \(C\) depending on \(h(1)\)._ The proof is an application of some basic methods of ordinary differential equations. However, written out in full detail it becomes quite lengthy. Thus, in order to not interrupt the flow of the paper, we relegate it to Appendix A. To finish constructing our initial data set, we must now pick the correct initial condition for \(h(1)\) at the boundary, and then take the region exterior to the outermost apparent horizon. **Proposition 2.2**.: _Let \((M,g_{\epsilon},k_{\epsilon})\) be an initial data set with \(M\) given by (2.7), \(k_{\epsilon}=\epsilon k\) where \(k\) is given by (2.21) with \(k_{a}\) and \(k_{b}\) given by (2.22) and (2.23) respectively, and \(g_{\epsilon}\) given by (2.20) with \(h(r)\) being the solution given in Proposition 2.1 with \(h(1)=(1/\epsilon)^{2}\). Then \((M,g_{\epsilon},k_{\epsilon})\) is an asymptotically flat initial data set satisfying the dominant energy condition, containing no compact past apparent horizons, and with \(\partial M\) being a compact future apparent horizon._ Proof.: We have already proven all the statements, by construction, except for the last two. Due to asymptotic flatness, there always exists an outermost future and an outermost past apparent horizon. These are unique, and possibly empty, and with possibly several components [1]. However, due to the uniqueness and spherical symmetry, these must consist of some coordinate spheres \(S_{r}\). Notice, for (2.20), the mean curvature of a coordinate sphere is \[H_{S_{r}}=\frac{2}{r\sqrt{h(r)}}>0\] since \(h(r)>0\). Also \[\theta_{\pm}(S_{r})=\theta_{\pm}(r)=\frac{2}{r\sqrt{h(r)}}\pm 2k_{b\epsilon}(r)= \frac{2}{r\sqrt{h(r)}}\pm 2\epsilon k_{b}(r)\] but since \(k_{b}\leq 0\) then \(\theta_{-}(r)>0\) and so there is no outermost compact past horizon, and hence no compact past horizons whatsoever. Also, with our choice of \(h(1)\) we have \[\theta_{+}(1)=\frac{2}{\sqrt{h(1)}}-2\epsilon\Phi(1)=\frac{2}{\sqrt{(1/\epsilon)^ {2}}}-2\epsilon=0\] so indeed \(S_{1}=\partial M\) is a compact future apparent horizon. Now, \(S_{1}\) might not be an outermost future apparent horizon. However, as mentioned, there is always an outermost future apparent horizon, and due to the spherical symmetry is some sphere of coordinate radius \(r_{0}\). Therefore if we take the region exterior to this horizon, we have the following: **Proposition 2.3**.: _Let \(S_{r_{0}}\) denote the outermost apparent horizon of \((M,g_{\epsilon},k_{\epsilon})\) of Proposition 2.2. Let \(M_{\epsilon}=\{r\geq r_{0}\}\subset M\). 
Then \((M_{\epsilon},g_{\epsilon},k_{\epsilon})\) is an asymptotically flat initial data set satisfying the dominant energy condition with boundary consisting of an outermost compact future apparent horizon, and containing no other compact apparent horizons._ ### Appropriate Asymptotics for \(v(r)\) In order to apply the Riemannian Penrose inequality in the Jang surface to conclude that the Penrose inequality holds for the original data set, the ADM energies of \((M,g,k)\) and \((M,\bar{g})\) must be the same. This means that \(v(r)\) must have certain asymptotics at infinity. If we write \(h(r)=1+\psi(r)\), after a tedious calculation, the ADM energy of the metric (2.20) is given by \[E_{ADM}=\lim_{r\to\infty}\frac{r}{2}\frac{\psi(r)}{\sqrt{h(r)}}.\] This formula can also be verified by taking the limit of the Hawking mass which is asymptotic to the ADM energy in the case of asymptotic flatness, giving \[\lim_{r\to\infty}M_{H}(S_{r}) =\lim_{r\to\infty}\sqrt{\frac{|S_{r}|}{16\pi}}\left(1-\frac{1}{16 \pi}\int_{S_{r}}H^{2}dS_{r}\right)=\lim_{r\to\infty}\frac{r}{2}\left(1-\frac{ 1}{h(r)}\right)\] \[=\lim_{r\to\infty}\frac{r}{2}\left(1-\frac{1}{h(r)}\right)=\lim_ {r\to\infty}\frac{r}{2}\frac{\psi(r)}{h(r)}.\] In the asymptotically flat case where \(\lim_{r\to\infty}h(r)=1\) both of these formulas yield the same limit, as they should. Therefore, if we look at the metric of the Jang surface, where \(\bar{h}=h+\phi^{2}f_{r}^{2}=1+\psi+\phi^{2}fr^{2}\), we need \[\lim_{r\to\infty}\phi^{2}(r)f_{r}^{2}(r)=0 \tag{2.30}\] to ensure asymptotic flatness. Moreover \[\bar{E}_{ADM} =\lim_{r\to\infty}\frac{r}{2}\frac{\psi(r)+\phi^{2}(r)f_{r}^{2}( r)}{\sqrt{1+\psi(r)+\phi^{2}(r)f_{r}^{2}(r)}}=\lim_{r\to\infty}\frac{r}{2}( \psi(r)+\phi^{2}(r)f_{r}^{2}(r))\] \[=E_{ADM}+\lim_{r\to\infty}\frac{r\phi^{2}(r)f_{r}^{2}(r)}{2}\] and so to have the ADM energies match up we need \[\phi^{2}(r)f_{r}^{2}(r)\leq\frac{C}{r^{1+2\varepsilon}}\] for some \(\varepsilon>0\), or using (2.12) and (2.30) we need \[|v(r)|\leq\frac{C}{r^{1/2+\varepsilon}}. \tag{2.31}\] ### Proof of Theorem 1.1 Proof.: We will take the initial data set \((M_{\epsilon},g_{\epsilon},k_{\epsilon})\) of Proposition 2.3 and show that there are no smooth solutions \(f=f(r)\) and \(\phi=\phi(r)>0\) for \(r\in(r_{0},\infty)\) to the system (1.10) having the asymptotics (2.31) for \(v(r)\). The proof is by contradiction. Suppose smooth solutions \(f(r)\) and \(\phi(r)\) with \(\phi(r)>0\) solving (1.10) with \(v(r)\) satisfying (2.31) exist for \((M_{\epsilon},g_{\epsilon},k_{\epsilon})\). Let us consider the solutions on the interval \([4,\infty)\) since \[k_{b\epsilon}(r)=0,\quad Tr_{g_{\epsilon}}k_{\epsilon}(r)=\epsilon k_{a}(r) \quad\text{for}\quad r\geq 4\] which makes the analysis considerably easier. First, notice that \(v(r)\not\equiv 0\) on \([4,\infty)\). For substituting \(v(r)\equiv 0\) into (2.14) yields \[\epsilon k_{a}=Tr_{g_{\epsilon}}k_{\epsilon}\equiv 0,\quad r\geq 4\] which is false by construction. Thus, we can take some point \(s_{1}\in[4,\infty)\) where \(v(s_{1})\neq 0\). Without loss of generality, we can assume \(v(s_{1})>0\). Now, let \(I=[s_{1},s_{*})\subset[s_{1},\infty)\) be the maximal interval on which \(v(r)>0\). Suppose \(s_{*}<\infty\) so that \(v(s_{*})=0\). In that case, (2.18) must hold for \(r\in[s,s_{*})\). Moreover, since \(r\geq 4\), the equations simplify considerably. 
We have \[F=\frac{\partial_{r}(h^{-1/2}r^{-1})}{h^{-1/2}r^{-1}}+\frac{2}{r} \tag{2.32}\] and so \[\frac{\phi_{r}}{\phi}=-\frac{2v_{r}}{v}-\frac{vv_{r}}{1-v^{2}}-\frac{\partial _{r}(h^{-1/2}r^{-1})}{h^{-1/2}r^{-1}}-\frac{2}{r}\] and therefore integrating \[\ln(\phi) =-2\ln(v)+\frac{1}{2}\ln(1-v^{2})-\ln(h^{-1/2}r^{-1})-2\ln(r)+C_{1}\] \[=\ln\left(\frac{r\sqrt{h}\sqrt{1-v^{2}}}{r^{2}v^{2}}\right)+C_{1}\] and so \[\phi(r)=C_{2}\frac{\sqrt{h(r)}\sqrt{1-v^{2}(r)}}{rv^{2}(r)},\quad r\in[s_{1}, s_{*})\] for some \(C_{2}>0\). But since \[\lim_{r\to s_{*}^{-}}v(r)=v(s_{*})=0\] then \[\lim_{r\to s_{*}^{-}}\phi(r)=\infty\] contradicting the assumed smoothness of \(\phi(r)\). The same argument works if we assume \(v(s_{1})<0\). Therefore, we can assume we are on some interval \([s_{1},\infty)\subset[4,\infty)\) where \(v(r)\neq 0\). First we assume \(v(r)>0\) on \([s_{1},\infty)\). In that case, \(q^{1}(r)\neq 0\) and \(v(r)\) must satisfy (2.19) on \([s_{1},\infty)\). In this case, (2.19) becomes \[\frac{1}{\sqrt{h}}(v^{2}-1)v_{r}+\frac{2}{r\sqrt{h}}v+\epsilon k_{a}v^{2}- \epsilon k_{a}-\frac{1}{\sqrt{h}}v(1-v^{2})\left(\frac{\partial_{r}(h^{-1/2}r ^{-1})}{h^{-1/2}r^{-1}}+\frac{2}{r}\right)=0\] which we can further simplify to get \[\frac{1}{\sqrt{h}}(v^{2}-1)v_{r}+\frac{2}{r\sqrt{h}}v+\epsilon k_{a}v^{2}- \epsilon k_{a}-\frac{1}{\sqrt{h}}v(1-v^{2})\left(\frac{1}{r}-\frac{h_{r}}{2h} \right)=0.\] We can then put the equation in the form \[\frac{1}{\sqrt{h}}v_{r}-\left(\frac{2}{r(1-v^{2})}-\frac{1}{r}+\frac{h_{r}}{2 h}\right)\frac{v}{\sqrt{h}}+\epsilon k_{a}=0. \tag{2.33}\] Next, to make things easier, we let \[\tilde{v}(r)=\frac{v(r)}{\sqrt{h(r)}}\] in which case \[v_{r}=\sqrt{h}\tilde{v}_{r}+\frac{1}{2}h^{-1/2}h_{r}\tilde{v}\] which upon substituting gives \[\tilde{v}_{r}-\left(\frac{2}{r(1-h\tilde{v}^{2})}-\frac{1}{r}\right)\tilde{v}+ \epsilon k_{a}=0. \tag{2.34}\] Now, if \(\lim_{r\to\infty}v(r)\neq 0\) then the solution does not have appropriate asymptotics for application to the Penrose inequality. Hence, we can assume \(\lim_{r\to\infty}v(r)=0\) and since \(\lim_{r\to\infty}h(r)=1\) we also have \(\lim_{r\to\infty}\tilde{v}(r)=0\). Thus, we can take some interval \([s_{2},\infty)\subset[s_{1},\infty)\) such that \[\frac{2}{r(1-h\tilde{v}^{2})}-\frac{1}{r}>\frac{0.9}{r}\quad\text{for}\quad r \geq s_{2}. \tag{2.35}\] Now take some \(s_{3}>s_{2}\) such that \[\frac{\sin(s_{3})}{s_{3}^{5}}<0\] and consider the initial value problem \[\underline{w}_{r}-\frac{0.9}{r}\underline{w}+\epsilon k_{a} =0\] \[\underline{w}(s_{3}) =\frac{1}{2}\tilde{v}(s_{3}).\] The solution can be explicitly calculated using the method of integrating factors to be \[\underline{w}(r) =r^{0.9}\left[\frac{\tilde{v}(s_{3})}{2s_{3}^{0.9}}+\int_{s_{3}}^{r} (-\epsilon s^{-0.9}k_{a}(s))ds\right] \tag{2.36}\] \[=r^{0.9}\left[\frac{\tilde{v}(s_{3})}{2s_{3}^{0.9}}+\frac{ \epsilon\sin(r)}{6r^{5}}-\frac{\epsilon\sin(s_{3})}{6s_{3}^{5}}\right]\] \[=r^{0.9}\left[P+\frac{\epsilon\sin(r)}{6r^{5}}\right]\] for \(r\geq s_{3}\), where, since \(v(s_{3})>0\) and \(\sin(s_{3})/s_{3}^{5}<0\), we have \(P>0\) is a positive constant. Notice, this calculation is precisely why we defined \(k_{a}\) in (2.22) the way we did. Also notice we have \[\lim_{r\to\infty}\underline{w}(r)=\infty\] because \(P>0\). We claim that \(\tilde{v}(r)>\underline{w}(r)\) for \(r\geq s_{3}\). Notice, \(\tilde{v}(s_{3})>\underline{w}(s_{3})\). Now let \(\tilde{s}>s_{3}\) be the smallest \(r>s_{3}\) where \(\underline{w}(\tilde{s})=\tilde{v}(\tilde{s})=\beta\). 
At such a point we must have \[\tilde{v}_{r}(\tilde{s})\leq\underline{w}_{r}(\tilde{s})\] Notice, since we are assuming \(v(r)>0\) on \([s_{1},\infty)\) then necessarily \(\beta>0\). Therefore, the following inequality holds: \[\tilde{v}_{r}(\tilde{s})=\left(\frac{2}{\tilde{s}(1-h(\tilde{s})\tilde{v}^{2} (\tilde{s}))}-\frac{1}{\tilde{s}}\right)\beta-\epsilon k_{a}(\tilde{s})>\frac {0.9}{\tilde{s}}\beta-\epsilon k_{a}(\tilde{s})=\underline{w}_{r}(\tilde{s})\] due to (2.35) which is a contradiction. Therefore, \(\tilde{v}(r)>\underline{w}(r)\) for \(r\geq s_{3}\) and so \(v(r)>\sqrt{h(r)}\underline{w}(r)\) for \(r\geq s_{3}\). Therefore \[\lim_{r\to\infty}v(r)=\infty\] contradicting the assumption \(\lim_{r\to\infty}v(r)=0\). Thus, assuming \(v(r)>0\) on \([s_{1},\infty)\) yields a contradiction to the necessary asymptotics of (2.31). So finally, we are left with the case \(v(r)<0\) on \([s_{1},\infty)\). By the same arguments, we can assume we are on the interval \([s_{2},\infty)\) where (2.35) holds. Now take some \(s_{4}>s_{2}\) such that \[\frac{\sin(s_{4})}{s_{4}^{5}}>0\] and consider the initial value problem \[\overline{w}_{r}-\frac{0.9}{r}\overline{w}+\epsilon k_{a} =0\] \[\overline{w}(s_{4}) =\frac{1}{2}\tilde{v}(s_{4}).\] Therefore \[\overline{w}(r) =r^{0.9}\left[\frac{\tilde{v}(s_{4})}{2s_{4}^{0.9}}+\int_{s_{4}}^{r}( -\epsilon s^{-0.9}k_{a}(s))ds\right] \tag{2.37}\] \[=r^{0.9}\left[\frac{\tilde{v}(s_{4})}{2s_{4}^{0.9}}+\frac{\epsilon \sin(r)}{6r^{5}}-\frac{\epsilon\sin(s_{4})}{6s_{4}^{5}}\right]\] \[=r^{0.9}\left[N+\frac{\epsilon\sin(r)}{6r^{5}}\right]\] for \(r\geq s_{4}\), where, since \(v(s_{4})<0\) and \(\sin(s_{4})/s_{4}^{5}>0\), we have \(N<0\) is a negative constant. Therefore we have \[\lim_{r\to\infty}\overline{w}(r)=-\infty\] because \(N<0\). We claim that \(\tilde{v}(r)<\overline{w}(r)\) for \(r\geq s_{4}\). Notice, \(\tilde{v}(s_{4})<\overline{w}(s_{4})\). Now let \(\tilde{s}>s_{4}\) be the smallest \(r>s_{4}\) where \(\overline{w}(\tilde{s})=\tilde{v}(\tilde{s})=\gamma\). At such a point we must have \[\tilde{v}_{r}(\tilde{s})\geq\overline{w}_{r}(\tilde{s})\] Notice, since we are assuming \(v(r)<0\) on \([s_{1},\infty)\) then necessarily \(\gamma<0\). Therefore, the following inequality holds: \[\tilde{v}_{r}(\tilde{s})=\left(\frac{2}{\tilde{s}(1-h(\tilde{s})\tilde{v}^{2}( \tilde{s}))}-\frac{1}{\tilde{s}}\right)\gamma-\epsilon k_{a}(\tilde{s})<\frac {0.9}{\tilde{s}}\gamma-\epsilon k_{a}(\tilde{s})=\underline{w}_{r}(\tilde{s})\] since multiplying (2.35) by \(\gamma<0\) reverses the inequality sign. This is again a contradiction and therefore, \(\tilde{v}(r)<\overline{w}(r)\) for \(r\geq s_{4}\) and so \(v(r)<\sqrt{h(r)}\overline{w}(r)\) for \(r\geq s_{4}\). Therefore \[\lim_{r\to\infty}v(r)=-\infty\] contradicting the assumption \(\lim_{r\to\infty}v(r)=0\). Thus, assuming \(v(r)<0\) on \([s_{1},\infty)\) yields a contradiction to the necessary asymptotics of (2.31). Therefore, all the possibilities lead to contradictions, hence we conclude there are no smooth \(v(r)\) and \(\phi(r)\) with the appropriate asymptotics (and hence no smooth \(f(r)\) and \(\phi(r)\) with the appropriate asymptotics) for application to the Penrose inequality solving (1.10) for the initial data set \((M_{\epsilon},g_{\epsilon},k_{\epsilon})\) of Proposition (2.3). 
## Appendix A Proof of Proposition 2.1 Proof.: Since \(W(r)\geq 0\) is compactly supported and \(V(r)>0\) is smooth with \(V(r)=1/r^{4}\) for \(r\geq 4\), there exists some constant \(C_{1}>0\) such that \[W(r)\leq\frac{C_{1}}{r^{4}},\quad V(r)\leq\frac{C_{1}}{r^{4}}\] for \(r\geq 1\). Next, choose an \(\epsilon\) so small that \[\frac{1}{2}\epsilon^{2}C_{1}+2\epsilon C_{1}<\frac{1}{10}\] Next, let \[B_{1}=1+\max\{h(1),10\}.\] We claim that this constant acts as an upper barrier for the solution. Since \(h(1)>0\), (2.28) has a smooth solution on some maximal interval \([1,r^{*})\). Let \(s\in[1,r^{*})\) be the smallest value of \(r\) at which \(h(r)=B_{1}\). Then, since the solution starts out smaller than \(B_{1}\), we must have \(h^{\prime}(s)\geq 0\). However at \(r=s\) we have \[h^{\prime}(s) =\frac{B_{1}}{r}-\frac{B_{1}^{2}}{r}+\frac{1}{2}\epsilon^{2}rWB_{ 1}^{2}+2\epsilon rVB_{1}^{3/2}\] \[\leq\frac{B_{1}}{r}-\frac{B_{1}^{2}}{r}+\frac{1}{2}\epsilon^{2}r \left(\frac{C_{1}}{r^{4}}\right)B_{1}^{2}+2\epsilon r\left(\frac{C_{1}}{r^{4} }\right)B_{1}^{3/2}\] \[\leq\frac{B_{1}}{r}-\frac{B_{1}^{2}}{r}+\frac{1}{10}\frac{1}{r^{ 3}}B_{1}^{2}+\frac{1}{10}\frac{1}{r^{3}}B_{1}^{3/2}\] \[\leq\frac{B_{1}}{r}-\frac{B_{1}^{2}}{r}+\frac{1}{10}\frac{1}{r}B _{1}^{2}+\frac{1}{10}\frac{1}{r}B_{1}^{2}\] \[=\frac{B_{1}}{r}\left(1-\frac{8B_{1}}{10}\right)<0\] yielding a contradiction. Hence, \(h(r)<B_{1}\) for all \(r\in[1,r^{*})\). Similarly, we can construct a positive lower barrier. Let \[B_{0}=\frac{1}{2}\min\{h(1),1/10\}>0.\] Let \(s\in[1,r^{*})\) be the smallest value of \(r\) at which \(h(r)=B_{0}\). Then, since the solution starts out larger than \(B_{0}\), we must have \(h^{\prime}(s)\leq 0\). However at \(r=s\) we have \[h^{\prime}(s) =\frac{B_{0}}{r}-\frac{B_{0}^{2}}{r}+\frac{1}{2}\epsilon^{2}rWB_ {0}^{2}+2\epsilon rVB_{0}^{3/2}\] \[\geq\frac{B_{0}}{r}-\frac{B_{0}^{2}}{r}=\frac{B_{0}}{r}(1-B_{0})>0\] yielding a contradiction. Hence \(h(r)>B_{0}\) for all \(r\in[1,r^{*})\). Thus we have \(0<B_{0}<h(r)<B_{1}\) for all \(r\in[1,r^{*})\) which implies that \(r^{*}=\infty\). Since \(h(r)>0\), we can iteratively take as many derivatives of \(h\) as we want and we conclude they are all continuous, so \(h(r)\) is smooth. Next, we need to obtain the desired asymptotics. Notice, if \(h(r_{0})=1\) for some \(r_{0}\), then \(h(r)>1\) for all \(r>r_{0}\) since \(h=1\) implies \(h^{\prime}>0\). Let us suppose then that we are on some interval \([r_{0}^{*},\infty)\) with \(h(r)>1\). Consider (2.28) for \(r\geq 4\). Then the differential equation simplifies to \[h^{\prime}=\frac{h}{r}-\frac{h^{2}}{r}+\frac{2\epsilon}{r^{3}}h^{3/2}\] (A.1) We can assume \(r_{0}^{*}\geq 4\). On this interval, we can write \(h(r)=1+\psi(r)\) with \(\psi>0\). Then (A.1) can be written as \[\psi^{\prime}=-\frac{\psi}{r}-\frac{\psi^{2}}{r}+\frac{2\epsilon}{r^{3}}(1+ \psi)^{3/2}.\] (A.2) Let \(\psi(r_{0}^{*})=\mathcal{B}>0\). Consider the initial value problem \[w^{\prime}=-\frac{w}{r}+\frac{2\epsilon B_{1}^{3/2}}{r^{3}}\] \[w(r_{0}^{*})=2\mathcal{B}\] where \(B_{1}\) is the upper bound for \(h\) obtained earlier. We claim that \(w(r)>\psi(r)\) for all \(r\in[r_{0}^{*},\infty)\). As before, let \(s>r_{0}^{*}\) be the smallest value of \(s\) where \(w(s)=\psi(s)=B\). Since initially \(w(r)\) is larger, at \(s\) we must have \(w^{\prime}(s)\leq\psi^{\prime}(s)\). 
But \[\psi^{\prime}(s)=-\frac{B}{s}-\frac{B^{2}}{s}+\frac{2\epsilon}{s^{3}}(1+B)^{3/ 2}<-\frac{B}{s}+\frac{2\epsilon B_{1}^{3/2}}{s^{3}}=w^{\prime}(s)\] since \(1+B=h(s)<B_{1}\) and so \(w(r)>\psi(r)\) for \(r\in[r_{0}^{*},\infty)\). Now, the solution for \(w(r)\) can be written down explicitly using the method of integrating factors as \[w=\frac{1}{r}\left[2\mathcal{B}+\int_{r_{0}^{*}}^{r}\frac{2\epsilon B_{1}^{3/ 2}}{s^{2}}ds\right]\leq\frac{C_{2}}{r}\] which gives \[0<\psi(r)\leq\frac{C_{2}}{r}\] (A.3) for \(r\geq r_{0}^{*}\). Otherwise, we have \(h(r)<1\) for all \(r\in[1,\infty)\). In that case we again write \(h(r)=1+\psi(r)\) with \(-1<\psi(r)<0\). Let \(\psi(1)=\mathcal{C}\). Consider the initial value problem \[u^{\prime}=-\frac{u}{r}-\frac{u^{2}}{r}\] (A.4) \[u(1)=\frac{1}{2}(\mathcal{C}-1).\] (A.5) Notice, since \(-1<\mathcal{C}<0\), we have \(u(1)<\psi(1)\) and \(-1<u(1)<0\). We claim \(\psi(r)>u(r)\) for all \(r\geq 1\). Again, at the first value \(s\) where \(u(s)=\psi(s)=B\) we'd have \(u^{\prime}(s)\geq\psi^{\prime}(s)\). But at such a point we have \[u^{\prime}(s)=-\frac{B}{s}-\frac{B^{2}}{s}<-\frac{B}{s}-\frac{B^{2}}{s}+\frac {1}{2}\epsilon^{2}sW(s)(1+B)^{2}+2\epsilon sV(s)(1+B)^{3/2}=\psi^{\prime}(s)\] since \(V(r)>0\), giving a contradiction. Thus, \(\psi(r)>u(r)\) for \(r\geq 1\). The differential equation (A.4) is separable. It is easy to see that if \(-1<u(1)<0\) then \(-1<u(r)<0\) in which case the solution can be found explicitly to be \[u(r)=-\frac{C_{3}}{C_{3}+r}\geq-\frac{C_{3}}{r}\] for some \(C_{3}>0\). Therefore \[-\frac{C_{3}}{r}\leq u(r)<\psi(r)<0.\] (A.6) Putting together (A.3) and (A.6) gives \[|\psi(r)|=|h(r)-1|\leq\frac{C_{4}}{r}.\] (A.7) Since \(\psi^{\prime}=h^{\prime}\), substituting (A.7) into (A.2), we get \[|h^{\prime}(r)|\leq\frac{C_{5}}{r^{2}}\] (A.8) for \(r\geq 4\). Putting together (A.7) and (A.8) yields (2.29) for some constant \(C\).
2305.06328
Suggestion Bot: Analyzing the Impact of Automated Suggested Changes on Code Reviews
Peer code reviews are crucial for maintaining the quality of the code in software repositories. Developers have introduced a number of software bots to help with the code review process. Despite the benefits of automating code review tasks, many developers face challenges interacting with these bots due to non-comprehensive feedback and disruptive notifications. In this paper, we analyze how incorporating a bot in the software development cycle will decrease the turnaround time of pull requests. We created a bot called SUGGESTION BOT to automatically review the code base using GitHub's suggested changes functionality in order to solve this issue. A preliminary comparative empirical investigation between the utilization of this bot and manual review procedures was also conducted in this study. We evaluate SUGGESTION BOT concerning its impact on review time and also analyze whether the comments given by the bot are clear and useful for users. Our results provide implications for the design of future systems and improving human-bot interactions for code review.
Nivishree Palvannan, Chris Brown
2023-05-10T17:33:43Z
http://arxiv.org/abs/2305.06328v1
# Suggestion Bot: Analyzing the Impact of Automated Suggested Changes on Code Reviews ###### Abstract Peer code reviews are crucial for maintaining the quality of the code in software repositories. Developers have introduced a number of software bots to help with the code review process. Despite the benefits of automating code review tasks, many developers face challenges interacting with these bots due to non-comprehensive feedback and disruptive notifications. In this paper, we analyze how incorporating a bot in software development cycle will decrease turnaround time of pull request. We created a bot called "Suggestion Bot" to automatically review the code base using GitHub's suggested changes functionality in order to solve this issue. A preliminary comparative empirical investigation between the utilization of this bot and manual review procedures was also conducted in this study. We evaluate Suggestion Bot concerning its impact on review time and also analyze whether the comments given by the bot are clear and useful for users. Our results provide implications for the design of future systems and improving human-bot interactions for code review. Pull Requests, Peer Code Reviews, Code Review Bot, Suggested changes ## I Introduction Software development on GitHub is pull-based [9], which allows branching and isolated development for individuals in a distributed software engineering team to make changes to a central repository. Millions of both open- and closed-source projects utilize hosting sites like GitHub1 with pull-based development features [8]. In software development processes on GitHub, pull requests are a way to collaborate and inspect changes. The inspection of pull requests (PRs) typically consists of a _peer code review_ of the supplied commits, or code changes from a contributor reviewed by a project maintainer with a discussion of the changes made in the pull request. The reviewer will examine the code modifications, review them, and then push them to the master branch. The process for a pull request approval in GitHub will involve getting the project maintainer(s) or peer workers to review your work; after which they will provide comments or, if your pull request is approved, will merge your changes directly into the main repository. Pull-requests as implemented by GitHub are a model for collaborating on distributed software development. Footnote 1: [https://github.com](https://github.com) In spite of having various advantages for improving code quality [14] and teamwork [18], there are various challenges with peer code reviews. For example, prior work suggests modern code review practices are time-consuming and slow [5], require in-depth understanding of code bases [1], and incorporate bias based on gender [11] and race [15]. Another key disadvantage of peer code reviews is the large burden placed on code reviewers [10]. Many pull requests in GitHub repositories are stagnated due to lack of code reviews [22]. This increases the likelihood that PRs will be rejected or ignored, ultimately discouraging developers from making future contributions to projects [12]. Moreover, complaints from developers on code review processes include untimely and unuseful or incomprehensible feedback from reviewers [13]. Projects have adopted bots to automate peer code review tasks, thereby assisting PR integrators and contributors in their work. Bots such as Review Bot have been shown to improve code quality while reducing reviewer effort at VMWare [2]. Further, Wessel et al. 
show that code review bots and automated tools are useful for reviewing processes and increase the number of pull requests merged [20]. However, prior work also suggests developers find bots challenging to work with in software development contexts. For instance, research shows software engineers report that bots have non-comprehensive feedback [19] and interrupt development workflows [3]. These poor human-bot interactions lead to frustration and distractions for software developers [21]. To overcome these challenges with code review bots, we developed a bot called Suggestion Bot. This system aims to reduce the noise from automated bots in pull request reviews by taking advantage of the GitHub suggested changes feature to provide concise feedback to developers and reduce interruptions in code review processes. We conducted an experiment to investigate the impact of using Suggestion Bot for pull request reviewing. Our study explores the following research questions. _RQ1_ How quickly are pull requests reviewed using Suggestion Bot? _RQ2_ How useful are recommendations from suggested changes using a bot while reviewing pull requests? To analyze the usage and impact of our bot, we conducted a preliminary user study with 5 participants who have some prior experience with pull request reviews on GitHub. Study tasks included manual code review and code review using Suggestion Bot to compare and contrast the advantages of incorporating the bot into the development cycle. From the results of the study, we provide initial evidence suggesting Suggestion Bot provides clear and understandable feedback and decreases the turnaround time of PRs in code review processes. ## II Background ### _GitHub Pull Requests and Peer Code Reviews_ Pull requests are the primary feature for pull-based development on GitHub [9].2 PRs rely on peer code reviews, a manual inspection of source code by project maintainers, to integrate code into the main branch. The authors of PRs can then apply or reject the code changes suggested by collaborators and reviewers through comments as part of the code review process. In the past, code changes were reviewed through formal in-person inspection meetings [7]. However, current code review processes through PRs are asynchronous and support geographically distributed reviewers [6]. Footnote 2: [https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) ### _GitHub Suggested Changes_ In 2018, GitHub introduced _suggested changes_, a new feature to provide feedback on pull requests.3 After a PR is submitted to the repository, suggested changes allow code reviewers to make recommendations to contributors by suggesting specific lines of code on the PR. This feature also provides functionality for contributors to automatically apply, reject, or edit changes suggested by reviewers. An example is presented in Figure 1.
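To make the mechanism concrete, a reviewer comment that contains a fenced block tagged `suggestion` is rendered by GitHub as a patch for the commented line(s) that the pull request author can apply with a single click. For example (the comment text and code line below are invented for illustration and are not taken from our study materials):

````markdown
This condition looks inverted; did you mean the following?

```suggestion
if not is_valid(input_data):
```
````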
GitHub users were "quick to adopt suggested changes" within a few weeks of its release, with development teams frequently utilizing them for code reviews and integrating this feature into their code review processes.4 Prior work empirically investigated the suggested changes feature, and found this tool is useful for making recommendations to improve the quality of code during PR reviews because of its concise communication and effortless integration into existing workflows [4]. Footnote 3: [https://github.blog/changelog/2018-10-16-suggested-changes/](https://github.blog/changelog/2018-10-16-suggested-changes/) Footnote 4: [https://github.blog/2018-11-01-suggested-changes-update/](https://github.blog/2018-11-01-suggested-changes-update/) ### _Code Review Bots_ Researchers have implemented a wide variety of automated bots to support pull request review tasks. Bots such as Dependabot5 and Codecov6 provide useful information to reviewers on outdated package dependencies and code testing coverage. Review Bot [2] consolidates the output from various static analysis tools, which has been shown to reduce reviewer effort in reviews [16]. Other bots, such as RevFinder [17], automatically recommend reviewers for pull requests to prevent stagnation. In general, research suggests bots can enhance code review processes [20]. We aim to build upon this work by introducing a novel bot to support code reviews by providing concise feedback and minimal interruptions to workflows. Footnote 5: [https://docs.github.com/en/code-security/dependabot/working-with-dependabot](https://docs.github.com/en/code-security/dependabot/working-with-dependabot) Footnote 6: [https://docs.codecov.com/docs/team-bot](https://docs.codecov.com/docs/team-bot) ## III Suggestion Bot To improve human-bot interactions during pull request reviews, we created a bot called Suggestion Bot. The goal of this bot is to examine pull requests and provide timely, concise, and non-interruptive feedback to developers. To do this, we leverage the GitHub suggested changes feature as the primary feedback mechanism for Suggestion Bot which provides clear feedback to users without interrupting existing workflows [4]. Suggestion Bot is able to analyze open pull requests on public repositories. The bot works by running static analysis tools on the modified version of contributors' code fetched using the GitHub API, and then generates recommendations for improvements to change the code by providing a suggested change automatically on the PR based on the static analysis tool output. An example suggestion from Suggestion Bot is presented in Figure 2. For our initial implementation of Suggestion Bot, we integrated Black7 into the workflow of our bot. Black is a popular static code analysis tool for Python that is is PEP 8 compliant and can be used for static error identification. It is open-sourced and available in free versions for continuous inspection of code quality and style during peer code reviews. Our preliminary implementation of Suggestion Bot involves code review analysis of Python code and automated suggested changes featuring formatted output from Black. We evaluate Suggestion Bot by analyzing the effectiveness and advantages of this bot in comparison with manual effort during pull request code reviews. Footnote 7: [https://black.readthedocs.io/en/stable/](https://black.readthedocs.io/en/stable/) ## IV Study Methodology We devised a preliminary experiment to understand the effects of Suggestion Bot on peer code review tasks. 
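Before turning to the study design, the sketch below illustrates the bot workflow outlined in Section III: fetch the files changed in a pull request, reformat them with Black, and post each differing line back to the PR as a GitHub suggested change. It is a minimal sketch under simplifying assumptions (placeholder repository identifiers, a token read from the environment, and a one-line diff mapping); it is not the authors' published implementation, and a posted comment must target a line that is part of the PR diff.

```python
# Minimal sketch of a Suggestion Bot-style workflow (illustrative only).
import difflib
import os

import black
import requests

API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}",
           "Accept": "application/vnd.github+json"}


def review_pull_request(owner: str, repo: str, pr_number: int) -> None:
    # Head commit of the PR, required when posting review comments.
    pr = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr_number}",
                      headers=HEADERS).json()
    head_sha = pr["head"]["sha"]

    # Files modified by the contributor in this pull request.
    files = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr_number}/files",
                         headers=HEADERS).json()
    for f in files:
        if not f["filename"].endswith(".py"):
            continue
        original = requests.get(f["raw_url"], headers=HEADERS).text
        formatted = black.format_str(original, mode=black.Mode())

        # Turn each one-line difference between the contributed code and the
        # Black output into a suggested change (multi-line hunks are skipped
        # in this simplified mapping).
        old, new = original.splitlines(), formatted.splitlines()
        matcher = difflib.SequenceMatcher(None, old, new)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag != "replace" or i2 - i1 != 1 or j2 - j1 != 1:
                continue
            fence = "`" * 3  # GitHub's "suggestion" block syntax
            body = f"{fence}suggestion\n{new[j1]}\n{fence}"
            requests.post(
                f"{API}/repos/{owner}/{repo}/pulls/{pr_number}/comments",
                headers=HEADERS,
                json={"body": body, "commit_id": head_sha, "path": f["filename"],
                      "line": i1 + 1, "side": "RIGHT"},
            )


if __name__ == "__main__":
    review_pull_request("octocat", "hello-world", 1)  # placeholder identifiers
```

The key design choice is the comment body: wrapping the replacement line in a fenced `suggestion` block is what lets the contributor apply the fix with a single click instead of reading a free-form remark.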
### _Participants_ Our preliminary evaluation consists of a user study where participants completed code review tasks with and without using Suggestion Bot. We first asked participants to complete a demographic questionnaire. Using this initial survey, we Fig. 1: GitHub suggested changes example hoped to gauge participants' knowledge of and experience with GitHub and reviewing pull requests. The participants were required to have some background in software engineering as well as a working knowledge of GitHub and reviewing pull requests to participate. Overall, we had seven participants. All subjects had some prior experience with software development and GitHub, however most participants were unfamiliar with the GitHub suggested changes feature. Participants were a mix of professional developers and students with prior industry experience, averaging about two years of professional software engineering work. In addition, participants reported using GitHub at least "Most of the Time" or "Sometimes". ### _Study Tasks_ After the initial questionnaire, participants were asked to manually review of a pull request created by the authors for study purposes. The mock PR was created on an existing and popular public Python repository on GitHub. When customizing the pull request, we aimed to ensure the study environment was relevant to real-world software and also incorporated errors that reviewers and Suggestion Bot would be able to detect. We aimed to design our task to reflect authentic code review tasks in a development environment and be reviewable in a limited amount of time for the user study (approx. 30 minutes). Participants were asked to think-aloud and make observations about the code and point out any potential errors in the Python code. After the manual review, participants were asked to review the same set of code using Suggestion Bot. Participants were similarly asked to note any observations and differences between these two processes. To compare Suggestion Bot with manual inspection, we observed the amount of time for pull requests to be reviewed both with and without the bot to understand the effects of Suggestion Bot when performing code reviews. All study sessions were recorded for further retrospective analysis of the study tasks. We concluded the study session with a post survey to gain user feedback on our bot and participants' experiences using it for code reviews. The survey used rating scale questions for participants to rank responses from 0 to 100. We were also interested in whether participants would be willing to adopt this tool and the clarity of feedback. This method gave us a more thorough and nuanced grasp of the numerous approaches to enhance the code review process and to enhance Suggestion Bot in the future. For our study, we were specifically interested in the impact of our bot on pull request review times and the usefulness of feedback from Suggestion Bot. #### Iii-B1 Time Many pull requests experience delays in review due to variety of reasons [22]. For instance, in the study pre-questionnaire participants reported experiencing delays in pull request reviews due to a variety of issues such as other work tasks and meetings. Subjects also reported PR reviews consume a lot of time due to the need to read the code, understand functionality, inspect for design issues and to assess the code quality, and test the new implementation. However, participants noted their ideal turnaround time for pull requests would be the same day or within one week. 
To measure the impact of Suggestion Bot on time, we observed how long it took participants to review pull requests in the study tasks manually in comparison to reviewing them with Suggestion Bot, and asked them to rank our system based on its ability to decrease PR turnaround time. We hypothesize that peer code reviews with Suggestion Bot will take less time compared to manual inspection of pull requests. #### Iii-B2 Usefulness Pull requests are an intermediate step for merging code into source code repositories [9]. They are a mechanism by which a developer notifies team members that a new contribution is ready to be merged to the main branch, and they also serve as a discussion forum for developers to review the code changes. However, this feedback is not always useful. For example, in our pre-questionnaire most participants reported receiving "Somewhat understandable" comments on their own pull requests. Additionally, bots can provide ineffective feedback on PRs [19]. To measure the usefulness of feedback from Suggestion Bot, we debriefed participants after the study tasks with a post-survey to provide additional insight into their experience using our bot to review a pull request. Fig. 2: Recommendation from Suggestion Bot on a pull request We speculate participants will find Suggestion Bot useful due to its concise and actionable feedback as well as its ability to seamlessly fit into code review processes. ## V Results ### _RQ1: Time_ One important conclusion from the study is that manually inspecting code and making suggestions takes more time than using Suggestion Bot. Participants averaged over seven minutes to manually inspect the pull request, while Suggestion Bot itself averaged approximately 50 seconds to run and make comments on PRs. Using a t-test to compare the average review time in each setting, we found these results are statistically significant (\(t=5.67406\), \(p\)-value \(=0.0001\)). Most participants spent time studying the code before providing review comments. Some participants mentioned they normally take their time to acquire and understand code that is novel or whose design is unknown before giving comments. We also found participants ranked Suggestion Bot with a 97.4 on its ability to reduce pull request review times. In addition to speeding up PR turnaround time, the tool also provided comments needed to improve code quality and standards. Some participants completed the code review in two to three minutes; however, some problems went undetected, such as white space issues and improper code styling. In contrast, Suggestion Bot finds these problems and makes suggestions for improvements. ### _RQ2: Usefulness_ To gain insight into the usefulness of Suggestion Bot, we surveyed participants after completing the study tasks. With respect to the usefulness of Suggestion Bot for reviewing pull requests, participants rated the bot with a 95, indicating they found it very useful for completing code review tasks. Further, Suggestion Bot was ranked 86.4 for whether subjects believed it would be adaptable for new projects and 95.7 for whether or not respondents would suggest this tool to coworkers. These results suggest participants found Suggestion Bot usable and would be willing to adopt this system for their own peer code review processes. Finally, concerning the feedback from Suggestion Bot, participants ranked our system with a 92.1 for clarity and a 93 for general perceptions of the comments.
We found participants appreciated the feedback comments from Suggestion Bot using the suggested changes feature on GitHub. These results substantiate prior work, which shows the suggested changes feature is useful for code reviews and provides clear feedback to developers on pull requests. ## VI Discussion Our results suggest that bots are effective for influencing pull request reviews on GitHub, and participants found value in using Suggestion Bot to support code review tasks. Suggestion Bot was able to reduce the review time for PRs compared to manual inspection. However, as the majority of bots improve efficiency for completing manual tasks, we were also interested in the usability of Suggestion Bot. All of our participants found the bot useful for code reviews and for providing feedback to developers, giving Suggestion Bot high usability ratings and even expressing interest in adopting the tool for their own projects and recommending it to colleagues. This highlights the value of designing code review bots that make recommendations with clear feedback and reduced noise. Based on our experience implementing and evaluating Suggestion Bot, we encourage researchers and toolsmiths to look beyond improving code review tasks with automation and also consider providing concise and actionable feedback that can be easily integrated into project workflows when developing future bots to support peer code reviews. ## VII Summary and Future Work Automated bots have been implemented to automate code review tasks and reduce developer effort during reviews. However, software bots often generate poor interactions with humans due to incomprehensible feedback and disruption of workflows. In this work, we introduce Suggestion Bot, a novel system that utilizes GitHub suggested changes to provide feedback on submitted code. We analyzed pull request code review with Suggestion Bot in comparison to manual inspection, and found that our bot reduced the turnaround time of pull requests and provided concise and useful feedback to users. This paper takes a step towards improving the design of code review bots by providing actionable feedback and minimizing interruptions to development processes. Our ongoing and future work consists of enhancing Suggestion Bot to work with other development tools for different programming languages, beyond using Black to detect Pylint issues. Additionally, we plan to further evaluate the bot with more participants and compare it with other code review bots to analyze the effects of Suggestion Bot on pull request review processes. Fig. 3: Time Comparison between manual code review and Suggestion Bot
2305.11654
V2X-Boosted Federated Learning for Cooperative Intelligent Transportation Systems with Contextual Client Selection
Machine learning (ML) has revolutionized transportation systems, enabling autonomous driving and smart traffic services. Federated learning (FL) overcomes privacy constraints by training ML models in distributed systems, exchanging model parameters instead of raw data. However, the dynamic states of connected vehicles affect the network connection quality and influence the FL performance. To tackle this challenge, we propose a contextual client selection pipeline that uses Vehicle-to-Everything (V2X) messages to select clients based on the predicted communication latency. The pipeline includes: (i) fusing V2X messages, (ii) predicting future traffic topology, (iii) pre-clustering clients based on local data distribution similarity, and (iv) selecting clients with minimal latency for future model aggregation. Experiments show that our pipeline outperforms baselines on various datasets, particularly in non-iid settings.
Rui Song, Lingjuan Lyu, Wei Jiang, Andreas Festag, Alois Knoll
2023-05-19T13:09:33Z
http://arxiv.org/abs/2305.11654v1
# V2X-Boosted Federated Learning for Cooperative Intelligent Transportation Systems with Contextual Client Selection ###### Abstract Machine learning (ML) has revolutionized transportation systems, enabling autonomous driving and smart traffic services. Federated learning (FL) overcomes privacy constraints by training ML models in distributed systems, exchanging model parameters instead of raw data. However, the dynamic states of connected vehicles affect the network connection quality and influence the FL performance. To tackle this challenge, we propose a contextual client selection pipeline that uses Vehicle-to-Everything (V2X) messages to select clients based on the predicted communication latency. The pipeline includes: (i) fusing V2X messages, (ii) predicting future traffic topology, (iii) pre-clustering clients based on local data distribution similarity, and (iv) selecting clients with minimal latency for future model aggregation. Experiments show that our pipeline outperforms baselines on various datasets, particularly in non-iid settings. ## I Introduction Machine learning (ML), a subfield of artificial intelligence, focuses on developing learning algorithms and inference models that enable digital systems to make decisions and predictions based on the knowledge learned from data. Over the past years, ML-based approaches have exhibited great potential to revolutionize various scientific, engineering, economic, and cultural fields, with outstanding technological advancements such as Google AlphaGo and OpenAI's ChatGPT. In the field of road transportation, ML can empower numerous new applications for realizing Intelligent Transportation Systems (ITS), e.g., environmental perception, road traffic flow optimization, and trajectory planning, which can significantly enhance the safety and efficiency of transportation systems [1, 2, 3, 4, 5]. Recently, a new ITS concept referred to as Cooperative Intelligent Transportation System (C-ITS) has attracted substantial interest from both academia and industry [6]. In C-ITS, the cooperation between two or more ITS sub-systems (personal, vehicle, roadside and central) offers better quality and an enhanced service level compared to conventional ITS. As illustrated in Fig. 1, road participants -- specifically, connected automated vehicles (CAVs) -- can share information with one another through vehicle-to-everything (V2X) networks, which encompass vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-network (V2N), and infrastructure-to-network (I2N). The European standards for C-ITS define several types of V2X messages to facilitate decentralized information sharing. Specifically, for cooperative awareness and perception, dedicated message types - the Cooperative Awareness Message (CAM) and the Collective Perception Message (CPM)1 - are periodically exchanged among CAVs and with roadside infrastructure [7]. By sending and receiving V2X messages, enriched and improved environmental data of road traffic can be made available within vehicular networks. Footnote 1: European Telecommunications Standards Institute (ETSI) [http://etsi.org/standards](http://etsi.org/standards), specifically EN 302 637-2 for the cooperative awareness service and TS 103 324 for the collective perception service. In centralized model training using ML, CAV clients transmit data to a centralized system through vehicle-to-network communications. This process can generate an enormous volume of data, potentially exceeding the network's capacity.
Moreover, data collected from CAV clients for ML model training cannot be directly shared due to privacy concerns. Differing from conventional ML, federated learning (FL) trains ML models using data from distributed systems, such as devices or clients, without centralizing the data [8]. In FL, connected clients share a model trained on their local data with a server, which aggregates the local models and updates Fig. 1: An overview of vehicular networks in C-ITS, including vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-network (V2N), and infrastructure-to-network (I2N) communication. the global model. The updated global model is then shared back with the clients. This process is repeated for a sufficient number of communication rounds until FL converges. The deployment of 5G-V2X vehicular networks has further facilitated the use of FL in C-ITS by providing higher data rates and greater reliability for data exchange. This allows for the training of larger ML models for C-ITS applications and services, such as [9, 10, 11]. Although FL has great potential to preserve privacy and utilize a broader range of data resources [12], the employment of FL in C-ITS has to address major challenges due to heterogeneity in data and networks, which can not only limit the performance but also lead to FL failures. **Data heterogeneity.** Data across clients is non-iid (non-identically independently distributed), resulting from various sensor types, combinations, poses, road scenes, traffic scenarios, climate and weather conditions, and more. **Network heterogeneity.** The diverse connection qualities of clients can slow down model sharing and cause communication delays for global model aggregation, which impedes the FL process. To address these challenges and enhance the application of FL in C-ITS, we propose a novel FL framework. The main idea is to select clients for upcoming communication rounds based on (_i_) the prediction of connection qualities in the context of road traffic status and (_ii_) the similarity of local data distribution in clients. ## II Background and Related Work We discuss greedy and gossip client selection in FL, data- and network-based strategies, and FL in vehicular networks considering road traffic features. **Greedy and gossip client selection.** FL, initially proposed by McMahan et al. [13], suffers from the straggler effect due to varying connection qualities [14]. Greedy client selection includes all clients in each communication round, while gossip (stochastic greedy) selection randomly selects connected clients. Both strategies struggle to avoid the straggler effect. **Network-based client selection.** Strategies focusing on network quality [15, 16, 17, 18] reduce the straggler effect but aren't specifically designed for vehicular networks with dynamic connection qualities and high-priority traffic services. Inspired by [19], we optimize client selection by predicting communication latency in vehicular networks. **Data-based client selection.** Client selection based on data distribution tackles heterogeneity. The approaches in [20, 21, 22, 23, 24, 25] consider data heterogeneity but overlook network parameters. Our work addresses both data distribution and network quality in vehicular networks. A comparison of the client selection paradigms is shown in Tab. II. ## III Framework Our _contextual client selection_ framework is illustrated in Fig. 
2, comprising V2X information sharing, traffic topology prediction, data-level client clustering, and network-level client clustering. **V2X message fusion.** We first fuse V2X messages. Continuously receiving CAM and CPM enables dynamic road maps with traffic object states. Road-side infrastructure collects and forwards V2X messages to a server via V2I and I2N networks. The server filters and fuses messages, obtaining traffic object states, such as position, speed, and acceleration. Fused results form an road traffic topology graph (RTTG), with each CAV characterized by a node with attributes. The RTTG digitizes C-ITS and recreates vehicular networks virtually. **RTTG prediction.** We predict future RTTGs. After V2X message fusion, we initialize a prediction instance for each CAV to estimate its trajectory. Predicted trajectories build Fig. 2: Contextual client selection pipeline: (**I**) V2X message fusion; (**2**) Road traffic topology graph (RTTG) prediction; (**3**) Data-level client grouping; (**4**) Network-level client selection. future RTTGs. Predicted RTTGs integrate with digital C-ITS, providing possible connection quality for each CAV. We simulate networks in digital twin and calculate FL communication latency based on predictive transport scenarios. **Data-level client grouping.** We cluster clients into groups considering data heterogeneity. Our goal is to group clients with similar data distribution, ensuring each subset represents the whole group's data features. We observe model updates, considering gradient similarity as a data similarity criterion [20]. We group clients based on model parameter similarity. Clients must report gradient updates before a deadline for inclusion in data-level client grouping. After grouping, each subset represents its cluster. Selecting at least one client per cluster ensures satisfactory training performance. **Network-level client election.** We elect clients in each group based on contextual communication latency. Using predicted RTTG latency, we determine efficient client contributions for upcoming communication rounds. We employ the _Fast-\(\gamma\)_ rule, selecting the \(\gamma\) clients with the lowest communication delay (\(0<\gamma<1\)) per cluster. Through these stages, representative clients with minimal contextual communication latency are chosen for model aggregation. This process increases FL communication efficiency by optimizing communication rounds and round duration. De-selected clients save computational resources by not training models locally. ## IV Performance evaluation We implement and demonstrate FL with our pipeline as well as other four baselines, i.e. greedy, gossip, data-based and network-based client selection strategy, on a computer cluster with 4\(\times\) NVIDIA-A100-PCIE-40GB GPUs and 4\(\times\) 32-Core-AMD-EPYC-7513 CPUs. The environment is a Linux system with Pytorch 1.8.1 and Cuda 11.1. ### _Experiment setup_ We conduct the experimental evaluation by training models on three widely used open datasets MNIST [26], CIFAR-10 [27] and SVHN [28] distributed into 100 CAV clients in non-iid setting.2 Footnote 2: In our default non-iid setting, each client owns only 2 out of 10 classes. We compare our pipeline with four other client selection strategies as baselines, i.e., greedy, gossip, data-based and network-based, as described in Sec. II. The learning rate is 0.001 and the batch size is 64. The number of the local epochs is set as 3 for training on MNIST, and 1 for training on CIFAR-10 and SVHN, respectively. 
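For a concrete picture of the data-level grouping and the network-level _Fast-\(\gamma\)_ election described above, a minimal sketch is given below; clustering flattened model updates with k-means and representing the predicted communication latency as one scalar per client are illustrative assumptions, not the exact implementation evaluated in these experiments.

```python
# Illustrative sketch: data-level grouping + network-level (Fast-gamma) election.
import numpy as np
from sklearn.cluster import KMeans


def group_clients(updates: dict, n_groups: int) -> dict:
    """Cluster clients whose local model updates point in similar directions.

    `updates` maps a client id to its flattened model update; normalizing the
    vectors makes Euclidean k-means behave like clustering by cosine similarity.
    """
    ids = sorted(updates)
    X = np.stack([updates[i] / (np.linalg.norm(updates[i]) + 1e-12) for i in ids])
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)
    return dict(zip(ids, labels))


def elect_clients(groups: dict, predicted_latency: dict, gamma: float = 0.1) -> list:
    """Per cluster, keep the fraction `gamma` of clients with the lowest
    communication latency predicted from the future road traffic topology."""
    selected = []
    for g in set(groups.values()):
        members = sorted((c for c, lab in groups.items() if lab == g),
                         key=lambda c: predicted_latency[c])
        keep = max(1, int(np.ceil(gamma * len(members))))  # at least one per cluster
        selected.extend(members[:keep])
    return selected


# Example: 100 CAV clients, 10 data-level clusters, roughly 10% selected per round.
rng = np.random.default_rng(0)
updates = {c: rng.normal(size=256) for c in range(100)}
latency_ms = {c: float(rng.uniform(5.0, 50.0)) for c in range(100)}
round_clients = elect_clients(group_clients(updates, 10), latency_ms, gamma=0.1)
```

Keeping at least one client per cluster preserves coverage of the overall data distribution, while the latency sort limits the straggler effect; the selected identifiers are then used for the next round of local training and model aggregation.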
Except for the greedy strategy (all clients are selected in each communication round), the general selection rate for FL clients is defined as 10%, i.e. around 10 clients are selected in each communication round. ### _Performance results_ We show the general performance of FL with contextual client selection for training models on three datasets distributed across 100 vehicle clients under the default non-iid setting. We train deep learning models of different sizes on MNIST, CIFAR-10 and SVHN as FL tasks. As the experimental results in Fig. 3 show for all three tasks, FL with our contextual client selection outperforms the other four baselines. Generally, FL with contextual client selection achieves remarkably higher test accuracy than the other four strategies. Even though the network-based strategy allows the ML model to be trained to a comparable test accuracy on SVHN, the contextual client selection results showcase much more stable convergence, as the data heterogeneity across CAVs is taken into account. Fig. 3: FL training performance (testing accuracy changes over time in seconds) using various client selection strategies on non-iid MNIST (left), CIFAR-10 (middle) and SVHN (right) data distributed in 100 clients. We conduct the experiments with various connection rates and evaluate the performance of FL. We take the time required to reach a test accuracy of 0.5 for FL with gossip client selection as a baseline, and evaluate the time reduction rate of FL with the other strategies. As the comparison results in Tab. I show, FL with contextual client selection always needs less time than the other two strategies at each connection rate. The time reduction rates remain robustly over 20\(\times\) even when only 20% of clients are connected to the network. ## V Conclusion In this work, we reviewed existing client selection strategies for FL and introduced a novel four-stage V2X-Boosted FL pipeline for C-ITS. The approach tackles both data and network heterogeneity in vehicular networks, boosting communication efficiency by reducing the number of communication rounds and shortening the time required for each round. Compared to other strategies, FL with contextual client selection achieves higher accuracy and more stable convergence by leveraging V2X messages disseminated in vehicular networks. Future work will further consider analytical models of communication networks and conduct more validation on traffic scenario data, such as [29, 30, 31].
2303.04869
CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation
Beyond novel view synthesis, Neural Radiance Fields are useful for applications that interact with the real world. In this paper, we use them as an implicit map of a given scene and propose a camera relocalization algorithm tailored for this representation. The proposed method enables the precise position of a device with a single RGB camera to be computed in real time during its navigation. In contrast with previous work, we do not rely on pose regression or photometric alignment but rather use dense local features, obtained through volumetric rendering, which are specialized to the scene with a self-supervised objective. As a result, our algorithm is more accurate than competing methods, able to operate in dynamic outdoor environments with changing lighting conditions, and can be readily integrated into any volumetric neural renderer.
Arthur Moreau, Nathan Piasco, Moussab Bennehar, Dzmitry Tsishkou, Bogdan Stanciulescu, Arnaud de La Fortelle
2023-03-08T20:22:08Z
http://arxiv.org/abs/2303.04869v2
# CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation ###### Abstract Beyond novel view synthesis, Neural Radiance Fields (NeRF) are useful for applications that interact with the real world. In this paper, we use them as an implicit map of a given scene and propose a camera relocalization algorithm tailored for this representation. The proposed method enables to compute in real-time the precise position of a device using a single RGB camera, during its navigation. In contrast with previous work, we do not rely on pose regression or photometric alignment but rather use dense local features obtained through volumetric rendering which are specialized on the scene with a self-supervised objective. As a result, our algorithm is more accurate than competitors, able to operate in dynamic outdoor environments with changing lightning conditions and can be readily integrated in any volumetric neural renderer. ## 1 Introduction Visual localization, i.e. the problem of camera pose estimation in a known environment [33], enables to build camera-based positioning systems for various applications such as autonomous driving [25], robotics [2] or augmented reality [21]. Map-based navigation systems for such applications operate with a reference map of the environment, built from previously collected data. These maps are commonly defined with explicit 3D scenes representations (point cloud, voxels, meshes, etc.), which only store discrete information while the underlying environment they represent is continuous. Recently, Neural Radiance Fields (NeRF) [23] and related volumetric-based approaches [28, 50] have emerged as a new way to implicitly represent a scene. 3D coordinates are mapped to volume density and radiance in a neural network. NeRF is trained with a sparse set of posed images of a scene and learns its 3D geometry via differentiable rendering. The resulting model is continuous, i.e. the radiance of all 3D points in the scene can be computed, which enables the rendering of photorealistic views from any viewpoint. Beyond their rendering ability, implicit scene representations are actively investigated to be used as the map representation for navigation systems [1, 32, 18, 15]. This work focuses on one aspect of the navigation pipeline, understudied in the specific case of implicit scene representation, the image localization problem. Our motivation is to provide a camera relocalization algorithm (i.e. 6-DoF pose estimation) from one RGB image based only on a learned volumetric-based implicit map. We aim to design a method for robotics applications: it must be fast to compute, robust to outdoor conditions and could be deployed in dynamic environments. Existing localization methods that use implicit maps either have limited accuracy by lack of geometric reasoning [26, 6], or do not meet the aforementioned requirements because photometric alignment [52, 17] can be slow and assumes constant lightning conditions. Contribution.In this paper, we introduce local descriptors in NeRF's implicit formulation and we use the resulting model, named CROSSFIRE, as the scene representation of a 2D-3D features matching method. We train simultaneously a CNN feature extractor and a neural renderer to provide consistent scene-specific descriptors in a self-supervised way. During training, we leverage the 3D in Figure 1: **Visual localization in a neural renderer. 
Starting from a coarse localization prior, our algorithm estimates the pose of a query image by comparing image features to descriptors rendered from a neural scene representation.** formation learned by the radiance field in a metric learning optimization objective which does not require supervised pixel correspondences on image pairs nor a pre-computed 3D model. The proposed descriptors represent not only the local 2D image content but also the 3D position of the observed point, which enables to solve ambiguities in areas with repetitive patterns. Our method can use any differentiable neural renderer and, hence, can directly benefit from recent NeRF improvements. For instance, we make the model computationally tractable thanks to the multi-resolution hash encoding from Instant-NGP [28] and adapted to dynamic outdoor scenes thanks to appearance embeddings from Nerf-W [22]. Finally, we show that these features can be used to solve the visual relocalization task with an iterative algorithm composed of a dense features matching step followed by standard Perspective-n-Points (PnP) camera pose computation. We take inspiration from structure-based visual localization pipelines [38, 36] but replace the commonly used sparse 3D model obtained from Structure-from-Motion by our neural field from which dense features are extracted. For a given camera pose candidate, we render dense descriptors and depth maps. Descriptors are used to establish 2D-2D matches which are upgraded to 2D-3D matches by the rendered depth. We can iteratively refine the estimated pose by repeating the aforementioned procedure, as presented in figure 1. ## 2 Related work Localization with Neural Scenes Representations.Many algorithms have recently been developed to compute the camera pose of an image w.r.t. a NeRF model. One line of work has developed visual SLAM systems, where the implicit map is learned during the navigation. iMAP [42] and NICE-SLAM [55] leverages the depth information of RGB-D cameras to de-couple pose and scene geometry estimation. Then, NeRF-SLAM [34] extends these approaches to RGB images by using dense monocular SLAM as supervision for the NeRF map. In contrast with these methods, we target a relocalization approach, where the environment has already been visited. In this scenario, the map is pre-computed offline or derived from a SLAM approach. Our solution could be used as a relocalization module that can be plugged into implicit SLAM pipelines for continuous navigation and place re-visit. A first relocalization solution is to align iteratively a query and a rendered image by optimizing the camera pose based on the photometric error. This has been first proposed by iNeRF [52] which demonstrates accurate pose estimation on usual NeRF datasets, i.e. controlled environments such as synthetic or static indoor scenes. However, the localization process is slow because each iteration requires rendering and backpropagation through the entire NeRF model, and the convergence bassin is small. This idea has then been improved by using more efficient rendering models and parallel optimization based on Monte-Carlo sampling [17]. Another direction uses Absolute Pose Regression [14, 25] that directly connects images and camera poses in a deep network. While these methods usually present a low accuracy [39], they can be improved by leveraging a NeRF during the training step. 
Direct-PoseNet [7] renders the image at the estimated pose and uses the differentiability of the renderer to define an additional loss function based on the photometric error. Then DF-Net [6] iterates on this idea and defines a loss based on features matching. Finally, LENS [26] pre-computes a large set of synthetic views uniformly distributed across the scene and uses it as additional training data. Related to our work, Features Query Network [12] stores local descriptors in an implicit scene representation and uses it to perform local features matching in a structure-based formulation [38, 36, 31]. While we use a related localization process, our method is novel on two crucial aspects. First, FQN is limited to a pre-computed sparse 3D point cloud, while our proposal provides dense features from a radiance field. Then, instead of memorizing in a supervised way how descriptors vary w.r.t. viewpoint in an off-the-shelf features extractor, we take the opposite direction and learn scene-specialized descriptors without supervision through a metric learning objective and decide to model these features as not dependent on the viewing direction, in order to facilitate the matching process. To the best of our knowledge, learning visual localization descriptors in a neural radiance field without supervision has not been proposed before. Learning-based description of local features.Local descriptors provide useful descriptions of regions of interest that enable to establish accurate correspondences between pairs of images describing the same scene. While handcrafted descriptors such as SIFT [19, 20] and SURF [4] have known great success, the focus has shifted in recent years to learn features extraction from large amounts of visual data. Many learning-based formulations [8, 13, 41, 53, 44, 10] rely on siamese convolutional networks trained with pairs or triplets of images/patches supervised with correspondences. NeRF-Supervision [51] takes advantage of the geometric consistency of depth-supervised object-centric NeRFs to obtain correspondences between different views of the object in order to learn view-invariant dense object descriptors. Features extractors can be trained without annotated correspondences by augmenting two versions of a same image or using weak supervision. SuperPoint [9] uses homographies while Novotny et al. [30] leverage image warps. In a recent work, CAPS [48] have shown that accurate corre spondences between different views can be obtained using weak supervision through the use of relative camera poses. Our proposed method follow a different path to learn repeatable descriptors: we constraint the image feature extractor to provide the same descriptors map as the Neural Field. This approach allows us to learn dense scene-specific descriptors without annotated correspondences since the neural renderer provides similar features for rays which intersect the same point. ## 3 Method The proposed algorithm estimates the 6-DoF camera pose of a query image in an already visited environment. We first train our modules in an offline step, using a set of reference images with corresponding poses, captured beforehand in the area of interest. A 3D model of the scene is not a pre-requisite because we learn the scene geometry during the training process. ### Neural rendering of descriptors Background.NeRF [23] is capable of rendering a view from any camera pose in a given scene while being trained only with a sparse set of observations. 
Given a camera pose with known intrinsics, 2D pixels are back-projected in the 3D scene through ray marching. The density \(\sigma\) and RGB color \(c\) of each point \(p=(x,y,z)\) along the ray are evaluated by a MLP \(R_{\theta}\): \(c,\sigma=R_{\theta}(p,d)\) where \(d\) is the viewing direction. The final pixel color of a pixel is computed with differentiable volumetric rendering along the ray, which enables to train the implicit scene representation by minimizing the photometric error of rendered images. NeRF makes the assumption that illumination in the scene remain constant over time, which does not hold for many real world scenes. NeRF-W [22] overcomes this limitation by modeling appearance with a per-image latent codes \(\mathcal{L}_{i}^{(a)}\) (i.e. appearance embedding) that controls the appearance of each rendered view. Another limitation the original formulation of NeRF is the computation time: rendering an image requires \(H\times W\times N\) evaluations of the 8 layers MLP, where \(N\) is the number of points sampled per ray, resulting in slow training and rendering. Recently, Instant-NGP [28] proposes to use multi-resolution hash encoding to accelerate the process by storing local features in hash tables, which are then processed by much smaller MLPs compared to NeRF resulting in significant improvement of both training and inference times. Neural radiance and descriptors fields.CROSSFIRE combines the 3 aforementioned techniques to efficiently render dynamic scenes. However, our main objective is not photorealistic rendering but, rather, features matching with new observations. While it is possible to align a query image with a NeRF model by minimizing the photometric error [52], such approach lacks robustness w.r.t. variations in illumination. Instead, we propose to add positional features, i.e. \(D\)-dimensional latent vectors which describe the visual content of a region of interest in the scene, as an additional output of the radiance field function. In contrast with the rendered color, we model these descriptors as invariant to viewing direction \(d\) and appearance vector \(\mathcal{L}_{i}^{(a)}\) (_i.e._ we do not provide \(d\) and \(\mathcal{L}_{i}^{(a)}\) to the MLP head responsible of generating the positional feature, see Figure 2). We verify through ablation study in section 4.3 that this descriptor property makes the matching process more robust. Similar to color, the 2D descriptor of a camera ray is aggregated by the usual volumetric rendering formula applied on descriptors of each point along the ray. The architecture of our proposed neural renderer is summarized in Figure 2 and implementations details are provided in section. The training pipeline of CROSSFIRE is explained in the next section. ### Self-supervised training of features Motivation.In the previous section, we explained how our proposed neural renderer describes the map for relocalization purposes thanks to the introduced positional descriptors. Additionally, we also need to extract features from the query image. A simple solution, proposed by FQN [12], is to use an off-the-shelf pre-trained features extractor such as SuperPoint [9] or D2-Net [10], and train the neural renderer to memorize observed descriptors depending on the viewing direction. Optimizing scene-specific descriptors, however, allows to better differentiate repetitive patterns in the scene resulting, in a more robust localization and reducing failure cases. 
To this end, we propose to train jointly the feature extractor with the neural renderer by defining an optimization objective which leverages the scene geometry. We obtain descriptors specialized on the target scene which describe not only the visual content but also the 3D location Figure 2: **Neural radiance and descriptors fields. The input coordinate is encoded by the multi-resolution hash tables from Instant-NGP [28] enabling fast training and rendering. We use per-image appearance embeddings to handle varying illumination across training images. The descriptors heads is invariant to viewing direction and appearance vector allowing to learn robust localization features.** of the observed point, with better discriminant property than generic descriptors. The training procedure of our system is described in Figure 3. One training sample corresponds to a reference image with its corresponding camera pose. From one side, the image is processed by the features extractor to obtain the descriptors map \(F_{I}\). On the other side, we sample points along rays for each pixel using camera intrinsics, compute density, color and descriptor of each 3D point, and finally perform volumetric rendering to obtain a RGB view \(C_{R}\), a descriptors map \(F_{R}\) and a depth map \(D_{R}\). Features Extraction.Our features extractor is a simple fully convolutional neural network with 8 layers, ReLU activations and max poolings. The input is a RGB image \(I\) of size \(H\times W\) and produces a dense descriptors map \(F_{I}\in\mathbb{R}^{H/4\times W/4\times d}\). Learning the Radiance Field.Similar to NeRF [23], we use the mean squared error loss \(\mathcal{L}_{MSE}\) between \(C_{R}\) and the real image to learn the radiance field. As we render entire, although downscaled, images in a single training step, we can leverage the local 2D image structure and minimise the structural dissimilarity (DSSIM) loss \(\mathcal{L}_{SSIM}\)[49], which we observe to produce sharper images and more accurate scene geometry. Depth maps are used by the localization process to compute the camera pose, and then better depth results in more accurate poses. NeRF models trained with limited training views can yield incorrect depths, due to the shape-radiance ambiguity [54]. We add a regularization loss \(\mathcal{L}_{TV}\) which minimizes depth total variation of randomly sampled 5x5 image patches to encourage smoothness and limit artefacts on the rendered depth maps [29]. We verify in section 4.3 that using these 3 loss functions is beneficial for the localization accuracy. Learning the Descriptors Field.Our main goal is to match the descriptors map from the CNN features extractor and the corresponding one from the neural renderer. The self-supervised optimization objective encourages both models to produce identical features for a given pixel while preventing high matching scores between points far from each other in the 3D scene. We define a loss function with two terms \(\mathcal{L}_{pos}\) and \(\mathcal{L}_{neg}\), applied on a pair of descriptors maps, each containing \(n\) pixels. We use the cosine similarity, noted \(\otimes\), to measure similarity between descriptors. 
The first loss term \(\mathcal{L}_{pos}\) maximizes the similarity between descriptors maps \(F_{I}\) and \(F_{R}\) from both models: \[\mathcal{L}_{pos}=\frac{1}{n}\sum_{i=1}^{n}\text{max}(0,1-F_{I}[i]\otimes F_{R }[i]) \tag{1}\] The second loss term \(\mathcal{L}_{neg}\) samples random pairs of pixels and ensures that pixel pairs with large 3D distances have dissimilar descriptors: Figure 4: **Similarities of positional features. We show the dense matching map between one descriptor from the query image (red dots in left images) and the reference descriptors from the neural renderer. Thanks to our training objective, descriptors close (in 3D) to the selected points have high similarity whereas others do not match. This behaviour is enforced by our loss function.** Figure 3: **Training pipeline of CROSSFIRE. We jointly optimize the neural renderer and the features extractor to obtain robust, scene-specific localization descriptors. We use regularization losses (i.e. TV and SSIM) to increase the consistency of the neural renderer. We propose a two-terms loss that maximizes the similarity between corresponding feature maps while penalizing pixel pairs that are geometrically distant from each other.** \[\mathcal{L}_{neg}=\frac{1}{mn}\sum_{k,i=1}^{m,n}\text{max}(0,F_{I}[p_{k}(i)]\otimes F _{R}[i]-t_{\lambda}(p_{k}(i),i)) \tag{2}\] where \(t_{\lambda}(i,j)=max(0,1-\lambda\|xyz(i)-xyz(j)\|)\). \(xyz(i)\) is the 3D coordinate of the point represented by the \(i_{th}\) pixel in the descriptors map. We compute it from the camera parameters of the rendered view and predicted depth. It should be noted that we do not backpropagate the gradient of this loss to the depth map because the gradient of this loss does not provide meaningful signal to learn the scene geometry. \(\lambda\) is an hyperparameter which controls the maximum similarity between descriptors at a given 3D distance. \((p_{k})_{m}\) are random permutations of pixel indices from 1 to n. The proposed self-supervised objective is close to a classical triplet loss [3], but we show in fig 8 that scaling the loss by the 3D coordinates in the formulation is crucial to learn smooth and selective descriptors. A visualization of the similarity between descriptors enforced by the proposed loss is shown in Fig 4. Finally, we optimize the following loss function at each training step: \[\mathcal{L}=\mathcal{L}_{MSE}+\lambda_{1}\mathcal{L}_{SSIM}+\lambda_{2} \mathcal{L}_{TV}+\mathcal{L}_{pos}+\mathcal{L}_{neg} \tag{3}\] where \(\lambda_{1}=0.1\) and \(\lambda_{2}=1e^{-3}\) are hyper-parameters introduced to balance SSIM and TV losses, respectively. ### Visual Localization by iterative dense features matching This section describes the localization pipeline used to estimate the camera pose of a given query image using our learned renderer and features. An overview of this procedure is shown in Figure 5. The proposed solution combines simple and commonly used techniques and we do not claim algorithmic novelty on this part. The goal is, rather, to demonstrate that the quality and robustness of our learned features enables to reach precise localization while using basic features matching and pose estimation strategies. 1. Localization prior.Similar to related features matching methods [38, 36, 12], we assume to have access to a localization prior, i.e. a camera pose relatively close to the query pose. A view observed from the prior should have an overlapping visual content with the query image to make the matching process feasible. 
Such priors can be obtained by matching a global image descriptor against an image retrieval database [3, 36] or an implicit map [24]. 2. Features extraction.First, we extract dense descriptors from the query image through the CNN and descriptors and depth from the localization prior with the neural renderer. 3. Dense Features Matching.Query and reference descriptors are matched with cosine similarity. We consider that 2 descriptors are a match if the similarity is higher than a threshold \(\theta\) and if it represent the best candidate in the other map in both direction (mutual matching). We then compute the predicted 3D coordinate of rendered pixels which have been matched (thanks to camera parameters and depth) and obtain a set of 2D-3D matches. 4. Camera Pose Estimation.We use the Perspective-N-Points algorithm combined with RANSAC [11], in order to get a robust estimate by discarding outliers matches. 5. Iterative Pose Refinement.While classical 3D models only have access to a finite set of reference descriptors, our neural renderer can compute them from any camera pose. Similar to FQN [12] and ImPosing [24], we can then consider the camera pose estimate as a new localization prior and iterate the previously mentioned steps multiple times to refine the camera pose. ## 4 Experiments We first present a comparison of CROSSFIRE with related methods relocalization that use implicit map representations in section 4.1. We also evaluate the impact of the localization prior in section 4.2 and additional ablation studies Figure 5: **CROSSFIRE localization procedure.** Descriptors are extracted from the query image and matched against descriptors rendered from the localization prior. Depth information provides 2D-3D matches that enable to compute the pose with PnP + RANSAC. This process can be repeated iteratively, by rendering descriptors from the predicted pose. in 4.3. Qualitative visualizations are provided in Figure 6 and Figure 7, but also in the supplementary video. Implementation.Our system is implemented in PyTorch. The hash tables and MLPs of the neural renderer use tiny-cuda-nn [27]. We use the default PnP pose solver from PoseLib [16]. In all the proposed experiments, we use descriptors of size 32. We train the models for 100k iterations. The initial learning rate is set to \(1e^{-3}\) and reduced to \(1e^{-4}\) after 2000 iterations. For ensuring reproducibility, the detailed architecture of our neural networks are provided in supplementary materials. Datasets.We evaluate our method on 2 standard localization benchmarks. 7scenes [40] consists in indoor static scenes captured using a hand-held camera. Cambridge Landmarks [14] contains outdoor scenes representing buildings observed from different viewpoints and lighting conditions, with dynamic occluders such as pedestrians and cyclists in both train and test sets. Efficiency.The storage requirement of our modules is 50MB (48MB for the hash tables and 2MB for the neural networks). In contrast with explicit maps, this number does not grow with the amount of reference data. All trainings and inferences have been performed on a RTX3090 GPU. Trainings take approximately 5 hours for indoor scenes and 15 hours for larger outdoor scenes. Inference times are: 9ms for features extraction, 5ms for rendering, 5ms for dense matching and \(\approx 60\)ms for PnP+RANSAC (because we have a lot of matches), resulting in \(\approx 200\)ms for the total time with 3 iterations reported in the experiments. 
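To make the localization loop above concrete, the following is a minimal sketch of one iterative matching-and-PnP refinement pass, assuming a trained renderer and feature extractor exposed through simple interfaces. The interfaces, the `backproject` and `pose_from_rvec_tvec` helpers, and the use of OpenCV's `solvePnPRansac` (standing in for the PoseLib solver used in the paper) are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of one CROSSFIRE-style relocalization pass (illustrative interfaces).
import cv2
import numpy as np
import torch

F = torch.nn.functional


def backproject(pix, depth, cam_to_world, K_inv):
    """Unproject pixels (u, v) with their z-depth into world coordinates."""
    ones = torch.ones(len(pix), 1)
    rays = (K_inv @ torch.cat([pix, ones], dim=1).T).T   # camera-frame directions
    pts_cam = rays * depth.unsqueeze(1)                  # 3D points, camera frame
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return pts_cam @ R.T + t                             # 3D points, world frame


def pose_from_rvec_tvec(rvec, tvec):
    """Convert the PnP result (world-to-camera) into a 4x4 camera-to-world pose."""
    R, _ = cv2.Rodrigues(rvec)
    pose = np.eye(4)
    pose[:3, :3] = R.T
    pose[:3, 3] = (-R.T @ tvec).ravel()
    return torch.from_numpy(pose).float()


def localize(query_img, pose_prior, renderer, extractor, K, n_iters=3, thr=0.6):
    """Iteratively refine a 6-DoF camera-to-world pose from a coarse prior.

    Assumed interfaces (not the authors' code):
      renderer(pose) -> desc_r [N, D], depth [N], pix_r [N, 2]  rendered at `pose`
      extractor(img) -> desc_q [M, D], pix_q [M, 2]             dense CNN descriptors
      K: 3x3 intrinsics of the query camera as a float64 numpy array
    """
    K_inv = torch.from_numpy(np.linalg.inv(K)).float()
    desc_q, pix_q = extractor(query_img)
    pose = pose_prior
    for _ in range(n_iters):
        desc_r, depth, pix_r = renderer(pose)

        # Dense mutual nearest-neighbour matching with cosine similarity.
        sim = F.normalize(desc_q, dim=1) @ F.normalize(desc_r, dim=1).T   # [M, N]
        best_r = sim.argmax(dim=1)        # best rendered match for each query pixel
        best_q = sim.argmax(dim=0)        # best query match for each rendered pixel
        mutual = ((best_q[best_r] == torch.arange(len(best_r)))
                  & (sim.max(dim=1).values > thr))
        if mutual.sum() < 4:              # PnP needs at least four correspondences
            break

        # Lift matched rendered pixels to 3D with the rendered depth, then solve PnP.
        r_idx = best_r[mutual]
        pts3d = backproject(pix_r[r_idx], depth[r_idx], pose, K_inv)
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            pts3d.detach().cpu().numpy().astype(np.float64),
            pix_q[mutual].detach().cpu().numpy().astype(np.float64), K, None)
        if not ok:
            break
        pose = pose_from_rvec_tvec(rvec, tvec)
    return pose
```

The only scene-specific components in this sketch are the renderer and the feature extractor; the matching and pose solving are off-the-shelf geometry, which is why the learned descriptors, rather than the solver, carry the method's contribution.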
Speedup can be achieved easily by less refinements, at the cost of minor accuracy drop. ### Comparison to related methods We evaluate our method on both datasets using a maximum of 3 iterations of the localization process. We use as localization prior the top 1 reference pose retrieved by DenseVLAD [45]. In order to render reference frames efficiently, the matching step is done at a small resolution: 194x108 for Cambridge Landmarks and 161x120 for 7scenes. We compare our algorithm to the learning-based visual relocalization methods that use implicit map representations in their pipeline. * Direct-PoseNet [7] train an Absolute Pose Regressor with an additional photometric loss by rendering the estimated pose through NeRF. * DFNet [6] goes in the same direction but defines a features matching loss with the rendered view. * LENS [26] trains an absolute pose regressor with NeRF rendered views uniformly distributed across the scene. * FQN [12] regresses descriptors in an implicit representation of a sparse 3D model. This method is the closest Figure 6: **Success and failure cases:** we show inliers matches between the query image and the NeRF rendered image at prior pose. Using dense features field for localization enables to establish accurate correspondences in texture-less areas (left). Failure cases are observed in the presence of dynamic objects (middle), for which the PnP converges on a wrong pool of matches, and ambiguous cases (right) where the CNN mixes up the symmetrical parts of the church due to lack of long-range reasoning. Figure 7: **Visualization of rendered views, descriptors and matches in StMarsChurch. We show on the top row the query image (right), the RGB rendered view from the localization prior (left) and from the 1st estimated pose (middle). The second row represents a PCA visualization of the corresponding descriptors map from the neural renderer (left and middle) and the features extractor (right). The last row displays the inlier matches obtained by our pipeline.** to our work because it uses the same iterative localization process and store descriptors in a neural scene representation. The main differences are that descriptors are not trained specifically from the scene but memorized from a pretrained features extractors, and that the representation is sparse whereas ours is dense. Results are reported for D2-Net [10] and MobileNetv2 [35] descriptors. iNerf [52] and related methods are not present in our evaluation, first because results on usual localization benchmarks are not reported in the corresponding papers, but also because it does not meet the robotics requirements described before, i.e. fast inference for iNeRF and compatibility with outdoor dynamic environments. The results of the comparisons for both datasets are shown in Table 1. CROSSFIRE obtains the lowest error for both indoor localization and outdoor scenes. Results on the highly ambiguous Stairs scene are higher than in other scenes but still better than other methods for which the localization process sometimes totally fail. Furthermore, we consistently perform better than NeRF-assisted APR methods and, more importantly, than pre-trained implicit descriptors. Because the camera pose estimation process used in FQN is similar than in ours, these results indicate that our scene-specific features are beneficial compared to off-the-self features extractors. We hypothesize that the absolute localization accuracy in outdoor scenes is lower for 2 main reasons. 
First, we lack a way to handle dynamic content such as pedestrians during the test step, which we observe to degrade the quality of our matches. Second, the quality of depth maps in these scenes is less accurate than in indoor scenarios, especially for background, due to observable image content very far from the camera. As we use depth to compute the 3D coordinates of matches, this introduces noise in the localization process. ### How good the pose priors need to be? To measure how bad initialization impacts localization results, we conducted an experiment on the Chess scene where we replace the prior from image retrieval by using the same prior for all test images (shown in Figure 1). Results are shown in Table 2. We observe that, thanks to our iterative refinement, imprecise priors do not affect the final localization accuracy but rather require more iterations to reach the correct camera pose. ### Ablation studies Descriptor loss.The self-supervised loss used to train descriptors is similar to the triplet loss commonly used for metric learning, except an additional term for negative pairs which depends on the 3D distance between points. We propose a qualitative comparison between the triplet loss and our proposal in figure 8. We observe that the representation learned by our system is smooth and more expressive than the triplet loss which only separate the scene into few clusters. More details including a quantitative comparison is provided in supplementary materials. \begin{table} \begin{tabular}{|l|c c c|c|c c|} \hline Dataset / Methods & \multicolumn{2}{c|}{Absolute Pose Regression + NeRF} & \multicolumn{4}{c|}{Implicit local features} \\ \hline Cambridge & DirectPN [7] & DFNet [6] & LENS [26] & FQN-D2N [12] & FQN-MN [12] & **CROSSFIRE (Ours)** \\ \hline Kings College & - & 0.73m / 2.4\({}^{\circ}\) & 0.33m / 0.5\({}^{\circ}\) & 0.32m / 0.5\({}^{\circ}\) & **0.28m / 0.4\({}^{\circ}\)** & 0.47m / 0.7\({}^{\circ}\) \\ Old Hospital & - & 2.00m / 3.0\({}^{\circ}\) & 0.44m / 0.9\({}^{\circ}\) & 0.64m / 0.9\({}^{\circ}\) & 0.54m / 0.8\({}^{\circ}\) & **0.43m / 0.7\({}^{\circ}\)** \\ Shop Facade & - & 0.67m / 2.2\({}^{\circ}\) & 0.27m / 1.6\({}^{\circ}\) & 0.14m / 0.6\({}^{\circ}\) & **0.13m / 0.6\({}^{\circ}\)** & 0.20m / 1.2\({}^{\circ}\) \\ StMarys Church & - & 1.37m / 4.0\({}^{\circ}\) & 0.53m / 1.6\({}^{\circ}\) & 0.93m / 3.5\({}^{\circ}\) & 0.58m / 2.0\({}^{\circ}\) & **0.39m / 1.4\({}^{\circ}\)** \\ \hline Average & - & 1.19m / 2.9\({}^{\circ}\) & 0.39m / 1.2\({}^{\circ}\) & 0.51m / 1.4\({}^{\circ}\) & 0.38m / 1.0\({}^{\circ}\) & **0.37m / 1.0\({}^{\circ}\)** \\ \hline \hline \multicolumn{5}{|l|}{7 scenes} & \multicolumn{4}{c|}{} \\ \hline Chess & 0.10m / 3.5\({}^{\circ}\) & 0.05m / 1.9\({}^{\circ}\) & 0.03m / 1.3\({}^{\circ}\) & 0.06m / 1.9\({}^{\circ}\) & 0.04m / 1.3\({}^{\circ}\) & **0.01m / 0.4\({}^{\circ}\)** \\ Fire & 0.27m / 11.7\({}^{\circ}\) & 0.17m / 6.5\({}^{\circ}\) & 0.10m / 3.7\({}^{\circ}\) & 0.14m / 4.1\({}^{\circ}\) & 0.10m / 3.0\({}^{\circ}\) & **0.05m / 1.9\({}^{\circ}\)** \\ Heads & 0.17m / 13.1\({}^{\circ}\) & 0.06m / 3.6\({}^{\circ}\) & 0.07m / 5.8\({}^{\circ}\) & 0.05m / 3.5\({}^{\circ}\) & 0.04m / 2.4\({}^{\circ}\) & **0.03m / 2.3\({}^{\circ}\)** \\ Office & 0.16m / 6.0\({}^{\circ}\) & 0.08m / 2.5\({}^{\circ}\) & 0.07m / 1.9\({}^{\circ}\) & 0.14m / 4.1\({}^{\circ}\) & 0.10m / 3.0\({}^{\circ}\) & **0.05m / 1.6\({}^{\circ}\)** \\ Pumpkin & 0.19m / 3.9\({}^{\circ}\) & 0.10m / 2.8\({}^{\circ}\) & 0.08m / 2.2\({}^{\circ}\) & 0.10m / 2.6\({}^{\circ}\) & 0.09m / 
2.4\({}^{\circ}\) & **0.03m / 0.8\({}^{\circ}\)** \\ Kitchen & 0.22m / 5.1\({}^{\circ}\) & 0.22m / 5.5\({}^{\circ}\) & 0.09m / 2.2\({}^{\circ}\) & 0.18m / 4.8\({}^{\circ}\) & 0.16m / 4.4\({}^{\circ}\) & **0.02m / 0.8\({}^{\circ}\)** \\ Stairs & 0.32m / 10.6\({}^{\circ}\) & 0.16m / 3.3\({}^{\circ}\) & 0.14m / 3.6\({}^{\circ}\) & 1.41m / 53.0\({}^{\circ}\) & 1.40m / 34.7\({}^{\circ}\) & **0.12m / 1.9\({}^{\circ}\)** \\ \hline Average & 0.20m / 7.3\({}^{\circ}\) & 0.12m / 3.7\({}^{\circ}\) & 0.08m / 3.0\({}^{\circ}\) & 0.30m / 10.6\({}^{\circ}\) & 0.28m / 7.3\({}^{\circ}\) & **0.04m / 1.1\({}^{\circ}\)** \\ \hline \end{tabular} \end{table} Table 1: **6-DoF median localization errors of visual localization methods based on implicit representations.** DirectPoseNet did not report results for Cambridge Landmarks. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline cm / \({}^{\circ}\) & Prior & Iter 1 & Iter 2 & Iter 3 \\ \hline Retrieval & 0.22 / 12.1 & 0.02 / 0.7 & 0.01 / 0.5 & 0.01 / 0.4 \\ \hline Constant & 1.82 / 32.2 & 0.12 / 2.8 & 0.02 / 0.6 & 0.01 / 0.5 \\ \hline \end{tabular} \end{table} Table 2: **Impact of prior accuracy:** Median error w.r.t. prior strategy and iterations. Conditioning descriptors with viewing direction.We visualized modeled the descriptors learned by the neural renderer as independent of the direction from which the point is observed. We verify that this choice is relevant by comparing it to the view-dependent case. Modeling the descriptors as dependent on the image appearance is not feasible because this parameter is unknown during the localization step. The comparison is shown in Figure 9. Reconstruction losses.We evaluated the benefits of the \(\mathcal{L}_{SSIM}\) and \(\mathcal{L}_{TV}\) terms of the loss function on the localization accuracy on Figure 10. On the Heads scene, the error is 3cm/2.3\({}^{\circ}\) with the proposed loss, 4cm/2.1\({}^{\circ}\) without \(\mathcal{L}_{SSIM}\) and 6cm/4.0\({}^{\circ}\) without \(\mathcal{L}_{TV}\). These terms actually improve the localization accuracy because they help to recover the correct scene geometry. ## 5 Limitations and Future Work Scalability.Similar to other Neural Scene Representations, our Neural Field struggles to represent large scale maps, such as the one used in autonomous driving, with a single radiance field instance. The current best solution, proposed by Block-NeRF [43], is to split the environment into several smaller neural fields and enforce consistency at their boundaries. This solution is successful at a city-scale and could be implemented in our method for large scale localizatoin. Localization pipeline.The proposed localization algorithm could be improved in many ways. Dense features matching could be performed by learning-based approaches [46, 5] instead of simple heuristics. Resulting 2D-3D matculs could be improved by co-visibility filtering [38, 31]. Finally, the estimated camera pose could be optimized by direct features alignment, similar to GNN [47] and PixLoc [37]. The contribution of this paper lies in the learning of descriptors in a neural renderer, and this proposal can be used as a backbone for different and more advanced localization solutions. ## 6 Conclusion We propose CROSSFIRE; a new way to learn and represent visual localization maps based on neural radiance fields. The proposed formulation has the advantage of densely representing local features of a scene in a compact way, and to be more robust to lightning changes than photometric alignment. 
We demonstrate that the local features learned without supervision, which are specialized to the target area, perform better than related supervised techniques that use pre-trained features. The proposed implicit representation can serve as a backbone for more advanced feature matching pipelines and should be compatible with future improvements in the neural rendering field, which could enable scaling these models to larger scenes and yield better localization accuracy by further improving the quality of the learned scene geometry. We believe that replacing classical data structures with implicit scene representations is an exciting research direction for the whole area of 3D computer vision, as it enables storing dense information in a compact representation. Figure 8: **Qualitative comparison of descriptors between the proposed loss and a classical triplet loss. We visualize the PCA of descriptors from our loss (middle) and a triplet loss (right) for a given query image (left).** Figure 10: **Impact of additional reconstruction losses on localization accuracy. Translation and orientation error for several combinations of loss terms.** Figure 9: **Localization accuracy depending on descriptor head inputs. We compare the final accuracy on the “Chess” scene with and without the viewing direction as descriptor input in the neural renderer.**
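To give a concrete picture of the iterative localization process evaluated in this paper, the sketch below outlines one possible render-match-PnP refinement loop in Python. The `scene.render_feature_map` and `matcher` callables stand in for the neural renderer and the dense matching heuristic, and the fixed three-iteration budget mirrors the evaluation setting; all names are illustrative assumptions rather than the authors' released implementation.

```python
import numpy as np
import cv2

def backproject(pts2d, depth, K, cam_to_world):
    """Lift rendered 2D matches to world-space 3D points using the rendered depth map."""
    u, v = pts2d[:, 0], pts2d[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (cam_to_world @ pts_cam.T).T[:, :3]

def refine_pose(query_feat, prior_pose, K, scene, matcher, n_iters=3):
    """Iteratively refine a world-to-camera pose starting from a retrieval prior.

    query_feat : dense descriptor map of the query image
    prior_pose : 4x4 world-to-camera matrix (e.g. top-1 DenseVLAD reference pose)
    scene      : object with render_feature_map(pose) -> (descriptor map, depth map)
    matcher    : callable returning matched 2D pixel locations (query, rendered)
    """
    pose = prior_pose.copy()
    for _ in range(n_iters):
        rendered_feat, depth = scene.render_feature_map(pose)
        pts_query, pts_rendered = matcher(query_feat, rendered_feat)
        # 2D-3D correspondences from rendered depth, then robust PnP.
        pts3d = backproject(pts_rendered, depth, K, np.linalg.inv(pose))
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts_query.astype(np.float64),
            K, None, reprojectionError=3.0)
        if not ok:
            break  # keep the previous estimate if RANSAC-PnP fails
        R, _ = cv2.Rodrigues(rvec)
        pose = np.eye(4)
        pose[:3, :3], pose[:3, 3] = R, tvec.ravel()
    return pose
```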
2306.08655
Explainable Software Defect Prediction from Cross Company Project Metrics Using Machine Learning
Predicting the number of defects in a project is critical for project test managers to allocate budget, resources, and schedule for testing, support and maintenance efforts. Software Defect Prediction models predict the number of defects in given projects after training the model with historical defect related information. The majority of defect prediction studies focused on predicting defect-prone modules from methods, and class-level static information, whereas this study predicts defects from project-level information based on a cross-company project dataset. This study utilizes software sizing metrics, effort metrics, and defect density information, and focuses on developing defect prediction models that apply various machine learning algorithms. One notable issue in existing defect prediction studies is the lack of transparency in the developed models. Consequently, the explain-ability of the developed model has been demonstrated using the state-of-the-art post-hoc model-agnostic method called Shapley Additive exPlanations (SHAP). Finally, important features for predicting defects from cross-company project information were identified.
Susmita Haldar, Luiz Fernando Capretz
2023-06-14T17:46:08Z
http://arxiv.org/abs/2306.08655v1
# Explainable Software Defect Prediction from Cross Company Project Metrics Using Machine Learning ###### Abstract Predicting the number of defects in a project is critical for project test managers to allocate budget, resources, and schedule for testing, support and maintenance efforts. Software Defect Prediction models predict the number of defects in given projects after training the model with historical defect related information. The majority of defect prediction studies focused on predicting defect-prone modules from methods, and class-level static information, whereas this study predicts defects from project-level information based on a cross-company project dataset. This study utilizes software sizing metrics, effort metrics, and defect density information, and focuses on developing defect prediction models that apply various machine learning algorithms. One notable issue in existing defect prediction studies is the lack of transparency in the developed models. Consequently, the explain-ability of the developed model has been demonstrated using the state-of-the-art post-hoc model-agnostic method called Shapley Additive exPlanations (SHAP). Finally, important features for predicting defects from cross-company project information were identified. Software Defect Prediction, Defect Density, Machine Learning Explainability, SHapley Additive exPlanations ## I Introduction As the complexity of software increases, delivering quality and defect-free software seems challenging due to strict timeline, and limited budget. Managers strives to identifies the defects as early as possible in the software development life cycle because addressing defects in later stages can incur higher costs [1]. However, Identifying the potential number of defects in a project takes significant effort. This brings the need to have an automated process for predicting bugs. Defect prediction using machine learning is an emerging technology that leverages human experience by automating manual efforts to anticipate defects in software systems [2, 3, 4]. The majority of the SDP studies focused on identifying defect-prone vs. non-defect-prone modules. However, it is equally important to consider the number of defects as fixing a single defect may be less costly and require less resources compared to fixing a large quantity of defects. Obtaining real-world data for training machine learning models for defect prediction has limitations. Companies may be hesitant to share proprietary information regarding the actual number of defects found after project delivery. In addition, project managers may lack the technical skill-set to comprehend the predicted outcome. By understanding the reasoning behind a prediction, project managers could select and adjust the selected attributes for assessing the need to allocate more resources in these identified projects with high number of defects. Predicting defects from cross-company project information is important as conducting study on the same type of projects may not yield a reliable model due to differences in project types or data collection sources in real-world problems. Additionally, historical information may not be available for the same type of projects. Bai et al. [5] investigated the issues of transfer learning in cross-project defect prediction and proposed a three-stage weighted framework for multi-source transfer learning process-based SDP model. It is also important to understand which features to be extracted for building an effective SDP model. 
Recently, Balogun et al. [6] addressed the fact that the high dimensionality of software metric features can affect the performance of SDP models. They conducted feasibility studies on feature selection of reliable SDP model by applying hybrid feature selection algorithms. This study will contribute to software engineering research domain by answering to the following research questions: RQ1: Can we build a SDP model from generic cross-company project-level information without segregating the project level information based on project size or development types? RQ2: Are defect density and software size strong predictors for predicting the number of software defects in a project? RQ3: Can we identify at least three features from this cross-company project dataset that are important for predicting defects based on similar metrics? RQ4: Can we interpret the predicted number of defects from the developed SDP model? This paper is organized into several sections. The related work on defect prediction using machine learning is presented in section II. This is followed by the methodology used in this paper in section III. The SDP models developed using ML algorithms, and the results are presented in section IV. Section V summarizes the result analysis and discussion. Threats to the validity of our work are presented in Section VI. Finally, the conclusion and future work are described in section VII. ## II Background and related work Software defect prediction(SDP) has emerged as a popular research topic over the last several decades [3, 6, 7]. Researchers have utilized various classification techniques to build these models including Logistic Regression [8], Na've Bayes classifier [9], Support Vector Machine [8], Artificial Neural Networks [10], Decision Tree Classifiers [11], Random Forest Algorithms [12], kernel PCA [13], Deep Learning [14], combination of Kernel PCA and Deep Learning [15][16] and ensemble learning techniques [17] etc. Aleem et al. [3] explored different machine learning techniques for software bug detection and provided a comparative performance analysis of these algorithms. Several studies used discretizing continuous defect counts into defective and non-defective modules for SDP models [18, 19]. However, binning the continuous data as independent variable may lead to information loss that can affect the performance and interpretation of SDP models [20]. Rajbahadur et al. [20] recommended that future SDP studies should consider building regression-based classifiers. In this study, we have used regression-based machine learning techniques for predicting the total number of defects. Felix and Lee [21] proposed certain SDP models constructed using code design complexity, defect density, defect introduction time and defect velocity. Their results indicate that the number of defects shows a strong positive correlation with the average defect velocity, but a weak positive correlation with the average defect density and a negative correlation with the average defect introduction time. However, in this work, we can observe a significantly positive relationship with defect density, and number of defects. In recent years, the need for explainability in machine learning models has gained prominent importance. Gezici and Tarhan [22] utilized three existing model-agnostic-based techniques referred to as EL5, SHAP and LIME to develop an explainable defect prediction model based on gradient boost algorithm classifier. 
We will explain our SDP model with SHAP on various classifiers as the cost computation for this dataset is reasonable. In 2017, Almakadmeh et al. [23] analyzed the ISBSG MS-Excel based dataset on Six Sigma measurements and found that the ISBSG dataset has a high ratio of missing data within the data fields of the "Total Number of Defects" variable. They identified that this missing ratio represents a serious challenge when the ISBSG dataset is being used for software defect estimation. To overcome this challenge, along with other cleaning criteria, we have removed the records with missing values in the "Total Number of Defects variable". Fadi and Al-Manai [24] found a weak correlation between size and defects when conducting a study on these variables. However, our SDP model contradicts this study as functional size shows a significant correlation with the total number of defects in a project. Researchers can focus on collecting size-based metrics from various projects to assist project managers in determining the estimation of the number of defects for scheduling and allocating testing resources. He et al. [25] collected data from several open-source projects which provided them with information about faulty vs. non-faulty modules for cross-project defect prediction. Unlike their study, we are focusing on predicting the total number of defects instead of just identifying defective vs. non-defective modules. This approach will provide project managers with an approximate number of defects. Shao et al. [26] conducted research on cross-company project data for building SDP model for ensuring software security and reliability. Their study was facing limitation of conducting research with only part of NASA and PROMISE datasets, and they highlighted the need for collecting more datasets to verify the effectiveness. Shin et al. [27] showed the existing explain-ability studies using model-agnostic techniques exhibits inconsistencies in explaining the SDP models. Our contribution in this paper includes verifying if we can see alignment in the major contributing features. ## III Methodology ### _Dataset_ In this paper, the ISBSG Developments & Enhancements 2021 Release 2 [28] dataset was explored. This original data repository contains a total of 10600 records with 254 features encompassing a broad range of projects across various industry sectors and business areas. Also, this dataset contains projects from over thirty countries [29] worldwide. These features have different groupings based on application types, organization sectors, development types, development environments, scheduling, programming languages, documentation, tools, and methodologies used in the projects etc. as part of various project metrics, size metrics, effort metrics, defect density, quality metrics, effort and other relevant metrics [30, 31]. ### _Feature Extraction and pre-processing_ This dataset contains many missing values, and not all fields are required for our study. We have applied filtering to retain features that have an acceptable number of records without being highly correlated. A snapshot of the data filtering technique has been shown in Table I. This feature extraction process involved multiple steps. **Step 1** involved removing records with a blank value in total number of defects field as this feature serves our target variable, and after this step a total number of 2103 records remained. 
Next, in **step 2**, a field called "age" was derived by subtracting the project implementation date from the current date to assess if the maturity of the project can contribute to the development of the SDP model. In **step 3**, to select records with high-quality data for building a trustworthy SDP model, we removed records with poor ratings. According to ISBSG, a quality rating of C or D indicates that the integrity of the data could not be assessed or that the data has little credibility. After this cleaning step, a total of 1542 records remained. **Step 4** dealt with removing irrelevant information such as project ID, rating-related fields, etc. **Step 5** involved finding records with more than 10% missing values in the column values. Except for the "programming language" field, all other columns with missing values ranging from 1 to 10% were taken out. The missing values in the programming language field were filled with the value of "unknown". After this cleaning step, we were left with 12 columns. **Step 6** involved removing highly internally correlated features where the correlation exceeded 70%. Fig. 2 shows the correlation matrix among the remaining non-categorical predictors in this dataset. From this figure, we can observe that summarized work effort and adjusted functional points are highly correlated with functional size and normalized effort. Consequently, these two features were removed. The final set of features selected from this dataset is shown in Fig. 1. The resulting dataset consists of 1254 records with 10 independent variables, as described in Fig. 1, and one dependent or target variable, namely the "total defects delivered" field. Next, the remaining records were further analyzed. The distribution of programming languages revealed that, out of the top 5 prominent programming languages, JAVA covered 44% of the projects as shown in Fig. 3. Additionally, the majority of the projects (67.3% of the total) had a development type of enhancement, followed by new development and redevelopment, as shown in Fig. 4. As a preprocessing step, all categorical values were converted to numerical values using the label encoder from the Scikit-learn library of Python [31]. Next, the dataset was split into 70% for training and 30% for testing. We applied standardized scaling to the predictors. StandardScaler removes the mean and scales each feature to unit variance. This operation is performed feature-wise [32]. ### _Applied machine learning algorithms_ In 2020, the authors of [33] utilized ensemble tree-based machine learning algorithms for SDP and obtained acceptable results on classification problems. In this study, we have applied various tree-based regression techniques, as tree-based algorithms are popular for regression problems. Several of these algorithms have already been applied in classification [33], and a few have been used for regression problems in SDP models in the existing literature. The selected tree-based machine learning algorithms utilized for our evaluation are listed below: **Random Forest Regression:** Random Forest [34] is a combination of tree predictors where each tree depends on the values of a random vector sampled independently with the same distribution for all trees in the forest. **AdaBoost Regression:** AdaBoost Regression [35] is a meta-estimator that begins by fitting a regressor on the original dataset and then fits additional copies of the regressor on the same dataset.
**Gradient Boosting Regression:** GBRT [36] is a flexible non-parametric statistical learning technique for regression. **Extra Tree Regression:** Proposed by Geurts et al. [37] in 2006, the Extremely Randomized Trees algorithm is a tree-based method that implements a meta-estimator which fits a number of randomized decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. **XGBoost Regression:** XGBoost is an efficient implementation of gradient boosting [38]. **CatBoost Regressor:** CatBoost is an ML algorithm that uses gradient boosting on decision trees [33]. Finally, **SHAP** [39], a game-theory-based approach to explain the output of the SDP models, was applied. ### _Evaluation Criteria_ For the evaluation of the SDP models, we applied several commonly used evaluation metrics for regression models on both training and testing datasets. Fig. 2: Correlation Matrix for non-categorical features. Fig. 3: Distribution of programming languages. Fig. 4: Distribution of development types. Three of these metrics are Mean Absolute Error (MAE), Mean Squared Error (MSE) and Root Mean Squared Error (RMSE). These measures have been applied in various defect prediction studies [40, 41]. **MAE** is calculated as the sum of absolute errors divided by the sample size, representing the difference between predicted and actual values [41]. **MSE** represents the average of the squared difference between predicted and actual values. **RMSE** measures the standard deviation of the prediction errors, which is the square root of the MSE. To evaluate how well the developed SDP models explain the dataset, we also used the state-of-the-art evaluation metrics \(R^{2}\) and adjusted \(R^{2}\) [42]. \(R^{2}\) can be defined as the proportion of the total variation in the dependent variable that is explained by the independent variables. Adjusted \(R^{2}\) is a modification of \(R^{2}\) which adjusts for the number of explanatory terms. The difference between the R-squared and adjusted R-squared values is that R-squared assumes that all the independent variables considered affect the model, whereas adjusted R-squared considers only those independent variables that actually influence the performance of the SDP models. ## IV Results In this section, we present the results of the empirical evaluation conducted to address the research questions. This study was performed on a dataset containing 1254 projects, which included projects developed in different programming languages. The analysis was carried out using Anaconda Navigator and Python version 3.9.7. We used various Python libraries including the scikit-learn ensemble module and other relevant packages. We verified the correlation between defect density and the total number of defects, as shown in Fig. 5. This figure demonstrates a strong positive relationship between defect density and the total number of defects. The correlation coefficient has a statistical value of 0.7863112869053916 with a p-value of 4.223627046436646e-264. This indicates that defect density has a highly significant relationship with the number of defects. Next, the correlation between functional size and the total number of defects was verified as depicted in Fig. 6. These two variables also show a positive relationship. The Pearson correlation metric [43] confirms a significant relationship between the size of the project and the number of defects.
The Pearson correlation has a regression coefficient value of 0.29419230162036336 and a p-value of 1.861125251801106e-26. The results of the applied algorithms are shown in Fig. 7. The classifiers were evaluated on both training and testing data. For hyperparameter tuning, RandomizedSearchCV was utilized. Afterwards, 5-fold cross-validation was applied for each of these algorithms. As expected, the training score was higher than the testing score in all models. Although GradientBoostingRegressor and XGBRegressor performed the best on the training dataset with the highest R-squared and adjusted R-squared values, as well as the lowest MAE, MSE and RMSE, they did not do equally well on cross-validation and testing data. This suggests that these models might have been overfitted during training. Fig. 5: Correlation between defect density and number of defects. Fig. 6: Correlation between functional size and number of defects. The ExtraTreeRegressor classifier shows an \(R^{2}\) score of 93% during 5-fold cross-validation. The testing dataset also exhibits a relatively high \(R^{2}\) and adjusted \(R^{2}\) score of 89%, accompanied by the lowest testing MAE score of 3.7, MSE of 251 and RMSE of 15.8 among the applied algorithms. Since this classifier demonstrates the lowest error, as well as the highest \(R^{2}\) and adjusted \(R^{2}\) values during cross-validation and testing, the ExtraTreeRegressor model is considered the most efficient among the models utilized in this study. The next most efficient model is the CatBoostRegressor model, with test dataset \(R^{2}\) and adjusted \(R^{2}\) values of 86% and 85%, a cross-validation \(R^{2}\) score of 92%, and a comparatively low MAE value among the chosen models for this study. The RandomForest model also performed well on the cross-validation and testing datasets. However, the AdaBoostRegressor model performed relatively poorly compared to the other selected classifiers, with a cross-validation \(R^{2}\) score of approximately 79% and slightly higher MAE, MSE and RMSE values than the other classifiers. Based on this analysis, the best performing model was the ExtraTreeRegressor, followed by the CatBoostRegressor and RandomForestRegressor. As the next step, we recorded the feature importance for each of the applied algorithms as illustrated in Fig. 8. The top 5 important features are represented by the green color for each algorithm, while the 3 lowest-ranked features are shown in red. Defect density emerged as the strongest feature in all 6 classifiers, followed by functional size. For instance, in the ExtraTreeRegressor model, the defect density feature had a coefficient of 63% while the coefficient for the 1st language was only 0.0047. This indicates that, for this SDP model, the 1st language feature did not play a significant role in the prediction. This field scored the lowest among all models, except for AdaBoostRegressor, but it was not among the top 5 predictors for AdaBoostRegressor either. Functional size was the second strongest predictor among all these classifiers. It seems that the programming language of the software project did not have a significant impact on any of these predictions.
On the other hand, "normalized work effort" contributed to all these models, although this feature had less importance compared to defect density and functional size. To verify if these features can be explained using a model-agnostic approach, we applied SHAP on the top 3 models. The results are shown in Fig. 9, Fig. 10, and Fig. 11 for the ExtraTreeRegressor, CatBoostRegressor and RandomForestRegressor models, respectively. All these models agreed on the top three predictors in the same order, namely defect density, functional size, and normalized work effort. However, each model selected a different 4th predictor. ExtraTreeRegressor identified Relative Size as the next important predictor, CatBoostRegressor highlighted the counting approach, and RandomForestRegressor included Industry Sector. It appears that the programming language did not significantly contribute to the prediction of this SDP model. Fig. 11: RandomForestRegressor model explainability using SHAP. ## V Analysis and Discussion This study employed regression models including the Extremely Randomized Trees, CatBoost, RandomForest, XGBoost, AdaBoost, and GradientBoosting algorithms to predict the number of defects. The findings revealed that ExtraTree, CatBoost, and RandomForest demonstrated better performance in this regard. The reliability of these models was established as they exhibited comparatively lower MAE, MSE and RMSE values on the training and testing datasets, along with comparatively higher \(R^{2}\) and adjusted \(R^{2}\) scores. The application of SHAP provided reliable explanations of the features, and the top three models show consistency in their top three predictors. This contributes towards explainable SDP models. While SHAP has been used in recent SDP studies [44], to the best of our knowledge, direct application of SHAP in cross-company project datasets for model explanations has not been widely explored. ## VI Threats to Validity This research was conducted on the ISBSG dataset. However, many of the records had to be removed due to missing values. Although the analysis was performed on a significant number of records, it is worth noting that adding the missing records could potentially alter the findings if the information were readily available. ## VII Conclusion and Future Work This study utilized six supervised tree-based ML algorithms for developing SDP models using cross-company project information from the ISBSG dataset. By employing regression models, several findings were derived from the selected attributes based on software size, work effort, defect density, development type, organization type, primary programming language, etc. This study reaffirmed the promising value of regression ML in SDP predictions. Furthermore, the feature importance of the selected attributes was observed, and correlations between the predictors and the number of defects were identified. Finally, the SDP models were explainable. Future studies could adopt a more targeted approach by categorizing the dataset based on programming language, development type, and other factors to obtain more specific outcomes in addition to the generic model presented here. Additionally, the selected attributes can be applied to other open-source projects to increase the reliability of the explained predictors after being validated by explainable machine learning models. ## Acknowledgment We would like to thank ISBSG for providing us with the data subscription. Also, we would like to acknowledge the support of Mrs. Mary Pierce, Dean of Faculty of Business, Information Technology and Part-time Studies, Fanshawe College, and Dr. Dev Sainani, Associate Dean of School of Information Technology of Fanshawe College, London, Ontario, for supporting this research work.
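As a companion to the modeling pipeline described above, the sketch below shows how the preprocessing, the best-performing ExtraTrees regressor and the SHAP explanation step could be wired together with scikit-learn and the shap library. The CSV file name, the target column label and the hyperparameters are illustrative placeholders; the actual ISBSG field names and the RandomizedSearchCV tuning are omitted for brevity.

```python
import pandas as pd
import shap
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Illustrative: a cleaned ISBSG subset with the 10 selected predictors and the target.
df = pd.read_csv("isbsg_cleaned.csv")
target = "total_defects_delivered"

# Encode categorical predictors (e.g. development type, primary programming language).
for col in df.drop(columns=[target]).select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))

X, y = df.drop(columns=[target]), df[target]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)  # 70/30 split as in the paper

scaler = StandardScaler().fit(X_train)      # remove the mean, scale to unit variance
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

model = ExtraTreesRegressor(n_estimators=300, random_state=42)
model.fit(X_train_s, y_train)

pred = model.predict(X_test_s)
print("MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))

# Post-hoc, model-agnostic explanation of the fitted tree ensemble with SHAP.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test_s)
shap.summary_plot(shap_values, X_test_s, feature_names=X.columns)
```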
2310.04951
CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation
Recent code translation techniques exploit neural machine translation models to translate source code from one programming language to another to satisfy production compatibility or to improve efficiency of codebase maintenance. Most existing code translation datasets only focus on a single pair of popular programming languages. To advance research on code translation and meet diverse requirements of real-world applications, we construct CodeTransOcean, a large-scale comprehensive benchmark that supports the largest variety of programming languages for code translation. CodeTransOcean consists of three novel multilingual datasets, namely, MultilingualTrans supporting translations between multiple popular programming languages, NicheTrans for translating between niche programming languages and popular ones, and LLMTrans for evaluating executability of translated code by large language models (LLMs). CodeTransOcean also includes a novel cross-framework dataset, DLTrans, for translating deep learning code across different frameworks. We develop multilingual modeling approaches for code translation and demonstrate their great potential in improving the translation quality of both low-resource and high-resource language pairs and boosting the training efficiency. We also propose a novel evaluation metric Debugging Success Rate@K for program-level code translation. Last but not least, we evaluate LLM ChatGPT on our datasets and investigate its potential for fuzzy execution predictions. We build baselines for CodeTransOcean and analyze challenges of code translation for guiding future research. The CodeTransOcean datasets and code are publicly available at https://github.com/WeixiangYAN/CodeTransOcean.
Weixiang Yan, Yuchen Tian, Yunzhe Li, Qian Chen, Wen Wang
2023-10-08T00:16:18Z
http://arxiv.org/abs/2310.04951v2
# CodeTransOcean: A Comprehensive Multilingual Benchmark ###### Abstract Recent code translation techniques exploit neural machine translation models to translate source code from one programming language to another to satisfy production compatibility or to improve efficiency of code-base maintenance. Most existing code translation datasets only focus on a single pair of popular programming languages. To advance research on code translation and meet diverse requirements of real-world applications, we construct **CodeTransOcean**, a large-scale comprehensive benchmark that supports the largest variety of programming languages for code translation. CodeTransOcean consists of three novel multilingual datasets, namely, **MultilingualTrans** supporting translations between multiple popular programming languages, **NicheTrans** for translating between niche programming languages and popular ones, and **LLMTrans** for evaluating executability of translated code by large language models (LLMs). CodeTransOcean also includes a novel cross-framework dataset, **DLTrans**, for translating deep learning code across different frameworks. We develop multilingual modeling approaches for code translation and demonstrate their great potential in improving the translation quality of both low-resource and high-resource language pairs and boosting the training efficiency. We also propose a novel evaluation metric _Debugging Success Rate@K_ for program-level code translation. Last but not least, we evaluate LLM ChatGPT on our datasets and investigate its potential for fuzzy execution predictions. We build baselines for CodeTransOcean and analyze challenges of code translation for guiding future research. The CodeTransOcean datasets and code are publicly available at [https://github.com/WeixiangYAN/CodeTransOcean](https://github.com/WeixiangYAN/CodeTransOcean). ## 1 Introduction Early software systems are developed using programming languages such as Fortran and COBOL, which have a significantly smaller user base compared to modern mainstream programming languages (e.g., Python and Java). Hence maintaining and modernizing early software systems are expensive Opidi (2020). Moreover, the readability and compatibility of the mixed multitude of programming languages are challenging when migrating existing software systems to new technology ecosystems or integrating software systems using different programming languages. The code translation task aims to convert source code from one programming language to another and is of great value in industry. Code translation methods evolve from the inefficient, costly, and error-prone manual rewriting method to automatic methods. Automatic code translation methods can be categorized into _compilers and transpilers_, _rule-based methods_, and _neural network based methods_. Neural models Feng et al. (2020); Wang et al. (2021, 2023) have become dominant in code translation. Details of code translation methods are presented in Appendix A.1. The performance of neural models relies heavily on large-scale high-quality parallel data. However, existing code translation datasets are limited by **insufficient coverage of programming languages and mostly focusing on a single pair of popular programming languages**, **limited scale**, and **uneven data distribution**. The widely used CodeTrans Lu et al. (2021) is a small dataset containing only Java-C# parallel data for quite short code samples. Other datasets Ahmad et al. (2023); Roziere et al. (2020); Zhu et al. (2022); Nguyen et al. 
(2013); Chen et al. (2018) suffer from the same limitations. Consequently, existing code translation models Feng et al. (2020); Wang et al. (2021); Ahmad et al. (2021) are confined to a narrow range of one-to-one code translation scenarios. Moreover, deep learning has been broadly used and achieved unprecedented success. However, there are barriers between different deep learning frameworks during the actual production process. Existing code translation datasets also neglect important demands from real-world applications, including **modernizing early software systems developed in _niche_ programming languages and migrating code across different deep learning frameworks**. To address these limitations and advance neural code translation models, we construct a large-scale comprehensive multilingual code translation benchmark **CodeTransOcean**, summarized in Table 1. CodeTransOcean is an innovative benchmark that aims to provide a **unified platform** for evaluating various models _on a comprehensive set of code translation tasks_ that reflect real-world demands. Based on this goal, each dataset in CodeTransOcean is specifically designed to tackle a key challenge in the field of code translation. CodeTransOcean includes three _multilingual_ datasets, namely, the **MultilingualTrans** dataset (including eight popular programming languages), the **NicheTrans** dataset (translating between thirty-seven niche programming languages and the eight popular ones1), and a specialized dataset **LLMTrans** (including 350 data samples and their executed results) to evaluate executability of code translated by large language models (LLMs), and a _cross-framework_ dataset **DLTrans** facilitating our proposed task for translating code between deep learning frameworks to enhance code reusability. DLTrans includes 408 samples covering four mainstream deep learning frameworks. Footnote 1: We define popular and niche programming languages based on the TIOBE Programming Community Index, which is a metric of the popularity of programming languages. Multilingual modeling shows great potential in neural machine translation (Aharoni et al., 2019; Wang et al., 2020; Zhu et al., 2023), but it has not been systematically explored for code translation. We investigate multilingual modeling for code translation using our MultilingualTrans, NicheTrans, and DLTrans datasets. Experimental results demonstrate that multilingual modeling significantly improves translation quality for both _high-resource_ and _low-resource_ language pairs and improves the model training efficiency. Recent research indicates that the proficiency of the LLM ChatGPT in natural language translation is on par with commercial-grade translation systems (Jiao et al., 2023). **To the best of our knowledge, our work is the first to systematically investigate the potential of ChatGPT in code translation**. We develop a fully automated translation-execution-evaluation pipeline **AutoTransExecuter** to support this study. Note that _match-based metrics_ and _execution-based metrics_ have been used for evaluating code translation methods, with details in Appendix A.1. In order to accurately evaluate the usability of translated code from ChatGPT, we propose a novel execution-based evaluation metric **Debugging Success Rate @K (DSR@K)**, which is the percentage of samples with translation results that successfully execute and produce the expected functionality after \(K\) debugging rounds. 
On our LLMTrans dataset, the baseline ChatGPT setting achieves 48.57% DSR@0. We find that self-debugging and one-shot improve the performance while chain-of-thought strategies degrade the translation accuracy. Since our AutoTransEx \begin{table} \begin{tabular}{c c c c c c} \hline \hline **Category** & **Language/Framework** & **Dataset Name** & **Train/Dev/Test \#Samples** & **Avg. \#Tokens/Sample** & **Avg. Length** \\ \hline \multirow{8}{*}{Multilingual} & Python, C, C++, & \multirow{4}{*}{MultilingualTrans} & \multirow{2}{*}{19,115 / 3,759 / 7,545} & \multirow{2}{*}{398 / 421 / 491} & \multirow{2}{*}{1099 / 1135 / 1358} \\ & Visual Basic, Go, & & & & \\ \cline{1-1} & PHP, Java, C\# & & & & \\ \cline{1-1} \cline{2-6} & Swift, R, Rust, & \multirow{2}{*}{NicheTrans} & \multirow{2}{*}{165,457 / 23,509 / 47,502} & \multirow{2}{*}{292 / 375 / 505} & \multirow{2}{*}{785 / 995 / 1372} \\ & Fortran, Ada, Perl, & & & & & \\ \cline{1-1} & COBOL, Lua,... & & & & \\ \cline{1-1} \cline{2-6} & Python, C, C++, & \multirow{2}{*}{LLMTrans} & \multirow{2}{*}{\(-\) / 350} & \multirow{2}{*}{\(-\) / 270} & \multirow{2}{*}{\(-\) / 745} \\ & Visual Basic, Go, & & & & \\ \cline{1-1} & PHP, Java, C\# & & & & \\ \hline \multirow{2}{*}{Cross-Framework} & PyTorch, TensorFlow, & \multirow{2}{*}{DLTrans} & \multirow{2}{*}{282 / 36 / 90} & \multirow{2}{*}{625 / 1102 / 875} & \multirow{2}{*}{1318 / 2441 / 1841} \\ \cline{1-1} & MXNet, Paddle & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of our **CodeTransOcean**. We report #Samples, Avg. #Tokens/Sample and Avg. Length for Train/Dev/Test sets of each dataset. Note that LLMTrans is only for testing. #Samples are on the **program-level**. #Tokens are based on RoBERTa tokenizer (Liu et al., 2019). Length is the number of characters. ecuter still cannot cover arbitrary programming languages, we also propose a novel metric _fuzzy execution_, attempting to address the limitations of existing evaluation metrics for code translation. Our preliminary study using ChatGPT shows that ChatGPT is still inadequate to predict fuzzy execution for any arbitrary programming language, which demands future research. Our contributions can be summarized as follows: * **A large-scale multilingual code translation benchmark**: CodeTransOcean covers the largest number of popular and niche programming languages so far with the largest scale. It also includes an unprecedented dataset for translating code across different deep learning frameworks and a dataset and an automated pipeline for evaluating LLMs on code translation. We establish baselines for all datasets in CodeTransOcean. * **Multilingual modeling for code translation**: We are the first to systematically evaluate multilingual modeling on code translation for both high-resource and low-resource language pairs. Experimental results demonstrate that multilingual modeling significantly improves translation quality for both _high-resource_ and _low-resource_ language pairs and improves training efficiency. * **ChatGPT on code translation**: We conduct the first comprehensive study of the potential of ChatGPT on code translation, investigating efficacy of prompting strategies, hyperparameters, self-debugging, One-shot, and Chain-of-Thought. * **New evaluation metrics**: We propose \(DSR@K\) to evaluate translation and debugging capabilities of LLMs. We also propose a _fuzzy execution_ metric based on LLMs and conduct a preliminary study using ChatGPT on this metric. 
## 2 Related Work Code Translation Datasets The success of neural models for code translation relies heavily on large-scale high-quality parallel data. However, existing code translation datasets are plagued by issues such as _insufficient coverage of programming languages_, _limited scale_, and _imbalanced data distribution_. The widely used code translation dataset CodeTrans (Lu et al., 2021) in the CodeXGLUE benchmark consists of Java-C# function pairs. The small parallel corpus AVATAR (Ahmad et al., 2023) is constructed for Java-Python code translation. Nguyen et al. (2013) construct a Java-C# dataset to explore statistical machine translation on code translation tasks2. Chen et al. (2018) explore this dataset from Nguyen et al. and also construct a CoffeeScript-JavaScript parallel dataset for investigating tree-to-tree neural models for code translation. Roziere et al. (2020) create a dataset containing 852 programs to evaluate unsupervised methods. Recently, Zhu et al. (2022) construct a new translation dataset CoST from the GeeksForGeeks website3. Subsequently, they release the translation dataset XLCoST (Zhu et al., 2022), which expands the CoST dataset by 7.3 times. However, the limited language coverage of these datasets and their imbalanced data distribution hinder their practical applications. Roziere et al. (2022) construct the TransCoder-ST dataset to perform unsupervised code translation using automated unit tests. Details of these datasets are summarized in Table 2. Rithy et al. (2022) proposes a code translation dataset XTest containing nine programming languages with unit tests, but it is not open-sourced4. Although CodeNet (Puri et al., 2021) comprises many problem statements and provides corresponding solutions, experts have proven that about half of the CodeNet dataset has incorrect solutions (Zhu et al., 2022), making it unsuitable for code translation tasks. With the limitations of existing code translation datasets, neural models trained on them may encounter overfitting, underfitting, and poor generalizability. Clearly, these issues impede the development of neural models for code translation. Therefore, constructing datasets that effectively address these problems is critical to enhance performance of code translation algorithms. Footnote 2: It was not possible to count specific information about this dataset because it was not released to the public and we were unable to obtain response from the authors. Footnote 3: In Table 2, we report the **program-level** counts for the CoST dataset to facilitate a fair comparison with our own program-level datasets. Code Translation Methods and Evaluation Metrics Details of code translation methods and evaluation metrics are presented in Appendix A.1. ## 3 The CodeTransOcean Benchmark In this section, we provide detailed descriptions and analyses of our CodeTransOcean benchmark, including the code translation tasks, their associated datasets, and dataset statistics. Details of data collection methods and licensing information as well as quality control and quality assessment are presented in Appendix A.2. Note that the vast majority of the samples in CodeTransOcean provides explicit input and output, which is equivalent to unit tests. Overall, CodeTransOcean consists of 270,507 samples (over 200K unit tests), covering 45 programming languages for multilingual code translation and 4 deep learning frameworks for cross-framework code translation5. 
Note that all samples in all CodeTransOcean datasets are constructed at the **program-level**. We ensure a balanced distribution of each language/framework when constructing the datasets (Appendix A.2). There is no overlap between CodeTransOcean datasets and existing code translation datasets. Footnote 5: Code Translation also extends to conversions between different versions of the same language, e.g., Python 2 to Python 3. However, according to our survey, these translation tasks are quite straightforward. Naive Copy methods, specific translation tools, and tutorials (e.g., Python 2 to 3 Conversion Guide) already achieve high translation accuracy. As a result, we no longer include these types of tasks in our benchmark. ### Multilingual Code Translation With the increasing need to unify the language variety when implementing system integration or extensions with multilingual programming environments, we construct the MultilingualTrans dataset for multiple popular programming languages6. Among programming languages in the rankings, we select the Top-10 languages as popular ones except JavaScript and SQL7 and construct the MultilingualTrans dataset based on the 8 programming languages. We treat the other languages in the rankings as niche languages and construct the NicheTrans dataset for translating between niche languages and popular languages. Additionally, in order to quantitatively evaluate the execution capabilities of the code generated by LLMs (e.g., ChatGPT, PaLM2 (Anil et al., 2023)), we construct LLMTrans, which includes the execution results for a subset of MultilingualTrans and facilitates evaluating LLMs for multilingual code translation. Footnote 6: We categorize languages as popular or niche based on the THOBE Index Programming Language Rankings released in April 2023 [https://www.tiobe.com/tiobe-index/](https://www.tiobe.com/tiobe-index/). Footnote 7: It is important to note that JavaScript and SQL, both within the top 10, are mainly used for front-end programming and database management respectively, signifying considerable differences in their usage scenarios compared to the other 8 languages. MultilingualTrans DatasetThis dataset contains 30,419 program samples covering eight popular programming languages, namely, C, C++, C#, Java, Python, Go, PHP, and Visual Basic. Table 11 shows the statistics of each language pair. Note that XLCoST (Zhu et al., 2022) is the only existing multilingual code translation dataset. Compared to XLCoST, MultilingualTrans is advantageous in more balanced data distribution across various programming languages, practicality of language pairs, and data quality. For example, the real-world requirement for translating Java into JavaScript as in XLCoST is quite limited. As to data quality, our MultilingualTrans originates from a programming chrestomathy website, with all data already reviewed and verified by the website. NicheTrans DatasetThe NicheTrans dataset contains 236,468 program samples, covering code translation pairs from thirty-seven niche programming languages, including Ada, COBOL, Pascal, Perl, Erlang, Fortran, Scala, Julia and others, to the eight popular ones. Table 12 shows statistics of each niche language. Although many studies have highlighted the practical necessity of code translation for modernizing niche programming languages (Chen et al., 2018; Zhu et al., 2022b; Roziere et al., \begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset Source** & **Programming Languages** & **\#Samples** & **Avg. 
\#Tokens/Sample** & **Avg. Length** \\ \hline CodeTrans (Lu et al., 2021) & Java, C\# & 11,800 & 59 / 63 / 58 & 205 / 218 / 202 \\ Avatar (Ahmad et al., 2023) & Java, Python & 9,517 & 239 / 235 / 234 & 691 / 687 / 688 \\ Nguyen et al.(Nguyen et al., 2013) & Java, C\# & 16,966 & – & – \\ Lachaux et al.(Rozière et al., 2020) & C++, Java, Python & 852 & - / 119 / 120 & - / 313 / 311 \\ CoST (Zhu et al., 2022b) & C++, Java, Python, C\#, & 16,738 & 272 / 180 / 199 & 770 / 458 / 511 \\ TransCoder-ST (Rozière et al., 2022) & Java, C++, Python & 437,030 & – & – \\ XLCoST (Zhu et al., 2022a) & C++, Java, Python, C\#, & 122,151 & 234 / 232 / 222 & 644 / 634 / 606 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of existing code translation datasets. For #Samples, we report **program-level** counts for Avatar, CoST, and XLCoST. Given that the original samples from other datasets are not organized at the program-level, we report counts at the snippet-level for these datasets. Avg. #Tokens/Sample and Avg. Length are counted in the same way as Table 1. 2020), our NicheTrans dataset is the first dataset for code translation between these niche languages and popular ones. We believe this dataset will not only facilitate modernization of outdated programming languages more effectively, but also augment and evaluate generalizability of neural models. LLMTrans DatasetThe LLMTrans dataset aims to provide a benchmark for evaluating the performance of LLMs on code translation. The dataset translates seven popular programming languages to Python, totaling 350 program samples. We compile and test these samples and record the execution results. Based on this dataset, we design and implement an automated pipeline, **AutoTransExecuter8**, automatically using LLMs to conduct code translation, execution, debugging, and calculating the success rate. This dataset and the automated pipeline ease investigation of the actual debugging success rate of LLMs on code translation and effectively measure the practical usability of LLMs. Details of the LLMTrans dataset are in Table 1. Footnote 8: AutoTransExecuter only supports translation from any source language to Python. We discuss it in Limitations. ### Cross-framework Code Translation Cross-Deep-Learning-Framework Translation TaskThe widespread applications of deep learning (DL) has spawned emergence of various DL frameworks, such as PyTorch, TensorFlow, MXNet, and Paddle. However, there are significant differences in syntax and dependency libraries between different frameworks, severely impeding reusability of projects9. Moreover, studies illustrate significant disparities in energy consumption and economic costs during training and inference between various frameworks (Georgiou et al., 2022). Selecting an appropriate DL framework for green AI has become paramount in an era of large models (Ananthaswamy, 2023). Code reusability and energy-economic efficiency in DL have emerged as critical considerations for both research and practical engineering implementation. Converting code between different DL frameworks is challenging, mainly due to differences between frameworks, code complexity, structural inconsistencies, and cross-platform compatibility (more details are in Appendix A.3). Existing cross-DL-framework adaptive technologies such as the ONNX10 model conversion protocol require both parties to import and export based on agreed data formats or to convert only the final model through the computation graphs. These technologies have obvious limitations. 
In contrast, we propose a **Cross-DL-framework Translation** task for code migration between different DL frameworks through code translation (Appendix A.4). Compared to existing cross-framework adaptive technologies, Cross-DL-framework Translation achieves re-implementation under multiple DL frameworks through an automated process, which not only generates highly readable code and enables secondary development, but also provides developers with flexibility on combining advantages of multiple frameworks. Footnote 10: [https://onnx.ai/](https://onnx.ai/) DLTrans DatasetWe construct the **DLTrans dataset** for Cross-DL-framework Translation, including four deep learning frameworks and spanning twelve directions. To the best of our knowledge, our work is the first to define the cross-DL-framework translation task and construct a corresponding dataset. We create two subsets of different granularities based on the collected code, namely, _coarse-grained_ at the program level and _fine-grained_ at the function or class level. Each code pair comprises code that shares the same functionality but is written in different popular DL frameworks, including PyTorch, TensorFlow, MXNet, and Paddle. The coarse-grained and fine-grained datasets have 408 and 3,270 samples, respectively. In this work, we only experiment on the coarse-grained subset. ## 4 Experiments We present experiments of multilingual training for code translation (Section 4.1). We then introduce a novel evaluation metric **Debugging Success Rate@K** for **program-level** code translation (Section 4.2) and the first comprehensive exploration of ChatGPT for code translation (Section 4.3). ### Multilingual Modeling Multilingual modeling has been pivotal in broadening the applicability of neural machine translation (Aharoni et al., 2019; Wang et al., 2020; Zhu et al., 2023; Johnson et al., 2017). This is primarily evidenced in enhancing the performance of low-resource languages and cross-language transfer learning (Mohammadhahi et al., 2022; Zoph et al., 2016; Nguyen and Chiang, 2017; Johnson et al., 2017). CodeTransOcean covers nearly fifty programming languages and deep learning frameworks. We use its datasets to explore multilingual modeling on code translation tasks. Experimental SetupsIn this work, we use pre-trained CodeT5+ (Wang et al., 2023)11 as the backbone based on its superior performance on code understanding and generation evaluations reported in (Wang et al., 2023). We use the MultilingualTrans dataset to investigate four multilingual modeling strategies based on data sharing in the source or target language or both, namely, _One-to-One_, _One-to-Many_, _Many-to-One_, and _Many-to-Many_, with One-to-One as the baseline. Details of the four strategies are in Appendix A.5. To understand the strengths and weaknesses of the four strategies, we compare their average performance on _all language pairs_ and focus on _low-resource_ and _high-resource pairs_. Since the CodeBLEU metric (Ren et al., 2020) does not cover all eight languages in MultilingualTrans, we use BLEU to measure translation accuracy for the four strategies. Then, we establish baselines for the DLTrans and NicheTrans datasets. Footnote 11: We will conduct evaluations of a broader selection of models on our datasets in future work, including LLaMA (Touvron et al., 2023), WizardCoder (Luo et al., 2023), etc. 
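The exact input formatting for the four strategies is deferred to the appendix of the paper; the snippet below is one plausible realization (an assumption, not the authors' verified recipe) in which each training example for a sequence-to-sequence model such as CodeT5+ is built by prefixing the source program with an instruction naming the source and target languages.

```python
def make_examples(pairs, strategy, fixed_src=None, fixed_tgt=None):
    """Build seq2seq training examples for one of the four strategies.

    pairs: list of dicts with keys src_lang, tgt_lang, src_code, tgt_code.
    strategy: "one-to-one", "one-to-many", "many-to-one" or "many-to-many".
    """
    examples = []
    for p in pairs:
        if strategy == "one-to-one" and (p["src_lang"] != fixed_src or p["tgt_lang"] != fixed_tgt):
            continue
        if strategy == "one-to-many" and p["src_lang"] != fixed_src:
            continue
        if strategy == "many-to-one" and p["tgt_lang"] != fixed_tgt:
            continue
        # "many-to-many" keeps every translation direction.
        prompt = f"translate {p['src_lang']} to {p['tgt_lang']}: {p['src_code']}"
        examples.append({"input": prompt, "target": p["tgt_code"]})
    return examples

# e.g. a single Many-to-Many training set covering all eight popular languages:
# many_to_many = make_examples(multilingual_trans_train, "many-to-many")
```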
We rank the resource richness of the eight programming languages in MultilingualTrans in descending order based on their amounts in the CodeT5+ pre-training data, as Java, PHP, C, C#, Python, C++, and Go (Visual Basic is not covered by the CodeT5+ pre-training data). Based on this ranking, we consider Visual Basic, C++, and Go as low-resource languages and Java, PHP and C as high-resource languages. Results and AnalysisDetailed experimental results are shown in Table 14 in Appendix. For **All** language pairs, the performance of the four strategies is ranked as **One-to-Many > Many-to-Many > Many-to-One > One-to-One**. (1) Under One-to-Many strategy, the model encoder can provide more comprehensive information for source language translation due to its ability to absorb more source language features, thereby improving generalizability of the model. (2) Many-to-Many can be considered as expanding the One-to-Many strategy by employing a greater volume of non-source language data for training. Since the encoder must be attuned to the features of various languages simultaneously under Many-to-Many, parameter sharing may potentially undermine the performance. (3) Many-to-One helps the model to learn from a broader range of data than the baseline. Specific patterns or expressions in diverse source languages assist the model in more precisely comprehending how to translate into the target language. The shared semantic representations across different source languages allow the model to implement effective transfer learning strategies. Furthermore, increase in training samples enables the model to optimize the loss function more stably. These results are consistent with previous findings on multilingual modeling for natural language translation (Aharoni et al., 2019): Many-to-Many models, trained across multiple target languages instead of just one target language, can function effectively as a regularization strategy for Many-to-One, thereby reducing the possibility of over-matching. For _High-resource_ and _Low-resource_ languages, as shown in Table 3, the ranking of the four strategies is the same as for _All_, but there is notable difference in their adaptability across languages of varying resource scales. High-resource languages can take advantage more effectively from the shared information across multiple source languages; whereas, low-resource languages are relatively less equipped to handle the additional uncertainty and noise introduced by shared parameters, and thus often have to rely on a larger volume of source language data to optimize their benefits. Results from the Many-to-Many strategy on DLTrans and NicheTrans datasets are shown in Tables 4 and 5. The experimental results suggest that significant improvements in translation accuracy can be achieved by swapping the source and target languages in the training set to facilitate data augmentation and training a bidirectional model. 
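The Two-way setting reported in Table 5 amounts to a simple direction-swapping augmentation of the parallel data, sketched below under the same illustrative sample layout assumed earlier.

```python
def two_way_augment(pairs):
    """Duplicate every parallel sample with source and target swapped,
    so one model learns both the niche->popular and popular->niche directions."""
    augmented = list(pairs)
    for s in pairs:
        augmented.append({
            "src_lang": s["tgt_lang"], "tgt_lang": s["src_lang"],
            "src_code": s["tgt_code"], "tgt_code": s["src_code"],
        })
    return augmented
```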
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Average** & **One-to-One (baseline)** & **Many-to-One** & **Many-to-Many** & **One-to-Many** \\ \hline High-resource & 4.68 & 5.56 (\(\uparrow\) 0.88) & 5.94 (\(\uparrow\) 1.26) & 6.18 (\(\uparrow\) 1.50) \\ Low-resource & 4.83 & 4.85 (\(\uparrow\) 0.02) & 4.95 (\(\uparrow\) 0.12) & 5.84 (\(\uparrow\) 1.01) \\ All & 5.19 & 5.31 (\(\uparrow\) 0.12) & 5.81 (\(\uparrow\) 0.62) & 6.42 (\(\uparrow\) 1.23) \\ \hline \hline \end{tabular} \end{table} Table 3: Average BLEU scores of the four multilingual modeling strategies, **One-to-One**, **Many-to-One**, **Many-to-Many**, and **One-to-Many**, for All language pairs, High-resource language pairs, and Low-resource language pairs. Notably, prior studies on multilingual neural machine translation often overlook the comparison between One-to-Many and other strategies. Nevertheless, One-to-Many demonstrates superiority over the One-to-One baseline across all our experiments. Overall, our results strongly recommend a targeted multilingual modeling strategy for code translation, as it not only can translate multiple language pairs with a single model, but also achieves better and more stable accuracy than baselines. ### Debugging Success Rate@K For evaluations, we adopt existing code translation evaluation metrics in our experiments, including **Exact Match (EM)**, **BLEU**, and **CodeBLEU** (details are in Appendix A.1.2). However, all these metrics are based on surface-form matching (or with some adaptations as for CodeBLEU) and are not suitable for our **program-level** translation tasks since they cannot reliably evaluate functional correctness of translated code. Moreover, in real-world software development scenarios, developers typically ensure the functionality of code by testing and debugging upon completion, rather than writing and testing multiple versions of the code to achieve the expected functionality as measured by the existing pass@k (Kulal et al., 2019) metric. Meanwhile, recent research shows that LLMs such as ChatGPT demonstrate preliminary code debugging capabilities (Chen et al., 2023, 2023). Hence, we propose a novel and robust evaluation metric for LLM on code translation, **Debugging Success Rate@K (DSR@K)**, by measuring whether the translated code can be compiled and executed with the same behavior as the input source code, with K rounds of debugging. **To the best of our knowledge, _DSR@K_ is the first metric designed to accurately reflect real-world software development scenarios. _DSR@K_ is the percentage of the samples that successfully execute and produce the expected results among all samples. Each sample is given \(K\) generation and debugging attempts by an LLM. If the generated code successfully executes and produces the expected results with these \(K\) rounds, the sample is marked as successful. _DSR@K_ is computed as \(\frac{1}{N}\sum_{i=1}^{N}S(i,K)\), where \(N\) denotes the total number of samples. If the \(i^{th}\) code sample succeeds within \(K\) attempts, then \(S(i,K)\) = 1; otherwise, \(S(i,K)\) = 0. Note that DSR@0 can be used for program-level code translation evaluation for any models. In this work, we employ DSR@K to evaluate the ability of LLMs such as ChatGPT for debugging code and translating code with debugging results. ### ChatGPT for Code Translation The recent LLM ChatGPT demonstrates competitive performance on language generation tasks such as summarization and machine translation (Yang et al., 2023; Peng et al., 2023; Gao et al., 2023). 
However, ChatGPT for code translation has not been systematically explored. We study the effectiveness and potential of ChatGPT on code translation and investigate strategies to improve its performance. We use **DSR@K** as the principal evaluation metric since we focus on the practical usability of ChatGPT. We use the ChatGPT API and gpt-3.5-turbo as the default model and evaluate on the \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Metric**} & \multicolumn{2}{c}{**PyTorch**} & \multicolumn{2}{c}{**TensorFlow**} & \multicolumn{2}{c}{**MNNat**} & \multicolumn{2}{c}{**Paddle**} \\ \cline{3-13} & & EM & BLEU & CodeBLEU & EM & BLEU & CodeBLEU & EM & BLEU & ELU & CodeBLEU & EM & BLEU & CodeBLEU \\ \hline \multirow{4}{*}{Nane} & PyTorch & \multirow{2}{*}{27.27} & \multirow{2}{*}{66.32} & 27.27 & 66.25 & 69.46 & 28.18 & 72.77 & 76.63 & 30.91 & 80.35 & 83.13 \\ & TensorFlow & 27.27 & 66.32 & 68.92 & – & – & – & 20.09 & 63.79 & 67.94 & 27.27 & 63.04 & 65.81 \\ & MaxNet & 28.18 & 72.86 & 74.15 & 20.99 & 63.34 & 66.06 & – & – & – & 28.18 & 69.49 & 71.09 \\ & Paddle & 30.91 & 80.52 & 84.83 & 27.27 & 62.94 & 67.78 & 28.18 & 69.43 & 75.09 & – & – & – \\ \hline \multirow{4}{*}{CadetPS+} & PyTorch & \multirow{4}{*}{34.85\(\pm\)1.38} & \multirow{4}{*}{71.97\(\pm\)0.45} & \multirow{4}{*}{71.06\(\pm\)0.73} & \multirow{4}{*}{75.04\(\pm\)0.75} & \multirow{4}{*}{72.73\(\pm\)2.41} & \multirow{4}{*}{81.76\(\pm\)0.48} & \multirow{4}{*}{52.52\(\pm\)0.56} & \multirow{4}{*}{43.64\(\pm\)1.58} & \multirow{4}{*}{58.76\(\pm\)0.60} & \multirow{4}{*}{58.76\(\pm\)0.74} \\ & TensorFlow & & & & & & & & & & & \\ & MCNet & 32.12\(\pm\)2.29 & 77.79\(\pm\)0.13 & 76.43\(\pm\)1.34 & 31.82\(\pm\)1.58 & 67.22\(\pm\)0.39 & 67.68\(\pm\)0.27 & & 36.99\(\pm\)0.91 & 32.46\(\pm\)0.46 & 73.22 \\ & Paddle & 43.03\(\pm\)4.10 & 86.25\(\pm\)0.86 & 86.90\(\pm\)0.88 & 29.39\(\pm\)2.93 & 69.43\(\pm\)0.57 & 69.57\(\pm\)0.51 & 35.75\(\pm\)0.43 & 73.68\(\pm\)0.62 & 79.46\(\pm\)0.38 & – & – & – \\ \hline \hline \end{tabular} \end{table} Table 4: Results on DLTrans of Naive and CodeT5+_220M with **Many-to-Many** strategy. We run each experiment with 3 random seeds and report the mean and standard deviation of EM, BLEU, and CodeBLEU scores. \begin{table} \begin{tabular}{c c c c} \hline \hline **BLEU** & **Naive** & **Two-way** & **One-way** \\ \hline Many-to-C & 2.36 & 4.60 & 4.86 \\ Many-to-C\# & 2.53 & 4.48 & 3.82 \\ Many-to-C++ & 1.99 & 4.78 & 3.32 \\ Many-to-Go & 3.11 & 5.24 & 3.19 \\ Many-to-Java & 3.18 & 5.23 & 5.34 \\ Many-to-PHP & 4.37 & 2.46 & 1.98 \\ Many-to-Python & 2.87 & 2.38 & 1.67 \\ Many-to-VB & 1.69 & 2.17 & 1.97 \\ Average & 2.76 & **3.92** & 3.27 \\ \hline \hline \end{tabular} \end{table} Table 5: BLEU scores on NicheTrans of Naive and CodeT5+_220M with **Many-to-Many** strategy. **One-way** denotes training models only from niche to popular, while **Two-way** denotes training in both directions. **LLMTrans** dataset for all experiments. We investigate the efficacy of prompts and hyperparameters and context in zero-shot setting, then compare one-shot versus zero-shot and study Chain-of-Thought. Effect of Prompts and HyperparametersPrior works show that prompts can influence the performance of ChatGPT (Zhong et al., 2023; Peng et al., 2023; Jiao et al., 2023). We set an initial prompt "Translate [SL] to [TL] : [SC]." as the baseline, where [SL] and [TL] denote the source language and the target language respectively and [SC] denotes the source code. 
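Putting the baseline prompt together with the DSR@K metric from Section 4.2, the evaluation loop that AutoTransExecuter automates can be sketched as follows. Here `translate` and `debug` stand in for ChatGPT API calls, `run_python` is a simplistic stand-in for the execution step, and the comparison against the source program's expected output is omitted for brevity; this is an illustrative sketch, not the actual implementation.

```python
import subprocess
import tempfile
from typing import Callable, Dict, List

def run_python(code: str, timeout: int = 10):
    """Write the candidate translation to a file and try to execute it."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True,
                              text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False, "timeout"
    return proc.returncode == 0, proc.stdout if proc.returncode == 0 else proc.stderr

def dsr_at_k(samples: List[Dict[str, str]],
             translate: Callable[[str], str],
             debug: Callable[[str, str], str],
             k: int) -> float:
    """DSR@K: fraction of samples that execute successfully within K debug rounds.

    The real pipeline also checks that the execution result matches the source
    program's behavior; that comparison is left out here.
    """
    successes = 0
    for s in samples:
        code = translate(f"Translate {s['src_lang']} to Python : {s['src_code']}")
        for _ in range(k + 1):              # initial attempt plus K debug rounds
            ok, feedback = run_python(code)
            if ok:
                successes += 1
                break
            code = debug(code, feedback)    # feed the compiler/runtime error back
    return successes / len(samples)
```

With `k = 0` this reduces to DSR@0, the number reported for models evaluated without any debugging rounds.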
We also add "Do not return anything other than the translated code." for each prompting strategy to require ChatGPT to return only code in order to ease code execution. We design three prompt variants. Details of the experimental settings and prompt variants are in Appendix A.6. We also investigate the effect of hyperparameters on code translation performance. As shown in Table 6, implementing role assignments, clarifying usage, and polite inquiry in prompts all degrade the performance compared to the baseline prompt. These results show that the baseline with the most straightforward prompt produces the best performance, possibly because it provides clear, short, and unambiguous instructions for the task to the model. More intricate prompting strategies may introduce noise and confuse ChatGPT. The performance of polite inquiry prompt is comparable to but still worse than the baseline performance. We speculate that the improvement from polite inquiries in prior studies (Akin, 2023) may stem from their explicit and comprehensive formulations which make it easier for the model to understand the task requirements. We also observe in Table 6 that same as prior findings, BLEU and CodeBLEU have no obvious positive correlations with the debugging success rate (DSR@0). Since the reference target code exhibits the same functionality as the source language code but their execution results could differ slightly, EM also does not correlate with DSR@0. Therefore, in subsequent experiments, we only report DSR@0. We also evaluate the CodeT5+_220M model on LLMTrans with the Many-to-Many strategy and find that DSR@0 is 0, suggesting that CodeT5+_220M Zero-shot is unable to generate executable translation results. a successful execution. Otherwise, feedback from the compiler will be also fed to ChatGPT for the next round of translation, and this process is repeated until reaching a pre-defined number \(K\) of debugging rounds. The whole process is shown in Table 17 in Appendix. As shown in Table 7, DSR improves significantly with multiple rounds of self-debugging. The first self-debugging improves DSR by **3%** absolutely. Each subsequent round of self-debugging brings further gain but DSR begins to plateau after the second debugging round. This suggests that ChatGPT has limitations in its capacity to rectify errors after multiple debugging cycles, which is consistent with human behaviors. Effect of One-shotIn-context learning Brown et al. (2020) allows the model to learn from input examples, enabling it to understand and manage each new task. This method has been validated as an effective strategy for enhancing the performance of model inference Peng et al. (2023); Liu et al. (2023). Therefore, we explore one-shot learning for ChatGPT on code translation. We investigate three one-shot learning sample selection strategies. Descriptions of the strategies and the corresponding prompts are in Appendix A.7. Table 8 shows that all three One-shot learning strategies effectively improve DSR@0 of ChatGPT over the Zero-shot baseline. The Experiment#2 strategy (provided contextual example has both same source and target languages as the original task) achieves the best performance, yielding **1.72%** absolute gain in DSR@0, with Experiment #1 (example has the same target language but different source language) and #3 (example has different source and target languages) following closely with 1.14% and 0.29% absolute gains, respectively. 
These results show that One-shot learning entirely tailored to the translation requirements is most effective in boosting code translation performance for ChatGPT. The results corroborate previous findings in natural language translation Peng et al. (2023) that the performance of ChatGPT is sensitive to the provided contextual example in One-shot learning. Effect of Chain-of-ThoughtChain-of-Thought (CoT) allows the model to simulate an orderly and structured way of thinking by sorting out the thinking process. It helps guide the model to output the final answer step by step Wei et al. (2022); Peng et al. (2023); Kojima et al. (2022). For code translation, we investigate four CoT strategies. Detailed descriptions and translation prompts for each strategy are in Appendix A.8. As shown in Table 8, CoT degrades executability of the translated code. In Experiment #2, DSR@0 even declines by 6% absolutely. We study the translation results of ChatGPT and find that when CoT strategies are applied, the model tends to translate the source code line by line, neglecting compatibility issues between libraries and functions in different languages. CoT also compromises the global planning ability of the model. These observations are consistent with the findings in Peng et al. (2023) that CoT may lead to word-by-word translations of natural language, thereby degrading the translation quality. Fuzzy ExecutionTo address the limitations of existing evaluation metrics and our AutoTransExecuter, we propose another novel code translation evaluation metric **fuzzy execution** using LLMs in Section Limitations, inspired by recent progress in using LLMs as evaluation metrics for NLP tasks. Our preliminary studies evaluates the performance of ChatGPT for predicting whether a given code can be executed or not, and if executable, also for predicting the executed output. Experimental results show that using ChatGPT for fuzzy execution is not yet practical and demands future research. ## 5 Conclusion We construct CodeTransOcean, a comprehensive code translation benchmark that includes multilingual and cross-framework datasets. We demonstrate that multilingual modeling has remarkable potential in enhancing code translation quality. We also reveal the superior code translation capability of ChatGPT and advanced strategies lead to significant performance gains. Moreover, we introduce fuzzy execution that may overcome limitations of existing metrics but requires future research. In summary, we provide a comprehensive suite of resources, tools, and baselines for code translation. \begin{table} \begin{tabular}{l c c|c c} \hline \hline **Strategy** & **Expts \#num** & **DSR@0** & **Strategy** & **Expts \#num** & **DSR@0** \\ \hline Baseline & – & **48.57\%** & & 1 & 46.00\% \\ \hline \multirow{3}{*}{One-shot} & 1 & 49.71\% & CoT & 2 & 42.57\% \\ & 2 & **50.29\%** & & 3 & **48.29\%** \\ \cline{1-1} & 3 & 48.86\% & & 4 & 45.43\% \\ \hline \hline \end{tabular} \end{table} Table 8: Performance of ChatGPT with One-shot and CoT strategies compared to the Zero-shot Baseline. Details of Expt #num are in Appendix A.7 and A.8. ## 6 Limitations Existing match-based evaluation metrics for code translation [12, 13, 14, 15, 16] focus solely on semantics, overlooking executability of the code and the functional equivalence under different implementations. 
Execution-based metrics [13, 14, 15, 16, 17] that require providing test cases are expensive to conduct in practice, and the significant overhead of executing numerous test cases and the heightened security risks during the execution process remain unresolved. It is crucial to establish an evaluation metric that overcomes these limitations. Our proposed DSR@K and the automated AutoTransExecuter aim to measure the executability of the code and reflect the real-world software development scenarios. However, AutoTransExecuter currently only supports Python as the target language. This is mainly due to the fact that different programming languages necessitate distinct runtime environments and libraries, making it particularly challenging to automatically detect and install the required dependencies for each code. While certain existing tools, such as Dynatrace12, can carry out dependency detection, the range of supported programming languages remains limited. Moreover, the configuration methods for compilers vary substantially among different programming languages, which further complicates automated configuration. In addition, fully automated execution systems could be exploited by malicious code, thus necessitating further security measures. Therefore, achieving this goal requires overcoming many technical and practical difficulties. Footnote 12: [https://www.dynatrace.com/platform/artificial-intelligence/dependency-detection/](https://www.dynatrace.com/platform/artificial-intelligence/dependency-detection/) To address limitations of existing evaluation metrics and limitations of AutoTransExecuter, we propose another novel code translation evaluation metric **fuzzy execution**. Recent studies have begun to utilize LLMs as evaluation metrics in the field of NLP [13, 14, 15, 16, 17, 18]. Inspired by these works, we create a new dataset **ExecuteStatus** by randomly selecting 300 executable samples from MultilingualTrans and 300 non-executable samples from the translation results of ChatGPT. Each entry in this dataset includes the execution status and, if executable, the result of the execution. We use ExecuteStatus and AutoTransExecuter to evaluate the performance of ChatGPT for predicting whether a given code can be executed or not, and if executable, also predict the executed output. The Zero-shot prompts are shown in Table 18 in Appendix. For the Few-shot strategy, in addition to the Zero-shot baseline, we include an example of executable code and an example of non-executable code, as detailed in Table 18. We define fuzzy execution as first testing the consistency between the actual pass rate and the predicted pass rate of ChatGPT, followed by further testing the accuracy in predicting execution results using ChatGPT without relying on a compiler. Since we are interested in the ability of ChatGPT to identify samples that cannot actually be executed accurately, we present the confusion matrix in Table 9 based on the results. To evaluate the performance of ChatGPT on the fuzzy execution prediction task, we use the standard accuracy, precision, recall, and F1 scores. Experimental results based on these evaluation metrics are in Table 10. The low accuracy, recall and F1 scores show that ChatGPT still has difficulty in identifying errors in the code, exhibiting about an 88% tendency to predict that the code is executable. 
Overall, ChatGPT has low accuracy in the binary classification task of "whether it can be executed", and its ability to predict execution results, being at a scant 4%, clearly \begin{table} \begin{tabular}{c c c c} \hline \hline **Metrics** & **Calculation formula** & **Zero-Shot** & **Few-Shot** \\ \hline **Accuracy** & \(\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FN}+\text{FP}}\) & 59.00\% & 58.67\% \\ **Precision** & \(\frac{\text{TP}}{\text{TP}+\text{FP}}\) & 88.57\% & 93.55\% \\ **Recall** & \(\frac{\text{TP}}{\text{TP}+\text{FN}}\) & 20.67\% & 19.33\% \\ **F1 scores** & \(2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}\) & 33.52\% & 32.04\% \\ \hline \hline \end{tabular} \end{table} Table 10: Performance of ChatGPT on predicting fuzzy execution. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{4}{c}{**Zero-Shot**} & \multicolumn{4}{c}{**Few-Shot**} \\ \hline **TN** & **FP** & **FN** & **TP** & **TN** & **FP** & **FN** & **TP** \\ \hline 292 & 8 & 238 & 62 & 294 & 4 & 242 & 58 \\ \hline ✓ 12 & \(\times\) 280 & & & ✓ 14 & \(\times\) 282 & & \\ \hline \hline \end{tabular} \end{table} Table 9: Confusion matrix of fuzzy execution prediction by ChatGPT with Zero-shot and Few-shot settings. requires further enhancement. Thus, using ChatGPT for fuzzy execution is not yet practical (Liu et al., 2023). Despite this, fuzzy execution with LLMs holds the potential to overcome the deficiencies of current code translation evaluation metrics. We will continue this exploration in future work.
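As a quick sanity check, the Zero-shot column of Table 10 can be reproduced directly from the confusion matrix in Table 9; the counts are consistent with treating non-executable code as the positive class, matching the formulas in Table 10.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Zero-shot counts from Table 9: TN=292, FP=8, FN=238, TP=62
acc, prec, rec, f1 = classification_metrics(tp=62, fp=8, fn=238, tn=292)
print(f"{acc:.2%} {prec:.2%} {rec:.2%} {f1:.2%}")
# -> 59.00% 88.57% 20.67% 33.51%  (Table 10 rounds F1 to 33.52%)
```

The predicted-executable rate implied by these counts, (TN+FN)/600 ≈ 88%, matches the tendency noted above.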
2309.01792
Explicit families of congruences for the overpartition function
In this article we exhibit new explicit families of congruences for the overpartition function, making effective the existence results given previously by Treneer. We give infinite families of congruences modulo $m$ for $m = 5, 7, 11$, and finite families for $m = 13, 17, 19$.
Nathan C. Ryan, Nicolás Sirolli, Jean Carlos Villegas-Morales, Qi-Yang Zheng
2023-09-04T20:09:00Z
http://arxiv.org/abs/2309.01792v3
# Explicit families of congruences for the overpartition function ###### Abstract. In this article we exhibit new explicit families of congruences for the overpartition function, making effective the existence results given previously by Treneer. We give infinite families of congruences modulo \(m\) for \(m=3,5,7,11\), and finite families for \(m=13,17,19\). Key words and phrases:Overpartitions, Congruences 2020 Mathematics Subject Classification: Primary: 11P83 - Secondary: 11F37, 11M36 ## 1. Introduction Let \(p(n)\) be the number of partitions of a positive integer \(n\); that is, the number of ways \(n\) can be written as a sum of non-increasing positive integers. Ramanujan [1] proved congruences of the form: \[p(5n+4) \equiv 0\pmod{5},\] \[p(7n+5) \equiv 0\pmod{7},\] \[p(11n+6) \equiv 0\pmod{11},\] for every \(n\). For decades it was difficult to find more congruences like these; nevertheless, Ono proved in [1] that for each prime \(m\geq 5\) there exists an infinite family of congruences for the partition function modulo \(m\): more precisely, he proved that a positive proportion of the primes \(\ell\) are such that \[p\left(\frac{m\ell^{3}n+1}{24}\right)\equiv 0\pmod{m}.\] for every \(n\) coprime to \(\ell\). The number of overpartitions \(\overline{p}(n)\) of a positive integer \(n\) is defined to be the number of ways in which \(n\) can be written as a non-increasing sum of positive integers in which the first occurrence of a number may be overlined (see [1]). The numbers of both partitions and overpartitions can be described in terms of eta-quotients; in particular, they are known to be coefficients of weakly holomorphic modular forms of half-integral weight, with integral coefficients. Treneer showed in [11] that Ono's existence results were valid, more generally, for the coefficients of such modular forms. In the particular case of the overpartition function, her results imply that for every prime \(m\geq 5\), for sufficiently large \(r\), a positive proportion of the primes \(\ell\equiv-1\pmod{16m}\) have the property that \[\overline{p}\left(m^{r}\ell^{3}n\right)\equiv 0\pmod{m}.\] for every \(n\) coprime to \(m\ell\). We will see in Theorem 4.11 that we can take \(r=1\). The main goal of this article is to show explicit instances of these (families of) congruences, as well as for certain variations similar to those considered by Ono for the partition function. Weaver devised a strategy in [11] for making Ono's results explicit: she exhibited 76,065 new families of congruences for the partition function by finding congruences between its generating function and appropriate holomorphic modular forms, and then verifying a finite number of congruences for the partition function. Her computations were extended by Johansson [12], who used efficient algorithms for computing the partition function to find more than \(2.2\cdot 10^{10}\) such families of congruences. Using Weaver's techniques along with the theory of Eisenstein series of half-integral weight from [10], we were able to find _infinitely many_ families of congruences for the overpartition function. Our first main results are the following two theorems. For an odd prime \(m\), throughout the article we denote \[k_{m}=\begin{cases}m+2,&m=3\\ m-2,&m>3.\end{cases}\] **Theorem 1.1**.: _Let \(m\in\{3,5,7,11\}\), and let \(\ell\) be an odd prime such that \(\ell^{k_{m}-2}\equiv-1\pmod{m}\). 
Then_ \[\overline{p}\left(m\ell^{3}n\right)\equiv 0\pmod{m}\] _for every \(n\) prime to \(\ell\)._ We remark that for \(m=3\) and \(m=5\) the result was proved, respectively, in [14, Coro. 1.5] and [10, Prop. 1.4]. We include those cases in our results to highlight our unified approach. **Theorem 1.2**.: _Let \(m\in\{3,5,7,11\}\), and let \(\ell\) be an odd prime such that \(\ell^{k_{m}-2}\equiv-1+\epsilon_{m,\ell}\,\ell^{\frac{k_{m}-3}{2}}\pmod{m}\), with \(\epsilon_{m,\ell}\in\{\pm 1\}\). Then_ \[\overline{p}\left(m\ell^{2}n\right)\equiv 0\pmod{m}\] _for every \(n\) prime to \(\ell\) such that_ \[\left(\frac{(-1)^{\frac{k_{m}-1}{2}}n}{\ell}\right)=\epsilon_{m,\ell}.\] For primes \(m\geq 13\) the appearance of cusp forms in level \(16\) and weight \(k_{m}/2\) makes it more difficult to find infinitely many families of congruences. Using the results from [1] for efficiently computing the overpartition function, we obtain the following families of congruences. **Theorem 1.3**.: _Let \(m,\ell\) be primes as in Table 1. Then_ \[\overline{p}\left(m\ell^{3}n\right)\equiv 0\pmod{m}\] _for every \(n\) prime to \(\ell\)._ **Theorem 1.4**.: _Let \(m,\ell\) be primes, and let \(\epsilon_{m,\ell}\in\{\pm 1\}\) be as in Table 2. Then_ \[\overline{p}\left(m\ell^{2}n\right)\equiv 0\pmod{m}\] for every \(n\) prime to \(\ell\) such that_ \[\left(\frac{(-1)^{\frac{k_{m}-1}{2}}n}{\ell}\right)=\epsilon_{m,\ell}.\] We point out that using different techniques, in [11, 12] the authors found (finite) families of congruences for the overpartition function modulo \(m\) for \(m=3,5,7\); see also [11] for \(m=5\), and [10] for powers of \(m=3\). As far as we know, the results in this article give the first known congruences for \(m>7\). The rest of the article is organized as follows. In the next section we give the necessary notation and preliminaries regarding half-integral weight modular forms and eta-quotients. In Section 3 we state the results we need on Eisenstein series of half-integral weight and level \(16\). We conclude the article with the proofs of our main results in Section 4. ## 2. Preliminaries ### Half-integral weight modular forms We refer the reader to [24, Sect. 5] for details on this subsection. Given a non zero integer \(m\) we denote by \(\chi_{m}\) the primitive Dirichlet character such that \(\chi_{m}(a)=\left(\frac{m}{a}\right)\) for every \(a\) such that \((a,4m)=1\). Given an odd integer \(k\geq 3\), we denote \(\lambda=\frac{k-1}{2}\). Furthermore, given a positive integer \(m\) we denote \(\omega_{n}=\chi_{m}\), with \(m=(-1)^{\lambda}n\). Given \(k\) as above, a positive integer \(N\) divisible by \(4\) and a character \(\chi\) modulo \(N\), we denote by \(\mathcal{M}_{k/2}(N,\chi)\) the space of holomorphic modular forms of weight \(k/2\), level \(N\) and character \(\chi\). We denote by \(\mathcal{S}_{k/2}(N,\chi)\) and \(\mathcal{E}_{k/2}(N,\chi)\) the subspace of cusp forms and the Eisenstein subspace, respectively. When \(\chi\) is the trivial character, we omit it from the notation. We consider the following operators acting on half-integral weight modular forms. Let \(g=\sum_{n\geq 0}a(n)q^{n}\in\mathcal{M}_{k/2}(N)\). \begin{table} \begin{tabular}{l|l} \(m\) & \(\ell\) \\ \hline \(13\) & \(1811,1871,1949,2207,3301,4001,4079,4289,4931\) \\ \(17\) & \(2039,2719,3331,4079\) \\ \(19\) & \(151,1091,2659,3989\) \\ \end{tabular} \end{table} Table 1. Congruences for primes \(m\geq 13\). See Theorem 1.3. 
\begin{table} \begin{tabular}{l|l} \(m\) & \((\ell,\epsilon_{m,\ell})\) \\ \hline \(13\) & \((431,1),(2459,1),(4513,1),(4799,1)\) \\ \(17\) & \((167,1),(541,1),(911,-1),(1013,-1),(1153,1)\), \\ & \((1867,1),(1931,-1),(2543,-1),(2683,1),(2887,1)\), \\ & \((3019,-1),(3023,1),(3329,1),(4243,-1),(4651,-1)\) \\ \(19\) & \((2207,-1)\) \\ \end{tabular} \end{table} Table 2. Congruences for primes \(m\geq 13\). See Theorem 1.4. * The Fricke involution \(W(N)\), given by \[W(N) :\mathcal{M}_{k/2}(N,\chi)\to\mathcal{M}_{k/2}(N,\chi\chi_{N}),\] \[(g|W(N))(z)=(Nz)^{-k/2}g(-1/Nz).\] We include here an extra factor of \(N^{-k/2}\) not present in [12]. * For a prime \(\ell\), the Hecke operator \(T(\ell^{2})\), given by \[T(\ell^{2}) :\mathcal{M}_{k/2}(N,\chi)\to\mathcal{M}_{k/2}(N,\chi),\] (2.1) \[g|T(\ell^{2})=\sum_{n\geq 0}\left(a(\ell^{2}n)+\chi(\ell)\ell^{\lambda- 1}\omega_{n}(\ell)a(n)+\chi(\ell^{2})\,\ell^{2\lambda-1}a(n/\ell^{2})\right)q^ {n}.\] * For an integer \(m\geq 1\), the \(V(m)\) operator, given by \[V(m) :\mathcal{M}_{k/2}(N,\chi)\to\mathcal{M}_{k/2}(mN,\chi\chi_{m}),\] \[g|V(m)=\sum_{n\geq 0}a(n)q^{mn}.\] * For an integer \(m\geq 1\), the \(U(m)\) operator, given by \[U(m) :\mathcal{M}_{k/2}(N,\chi)\to\mathcal{M}_{k/2}(M,\chi\chi_{m}),\] \[g|U(m)=\sum_{n\geq 0}a(mn)q^{n},\] where \(M\) is the smallest multiple of \(N\) which is divisible by every prime dividing \(m\), and such that the conductor of \(\chi_{m}\) divides \(M\). The latter two act as well on rings of formal power series. The following is the Sturm bound for general weights. Its proof follows from the integral weight case; see [13, Prop. 4.1]. **Proposition 2.2**.: _Let \(k\geq 3\) be an integer, and let \(m\) be a prime. Suppose that \(g=\sum_{n\geq 0}a(n)q^{n}\in\mathcal{M}_{k/2}(N)\cap\mathbb{Z}\llbracket q\rrbracket\). Let_ \[n_{0}=\left\lfloor\frac{k}{24}\cdot[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(N) ]\right\rfloor.\] _If \(a(n)\equiv 0\pmod{m}\) for \(1\leq n\leq n_{0}\), then \(g\equiv 0\pmod{m\mathbb{Z}\llbracket q\rrbracket}\)._ The result is also valid for proving equalities, namely when \(m=0\). ### Eta-quotients Let \(\eta(z)\) denote the Dedekind eta function, which is given by \[\eta(z)=q^{\frac{1}{24}}\prod_{n=1}^{\infty}\left(1-q^{n}\right),\qquad q=e^{ 2\pi iz}.\] Given a finite set \(X=\{(\delta,r_{\delta})\}\subseteq\mathbb{Z}_{>0}\times\mathbb{Z}\), denote \(s_{X}=\sum\delta r_{\delta}\). Assuming that \(s_{X}\equiv 0\pmod{24}\), the eta-quotient defined by \(X\) is \[\eta^{X}(z)=\prod_{X}\eta(\delta z)^{r_{\delta}}=q^{\frac{s_{X}}{24}}\prod_{X }\prod_{n=1}^{\infty}\left(1-q^{\delta n}\right)^{r_{\delta}}\qquad\in q^{ \frac{s_{X}}{24}}\left(1+q\mathbb{Z}\llbracket q\rrbracket\right). \tag{2.3}\] Note that \(1/\eta^{X}\) is also an eta-quotient. Let \(k=\sum_{X}r_{\delta}\), and let \(N\) be the smallest multiple of every \(\delta\), and of \(4\) if \(k\) is odd, such that \[N\sum_{X}\frac{r_{\delta}}{\delta}\equiv 0\pmod{24},\] Finally, letting \(m^{\prime}=\prod_{X}\delta^{\tau_{\delta}}\) we let \(m=m^{\prime}\) for even \(k\), and \(m=2m^{\prime}\) for odd \(k\). Then (see [1, Thm. 3] and [13, Coro. 2.7]) we have the following result. **Proposition 2.4**.: _With the notation as above, \(\eta^{X}\) is a weakly holomorphic modular form of weight \(k/2\), level \(N\) and character \(\chi_{m}\)._ Thus, \(\eta^{X}\) is holomorphic and nonzero in the upper half-plane, but it can have poles and zeros at the cusps. 
Furthermore, following [10], if \(\gcd(a,c)=1\), then the order of vanishing of \(\eta^{X}\) at a cusp \(s=a/c\in\mathbb{Q}\cup\{\infty\}\) is given by \[\operatorname{ord}_{s}\left(\eta^{X}\right)=\frac{N}{24\gcd(c^{2},N)}\,\sum_ {X}\gcd(c,\delta)^{2}\,\frac{\tau_{\delta}}{\delta}. \tag{2.5}\] **Proposition 2.6**.: _Let \(\Delta_{2}=\eta^{8}(z)\eta^{8}(2z)\). Then \(\Delta_{2}\in\mathcal{S}_{8}(2)\). Furthermore,_ \[\mathcal{M}_{k}(2) \to\mathcal{S}_{k+8}(2),\] \[g \mapsto g\cdot\Delta_{2}\] _is an isomorphism._ Proof.: The above proposition gives that \(\Delta_{2}\in\mathcal{M}_{8}(2)\). Furthermore, by (2.5) we see that \(\Delta_{2}\) has simple zeros at the cusps for \(\Gamma_{0}(2)\), namely \(0\) and \(\infty\). Hence the second claim follows, since \(\Delta_{2}\) does not vanish on the upper half-plane. **Eisenstein spaces of integral weight and level 2.** We consider the subgroup \(\Gamma_{\infty}=\{\pm\left(\begin{smallmatrix}1&n\\ 0&1\end{smallmatrix}\right)\,:\,n\in\mathbb{Z}\}\leq\operatorname{SL}_{2}( \mathbb{Z})\). Let \(k\geq 2\) be an even integer. Denote \[E_{k}(z) =\sum_{\gamma\in\Gamma_{\infty}\setminus\operatorname{SL}_{2}( \mathbb{Z})}\frac{1}{(c_{\gamma}z+d_{\gamma})^{k}}\quad\in\mathcal{M}_{k}(1),\] \[D_{2} =2E_{2}|V(2)-E_{2}\quad\in\mathcal{M}_{2}(2).\] Then \(E_{k}\in 1+q\mathbb{Z}[\![q]\!]\). Furthermore, \[E_{k}=1-\frac{2k}{B_{k}}\sum_{n\geq 1}\sigma_{k-1}(n)q^{n}. \tag{2.7}\] The following result will not be used in our proofs, but explains the type of forms \(h_{m}\) appearing in Table 4 below (see also Remark 4.10). Though it is probably well known, we give a proof for the sake of completeness. **Proposition 2.8**.: _Let \(D_{2},E_{4}\) be as above. Then \(\left\{D_{2}^{a}E_{4}^{b}\,:\,2a+4b=k\right\}\) is a basis for \(\mathcal{M}_{k}(2)\)._ Proof.: Denote by \(\mathcal{V}_{k}\) the subspace of \(\mathcal{M}_{k}(2)\) generated by \(\left\{D_{2}^{a}E_{4}^{b}\,:\,2a+4b=k\right\}\). Let \(\Delta_{2}=\eta^{8}(z)\eta^{8}(2z)\). Using Proposition 2.2 and (2.7) we get that \[576\Delta_{2}=5D_{2}^{2}E_{4}-E_{4}^{2}-4D_{2}^{4}.\] Hence \(\Delta_{2}\in\mathcal{V}_{8}\). Thus, to prove that \(\mathcal{M}_{k}(2)=\mathcal{V}_{k}\) for every \(k\), by Proposition 2.6 it suffices to show that for every \(f\in\mathcal{M}_{k+8}(2)\) there exists \(g\in\mathcal{V}_{k+8}\) such that \(f-g\in\mathcal{S}_{k+8}(2)\). For this purpose it suffices to prove that there exist \(g_{\infty},g_{0}\in\mathcal{V}_{k+8}\) such that \(g_{\infty}\) does not vanish at \(\infty\), and \(g_{0}\) vanishes at \(\infty\) but not at \(0\); equivalently, \(g_{0}\) vanishes at \(\infty\) but is not cuspidal. We can clearly let \(g_{\infty}=D_{2}^{a}\) with \(a=\frac{k+8}{2}\). In the case of \(g_{0}\), it suffices to consider \(k\in\{0,2,4,6\}\). Then using explicit bases for \(\mathcal{S}_{k+8}(2)\) we see that we can let \(g_{0}\) be as in Table 3. Finally, the independence of the forms \(D_{2}^{a}E_{4}^{b}\) follows using the formulas for \(\dim(\mathcal{M}_{k}(2))\) (see [1]). ## 3. Eisenstein spaces of half-integral weight and level 16 Wang and Pei ([20]) considered the Eisenstein spaces of half-integral weights, giving bases of eigenforms for these spaces in the case of level \(4D\), with \(D\) odd and squarefree. Relying on their definitions and results, we consider the case of level 16. The main result of this section is the following. **Proposition 3.1**.: _Let \(\ell\geq 3\) be prime, and let \(k\geq 3\) be an odd integer. 
Then \(T(\ell^{2})\) acts by multiplication by \(\sigma_{k-2}(\ell)\) on \(\mathcal{E}_{k/2}(16)\)._ We also give in Proposition 3.5 exact formulas for the coefficients of the Eisenstein series, which are needed to prove the congruence in (4.13). As in Section 2, let \(\Gamma_{\infty}=\{\pm\left(\begin{smallmatrix}1&n\\ 0&1\end{smallmatrix}\right):n\in\mathbb{Z}\}\leq\operatorname{SL}_{2}( \mathbb{Z})\). Let \(k\geq 3\) be an odd integer. Denote \(\lambda=\frac{k-1}{2}\). Let \(N\in\{4,8\}\). For \(\gamma\in\Gamma_{0}(N)\), let \(j(\gamma,z)\) be the automorphy factor of weight \(1/2\). For \(k>3\) we denote \[E_{k,N}(z) =\sum_{\gamma\in\Gamma_{\infty}\setminus\Gamma_{0}(N)}\frac{1}{ j(\gamma,z)^{k}},\] \[E_{k,N}^{\prime} =\tfrac{2^{k}N^{\lambda}}{1-(-1)^{\lambda}i}\cdot E_{k,N}|W(N).\] For \(k=3\) we consider the difference \(E_{3,N}-2\sqrt{N}\,E_{3,N}^{\prime}\) defined by the formulas above, which, for simplicity, we will denote by \(E_{3,N}\). We start by giving the Fourier expansions of these Eisenstein series, following [20]. For this purpose we introduce the following notation, which will not used in other parts of the article. For an even integer \(v\) denote \[c_{k}^{\pm}(v)=\frac{1-2^{(2-k)v/2}}{1-2^{2-k}}\pm 2^{(2-k)v/2}.\] \begin{table} \begin{tabular}{l l} \(k\) & \(g_{0}\) \\ \hline \(0\) & \(D_{2}^{4}-E_{4}^{2}\) \\ \(2\) & \(D_{2}^{5}-D_{2}E_{4}^{2}\) \\ \(4\) & \(D_{2}^{6}-E_{4}^{3}\) \\ \(6\) & \(D_{2}^{7}-D_{2}E_{4}^{3}\) \\ \end{tabular} \end{table} Table 3. Forms in \(\mathcal{V}_{k+8}\) vanishing at \(\infty\) but not at \(0\). Used in the proof of Proposition 2.8. Given a positive integer \(n\), let \(v_{n}=\operatorname{val}_{2}(n)\) and \(n^{\prime}=(-1)^{\lambda}n/2^{v_{n}}\), and denote \[C_{k}(n) =\begin{cases}c_{k}^{-}(v_{n}-1),&2\nmid v_{n},\\ c_{k}^{-}(v_{n}),&2\mid v_{n},\,n^{\prime}\equiv 3\pmod{4},\\ c_{k}^{+}(v_{n})+2^{((2-k)v_{n}+(3-k))/2}\left(\frac{n^{\prime}}{2}\right),&2 \mid v_{n},\,n^{\prime}\equiv 1\pmod{4},\\ \end{cases}\] \[\gamma_{k,4}(n) =\begin{cases}C_{k}(n),&k>3,\\ C_{3}(n)-2,&k=3,\end{cases}\] \[\gamma_{k,8}(n) =\begin{cases}0,&(-1)^{\lambda}n\equiv 2,3\pmod{4},\\ C_{k}(n)-1,&(-1)^{\lambda}n\equiv 0,1\pmod{4},\,k>3,\\ C_{3}(n)-2,&(-1)^{\lambda}n\equiv 0,1\pmod{4},\,k=3.\end{cases}\] Let \(\omega\) denote a Dirichlet character of conductor \(f\). Let \(B_{\lambda}\) denote the \(\lambda\)-th Bernoulli polynomial. Then we consider the generalized Bernoulli number \[B_{\lambda,\omega}=f^{\lambda-1}\sum_{a=1}^{f}\omega(a)B_{\lambda}(\tfrac{a}{f})\] Furthermore, letting \(\mu\) denote the Mobius function, for each positive integer \(n\) we denote \[\beta_{\lambda,\omega}(n)=\sum_{a,b}\mu(a)\omega(a)a^{-\lambda}b^{-2\lambda+1},\] where \(a,b\) run over all positive odd integers such \((ab)^{2}\mid n\). Recall that for each positive integer \(m\) we consider the primitive Dirichlet character \(\omega_{m}\) such that for \((a,4m)=1\) we have \[\omega_{m}(a)=\bigg{(}\frac{(-1)^{\lambda}m}{a}\bigg{)}.\] We denote by \(f_{m}\) its conductor, and we remark that \(f_{m}/m\) is the square of a rational number. We let \[\alpha_{\lambda,m}=\frac{\sqrt{f_{m}/m}\,B_{\lambda,\omega_{m}}}{f_{m}^{ \lambda}\,B_{2\lambda}}\frac{1-\omega_{m}(2)2^{-\lambda}}{1-2^{-2\lambda}}. \tag{3.2}\] Finally, for each positive integer \(n\) we denote \[a_{k,N}(n) =\alpha_{\lambda,n}\,\beta_{\lambda,\omega_{n}}(n)\,\gamma_{k,N} (n)\,n^{\lambda}, \tag{3.4}\] \[a^{\prime}_{k,N}(n) =\alpha_{\lambda,nN}\,\beta_{\lambda,\omega_{nN}}(n)\,n^{\lambda}. 
\tag{3.3}\] **Proposition 3.5**.: _For \(N\in\{4,8\}\) and odd \(k\geq 3\) we have_ \[E_{k,N} =1+\sum_{n\geq 1}a_{k,N}(n)\,q^{n},\quad k\geq 3,\] \[E^{\prime}_{k,N} =\sum_{n\geq 1}a^{\prime}_{k,N}(n)\,q^{n},\quad k>3.\] The proof is essentially given in [12]; their formulas for the coefficients of these Eisenstein series involve values \(L(\lambda,\omega_{m})\) of \(L\)-series of quadratic characters at positive integers. The latter are well known; we use them in the result below. Given a positive integer \(\lambda\) we denote \[e_{\lambda}=\frac{2^{2+\lambda-k}\big{(}\frac{2\lambda+3}{2}\big{)}\lambda!}{(1-2 ^{-2\lambda})B_{2\lambda}\pi^{\lambda}}.\] **Lemma 3.6**.: _For every positive integer \(m\) we have that_ \[\alpha_{\lambda,m}=e_{\lambda}\,\left(1-\omega_{m}(2)2^{-\lambda}\right)\,L( \lambda,\omega_{m})\,m^{-1/2}.\] _Moreover, \(\operatorname{sgn}(\alpha_{\lambda,m})=\left(\frac{2\lambda+1}{2}\right)\)._ Proof.: From [13, p. 337] and [13, Thm. 9.17] we have that for every quadratic character \(\omega\) with conductor \(f\) and such that \(\omega(1)=(-1)^{\lambda}\) we have that \[L(\lambda,\omega)=\frac{\big{(}\frac{2\lambda+3}{2}\big{)}2^{\lambda-1}\pi^{ \lambda}\sqrt{f}}{\lambda!f^{\lambda}}\,B_{\lambda,\omega},\] from which the first claim follows. The second claim follows from the fact that for every such \(\omega\) we have that \(L(\lambda,\omega)>0\); hence \[\operatorname{sgn}(\alpha_{\lambda,m})=\operatorname{sgn}(e_{\lambda})= \left(\frac{2\lambda+3}{2}\right)\operatorname{sgn}(B_{2\lambda})=\left( \frac{2\lambda+1}{2}\right).\qed\] **Corollary 3.7**.: _Let \(n\) be a squarefree positive integer. Then \(a^{\prime}_{k,N}(n)\neq 0\). Furthermore, \(a_{k,N}(n)=0\) if and only if \(\gamma_{k,N}(n)=0\)._ Proof of Proposition 3.5.: Using the well known formulas for \(\zeta(2\lambda)\) and \(\Gamma(\lambda+1/2)\), and using that \[\frac{(-i)^{\lambda+1/2}\left(1+(-1)^{\lambda}i\right)}{\sqrt{2}}=\left(\frac {2\lambda+1}{2}\right),\] we obtain that \[e_{\lambda}=\frac{(-2\pi i)^{\lambda+1/2}\left(1+(-1)^{\lambda}i\right)}{2^{2 \lambda+1}\Gamma(\lambda+1/2)\zeta(2\lambda)(1-2^{-2\lambda})}.\] Then the result follows straightforwardly from Lemma 3.6 and the formulas [13, (2.30), (2.33), (2.35), (2.36) and (2.38)]. Proposition 3.5 shows that \(E_{k,N}\) and \(E^{\prime}_{k,N}\), which a priori have their coefficients in a cyclotomic field ([20, Thm. 2.3]), actually have rational coefficients. The following results shows that, as in the integral weight case (see (2.7)), their denominators are controlled by \(k\) and can be described in terms of Bernoulli numbers. Its proof will require the following result from Carlitz ([14, Thms. 1 and 3]). **Lemma 3.8**.: _Let \(d\) be a fundamental discriminant, and let \(\lambda\) be a positive integer._ 1. _If_ \(d=-4\) _and_ \(\lambda\) _is odd, then_ \(2B_{\lambda,\chi_{d}}/\lambda\in\mathbb{Z}\)_._ 2. _If_ \(d=\pm p\)_, with_ \(p>2\) _prime, then_ \(B_{\lambda,\chi_{d}}/\lambda\in\mathbb{Z}_{(p)}\)_. Moreover, if_ \(2\lambda/(p-1)\) _is an odd integer, then_ \(pB_{\lambda,\chi_{d}}\in\mathbb{Z}\)_._ 3. _Otherwise,_ \(B_{\lambda,\chi_{d}}/\lambda\in\mathbb{Z}\)_._ We denote \[S_{\lambda}=\begin{cases}2^{\operatorname{val}_{p}(\lambda)+1},&2\mid\lambda, \\ 1,&2\nmid\lambda.\end{cases}\] Furthermore, we denote \(S^{\prime}_{\lambda,N}=S_{\lambda}\) (see Remark 3.11 below). 
**Proposition 3.9**.: _For \(N\in\{4,8\}\) and odd \(k\geq 3\) we have_ \[E_{k,N} \in 1+\tfrac{\lambda}{2^{\lambda-1}(2^{2\lambda}-1)B_{2\lambda}S_{ \lambda}}\,\mathbb{Z}[\![q]\!],\] \[E^{\prime}_{k,N} \in\tfrac{\lambda 2^{\lambda}}{(2^{2\lambda}-1)B_{2\lambda}S_{ \lambda,N}^{\lambda}}\,\mathbb{Z}[\![q]\!].\] Proof.: We prove the claim for \(E_{k,N}\); the proof for \(E^{\prime}_{k,N}\) follows by similar arguments, using (3.4). Let \(n\) be a positive integer. Recalling that \(f_{n}\) denotes the conductor of \(\omega_{n}\), write \(n=f_{n}q_{n}^{2}=f_{n}^{\prime}(q_{n}^{\prime})^{2}\) with \(f_{n}^{\prime}\) squarefree, so that \(\sqrt{f_{n}/n}=1/q_{n}\) and \(2q_{n}/q_{n}^{\prime}\in\{1,2\}\). Moreover, let \(w_{n}=\operatorname{val}_{2}(q_{n}^{\prime})\) and write \(q_{n}^{\prime}=2^{w_{n}}q_{n}^{\prime\prime}\). Then letting \[r_{n} =q_{n}^{\prime\prime}{}^{2\lambda-1}\,\beta_{\lambda,\omega_{n}}(n),\] \[s_{n} =S_{\lambda}\left(2^{\lambda}-\omega_{n}(2)\right)B_{\lambda, \omega_{n}}/\lambda\] \[t_{n} =(2q_{n}/q_{n}^{\prime\prime})^{2\lambda-1}\,\gamma_{k,N}(n),\] and using (3.2), according to (3.3) we can decompose \[a_{k,N}(n)=\frac{\lambda}{2^{\lambda-1}\,\left(2^{2\lambda}-1\right)B_{2 \lambda}\,S_{\lambda}}\cdot r_{n}\,s_{n}\,t_{n}\,.\] From the definition of \(\beta_{\lambda,\omega}(n)\) it is easy to see that \(r_{n}\in\mathbb{Z}\). Furthermore, by the definition of \(\gamma_{k,N}(n)\), we have that \(t_{n}\in\mathbb{Z}\). To prove the result it suffices then to show that \(s_{n}\in\mathbb{Z}\). First assume that \(\lambda\) is odd and \(n\) is a square. Then \(s_{n}/S_{\lambda}=2B_{\lambda,\omega_{n}}/\lambda\), hence the claim follows by part (a) of Lemma 3.8. Assume now that \(\lambda\) is odd or \(n\) is not a square. In case (c) of Lemma 3.8, the claim follows immediately. In case (b), let \(p=f_{n}\). Then the claim follows from quadratic reciprocity and \[\begin{pmatrix}2\cr p\end{pmatrix}^{\frac{2\lambda}{p-1}}\equiv 2^{\lambda} \pmod{p^{\operatorname{val}_{p}(\lambda)+1}}, \tag{3.10}\] which holds for even \(2\lambda/(p-1)\) as well. Finally, assume that \(\lambda\) is even and \(n\) is a square. Then \(s_{n}=(2^{\lambda}-1)B_{\lambda}/\lambda\) (unless \(n=1\), when they differ by a sign). In this case the result follows from (3.10) and a result of Von Staudt, which asserts that the denominator of \(B_{\lambda}/\lambda\) equals \[\prod_{p-1|\lambda}p^{\operatorname{val}_{p}(\lambda)+1}.\qed\] _Remark 3.11_.: Making considerations about the \(2\)-adic valuation of the generalized Bernoulli numbers, the result also holds letting \[S_{\lambda}=\begin{cases}1/2^{\lambda-2},&\text{for even $\lambda$},\\ 1/2^{\lambda-1},&\text{for odd $\lambda$},\end{cases}\qquad S^{\prime}_{ \lambda,4}=\begin{cases}1/2,&\text{for $\lambda=2$},\\ 1/2^{\lambda+1},&\text{for even $\lambda>2$},\\ 1/2^{\lambda-1},&\text{for odd $\lambda$},\end{cases}\] and \(S^{\prime}_{\lambda,8}=1/2^{\lambda}\). Furthermore, the normalized Eisenstein series according to these sharper constants seem to be primitive. **Proposition 3.12**.: _Let \(k\geq 3\) be odd. Then_ \[\dim\mathcal{E}_{k/2}(16)=\begin{cases}4,&k=3,\\ 6,&k>3.\end{cases}\] _Furthermore,_ \[\mathcal{E}_{k/2}(16)=\\ \begin{cases}\langle E_{3,4},E_{3,4}|V(4),E_{3,8},E_{3,4}|U(2)|V(2) \rangle,&k=3,\\ \Big{\langle}E_{k,4},E_{k,4}|V(4),E^{\prime}_{k,4},E^{\prime}_{k,4}|V(4),E_{k,8 },E^{\prime}_{k,8}|V(2)\Big{\rangle},&k>3.\end{cases} \tag{3.13}\] Proof.: The first claim follows from [10]. Let \(N\in\{4,8\}\). In [12, Thm. 
7.6] it is proved that \(E_{k,N}\in\mathcal{E}_{k/2}(N)\). Considering the codomains of the operators \(W(N),V(2),V(4),U(2)\) (see Section 2) we get that \(\mathcal{E}_{k/2}(16)\) contains the subspace on the right hand side of (3.13), for \(k\geq 3\). We now prove that the generators on the right hand side of (3.13) are linearly independent, using the formulas for their coefficients given by Proposition 3.5. Assume first that \(k\equiv 5\pmod{4}\). Then \[E_{k,4} =1+a_{k,4}(1)q+a_{k,4}(2)q^{2}+a_{k,4}(3)q^{3}+a_{k,4}(4)q^{4}+a_ {k,4}(5)q^{5}+O(q^{6}),\] \[E_{k,4}|V(4) =1+a_{k,4}(1)q^{4}+O(q^{6}),\] \[E^{\prime}_{k,4} =a^{\prime}_{k,4}(1)q+a^{\prime}_{k,4}(2)q^{2}+a^{\prime}_{k,4}(3 )q^{3}+a^{\prime}_{k,4}(4)q^{4}+a^{\prime}_{k,4}(5)q^{5}+O(q^{6}),\] \[E^{\prime}_{k,4}|V(4) =a^{\prime}_{k,4}(1)q^{4}+O(q^{6}),\] \[E_{k,8} =1+a_{k,8}(1)q+a_{k,8}(4)q^{4}+a_{k,8}(5)q^{5}+O(q^{6}),\] \[E^{\prime}_{k,8}|V(2) =a^{\prime}_{k,8}(1)q^{2}+a^{\prime}_{k,8}(2)q^{4}+O(q^{6}).\] Then, since \(a^{\prime}_{k,4}(1)\,a^{\prime}_{k,8}(1)\neq 0\) (see Corollary 3.7), it suffices to prove that \[\begin{pmatrix}a_{k,4}(1)&a_{k,4}(3)&a_{k,4}(5)\\ a^{\prime}_{k,4}(1)&a^{\prime}_{k,4}(3)&a^{\prime}_{k,4}(5)\\ a_{k,8}(1)&0&a_{k,8}(5)\end{pmatrix}\] is non-singular. We have that \(\beta_{\lambda,\omega}(n)=1\) for squarefree \(n\). Furthermore, we have that \(\gamma_{k,4}(1)>0,\gamma_{k,4}(3)<0,\gamma_{k,4}(5)>0\) and that \(\gamma_{k,8}(1)>0,\gamma_{k,8}(5)<0\). Then by Lemma 3.6 the signs of the matrix above are given by \[\left(\frac{2\lambda+1}{2}\right)\begin{pmatrix}+&-&+\\ +&+&+\\ +&0&-\end{pmatrix},\] hence its determinant is non-zero. The case \(k\equiv 7\pmod{4},k\neq 3\), can be proved similarly, using the \(7\)-th coefficient instead of the \(5\)-th coefficient in the matrix above. Finally, for \(k=3\) using Proposition 3.5 we get that \[E_{3,4} =1+6q+12q^{2}+8q^{3}+O(q^{4}),\] \[E_{3,4}|V(4) =1+O(q^{4}),\] \[E_{3,8} =1+8q^{3}+O(q^{4}),\] \[E_{3,4}|V(2)|U(2) =1+12q^{2}+O(q^{4}),\] which completes the proof. Proof of Proposition 3.1.: Denote by \(\mathcal{V}\subseteq\mathcal{E}_{k/2}(16)\) the \(\sigma_{k-2}(\ell)\)-eigenspace for \(T(\ell^{2})\). We claim first that \(E_{k,4},E_{k,8}\in\mathcal{V}\). For every \(n\) we see easily from the definitions and Lemma 3.6 that \[\omega_{\ell^{2}n} =\omega_{n},\] \[\alpha_{\lambda,\ell^{2}n} =\ell^{-1}\,\alpha_{\lambda,n},\] \[\gamma_{k,N}\left(\ell^{2}n\right) =\gamma_{k,N}(n).\] Then the claim follows directly from (2.1), using the equalities above and the transformation formulas for computing \(\beta_{\lambda,\omega_{\ell^{2}n}}(\ell^{2}n)\) in terms of \(\beta_{\lambda,\omega_{n}}(n)\) given in [22, p. 209]; we remark that though Wang and Pei are considering \(k>3\) and level \(4D\) with \(D\) odd and squarefree, these particular computations hold in our setting. The result follows, then, by noting that the remaining generators for \(\mathcal{E}_{k/2}(16)\) given in Proposition 3.12 belong to \(\mathcal{V}\), since by [22, Thm. 5.19] the Hecke operators \(T(\ell^{2})\) with \(\ell\neq 2\) commute with the operators \(W(N)\), and by (2.1) they commute with \(U(2),V(2),V(4)\). ## 4. Proofs This section is devoted to give the proofs of our main results, namely Theorems 1.1, 1.2, 1.3 and 1.4. We first state the following result for obtaining congruences for coefficients of (modulo \(m\)) eigenforms of half-integral weight used by [1, 19, 20, 21], among others. 
**Proposition 4.1**.: _Let \(g=\sum_{n\geq 0}a(n)q^{n}\in\mathcal{M}_{k/2}(N)\cap\mathbb{Z}\llbracket q\rrbracket\), and let \(\ell,m\) be primes such that \(g|T(\ell^{2})\equiv\lambda_{m,\ell}\,g\pmod{m\mathbb{Z}\llbracket q \rrbracket}\)._ 1. _If_ \(\lambda_{m,\ell}\equiv 0\pmod{m}\)_, then_ \(a(\ell^{3}n)\equiv 0\pmod{m}\) _for every_ \(n\) _prime to_ \(\ell\)_._ 2. _If there exists_ \(\epsilon\in\{\pm 1\}\) _such that_ \[\lambda_{m,\ell}\equiv\epsilon\,\ell^{\frac{k-3}{2}}\pmod{m},\] _then_ \(a(\ell^{2}n)\equiv 0\pmod{m}\) _for every_ \(n\) _prime to_ \(\ell\) _such that_ \(\omega_{n}(\ell)=\epsilon\)_._ Proof.: Both claims follow directly from (2.1); for part (a), by replacing \(n\) by \(\ell n\), with \(n\) prime to \(\ell\). The goal of the following series of results is to prove that for prime \(m\) the numbers \(\overline{p}(mn)\) are congruent to the Fourier coefficients of a holomorphic modular form. We start with two preliminary results. **Lemma 4.2**.: _Let \(f\) and \(g\) be power series, and let \(m\geq 1\). Then_ \[\left(\left(f|V(m)\cdot g\right)|U(m)=f\cdot\left(g|U(m)\right).\right.\] Proof.: Let \(f=\sum_{n=0}^{\infty}a(n)q^{n}\) and \(g=\sum_{n=0}^{\infty}b(n)q^{n}\). Denote \[\widetilde{a}(h) =\begin{cases}a(n),&\text{if }h=nm,\\ 0,&\text{otherwise},\end{cases}\] \[\widetilde{c}(h) =\sum_{k=0}^{h}\widetilde{a}(k)b(h-k).\] Now, note that \[\widetilde{c}(hm)=\sum_{k=0}^{hm}\widetilde{a}(k)b(hm-k)=\sum_{k=0}^{h}a(k)b(hm-km).\] Then we have \[((f|V(m))\cdot g)|U(m) =\left(\left(\sum_{h=0}^{\infty}\widetilde{a}(h)q^{h}\right) \left(\sum_{n=0}^{\infty}b(n)q^{n}\right)\right)|U(m)\] \[=\left(\sum_{h=0}^{\infty}\widetilde{c}(h)q^{h}\right)|U(m)=\sum_ {h=0}^{\infty}\widetilde{c}(hm)q^{h}\] \[=\sum_{h=0}^{\infty}\left(\sum_{k=0}^{h}a(k)b(mh-k)\right)q^{h}=f \cdot(g|U(m))\,.\qed\] **Lemma 4.3**.: _Let \(f\) be an eta-quotient. Then for every prime \(m\geq 1\) we have that_ \[f|V(m)\equiv f^{m}\pmod{m\mathbb{Z}[\![q]\!]}.\] Proof.: Write \(f\) as in (2.3). Since both operators \(V(m)\) and \(g\mapsto g^{m}\) are multiplicative, it suffices to verify the congruence for every factor \(g\) of \(f\). For \(g=q^{\frac{s_{X}}{24}}\) both operators clearly agree, and for \(g=1-q^{\delta n}\) the congruence follows from the fact that \((r+s)^{m}\equiv r^{m}+s^{m}\pmod{m\mathbb{Z}[\![q]\!]}\) for every \(r,s\in\mathbb{Z}[\![q]\!]\). In what follows we consider the eta-quotient related to the generating function for \(\overline{p}(n)\) (see [1, (1.1)]). Namely, we let \[f=\frac{\eta(2z)}{\eta^{2}(z)}=\sum_{n\geq 0}\overline{p}(n)q^{n}, \tag{4.4}\] We remark that \(f\) is not holomorphic: by (2.5), it has a simple pole at \(s=0\). **Lemma 4.5**.: _Denote \(F=1/f\). Then \(F\in\mathcal{M}_{1/2}(16)\)._ Proof.: By Proposition 2.4 we have that \(F\) is a weakly holomorphic modular form of level \(16\) and weight \(1/2\), with trivial character. Its possible singularities lie at the cusps \(s\) for \(\Gamma_{0}(16)\), namely \(s\in\{0,1/8,1/4,1/2,3/4,\infty\}\). Then the claim follows from (2.5), which shows that the order of vanishing of \(F\) at each such \(s\) is nonnegative (moreover, it is positive only for \(s=0\)). For the following two results we let \(0<a_{m}<8\) be such that \(a_{m}\equiv-m\pmod{8}\), and we denote \[r_{m}=\frac{1}{2}\left((16-a_{m})(m-1)-1\right).\] **Proposition 4.6**.: _Let \(m\geq 3\) be a prime. Then there exists \(h^{\prime}_{m}\in\mathcal{M}_{r_{m}}(2)\cap\mathbb{Z}[\![q]\!]\) such that_ \[f|U(m)\equiv F^{a_{m}}h^{\prime}_{m}\pmod{m\mathbb{Z}[\![q]\!]}. 
\tag{4.7}\] Proof.: Recall the eta-quotient \(\Delta_{2}(z)=\eta^{8}(z)\eta^{8}(2z)\). We consider the eta-quotients \[\alpha=\Delta_{2}F^{-a_{m}},\qquad\beta=f\alpha^{m}.\] By (2.5) we have that \(\beta\in\mathcal{S}_{r_{m}+8}(2)\). Since Hecke operators preserve cuspforms, by Proposition 2.6 there exists \(h^{\prime}_{m}\in\mathcal{M}_{r_{m}}(2)\) such that \[\beta|T(m)=\Delta_{2}h^{\prime}_{m}.\] Note that since \(\beta\in q\mathbb{Z}\llbracket q\rrbracket\) and \(\Delta_{2}\in q\mathbb{Z}\llbracket q\rrbracket^{\times}\) we have that \(h^{\prime}_{m}\in\mathbb{Z}\llbracket q\rrbracket\). On the other hand, using Lemmas 4.2 and 4.3 we have that \[\beta|U(m)\equiv(f\cdot(\alpha|V(m)))|U(m)\equiv f|U(m)\cdot\alpha\pmod{m \mathbb{Z}\llbracket q\rrbracket}.\] Since over integral weights \(T(m)\) agrees with \(U(m)\) modulo \(m\mathbb{Z}\llbracket q\rrbracket\), the above congruences give that \[f|U(m)\cdot\alpha\equiv\Delta_{2}h^{\prime}_{m}\pmod{m\mathbb{Z}\llbracket q \rrbracket},\] which, since \(\alpha,\Delta_{2}\in q\mathbb{Z}\llbracket q\rrbracket^{\times}\), concludes the proof. The above result shows that \(f|U(m)\) is congruent to a holomorphic modular form. We now show that, at least for small values of \(m\), the weight of the latter can be improved. **Proposition 4.8**.: _Let \(3\leq m\leq 19\) be a prime. Let \(h_{m}\) be the corresponding form given in Table 4. and let \(g_{m}=F^{a_{m}}h_{m}\). Then \(g_{m}\in\mathcal{M}_{k_{m}/2}(16)\cap\mathbb{Z}\llbracket q\rrbracket\), and_ \[f|U(m)\equiv g_{m}\pmod{m\mathbb{Z}\llbracket q\rrbracket}. \tag{4.9}\] Proof.: The first claim follows from Lemma 4.5, since for every \(m\), we have that \(h_{m}\in\mathcal{M}_{\frac{k_{m}}{2}-a_{m}}(2)\). We now prove (4.9). Let \(e=(15-a_{m})/2\). Since \(E_{m-1}\equiv 1\pmod{m\mathbb{Z}\llbracket q\rrbracket}\), it suffices to prove that the form \(h^{\prime}_{m}\) from the above proposition satisfies that \[h^{\prime}_{m}\equiv h_{m}E^{e}_{m-1}\pmod{m\mathbb{Z}\llbracket q\rrbracket}.\] With the above choice of \(e\), both forms in this congruence belong to \(\mathcal{M}_{r_{m}}(2)\cap\mathbb{Z}\llbracket q\rrbracket\). Thus, by Proposition 2.2 and (4.7) it suffices to prove that the \(n\)-th coefficients of \(f|U(m)\cdot f^{a_{m}}\) and \(h_{m}\) agree, modulo \(m\), up to \(n\) equal to \(n_{0}=\left\lfloor\frac{r_{m}}{36}\right\rfloor\). In each case, this can be proved by computing these numbers explicitly. _Remark 4.10_.: In fact, using the techniques from the above proof and Proposition 2.8, we have found forms \(h_{m}\) as in Proposition 4.8 for every prime \(m<1000\). Proposition 4.6 implies the following refinement of [14, Thm 1.1] for the case of the overpartition function. **Theorem 4.11**.: _Let \(m\) be an odd prime. Then a positive proportion of the primes \(\ell\equiv-1\pmod{16m}\) have the property that_ \[\overline{p}\left(m\ell^{3}n\right)\equiv 0\pmod{m}.\] _for every \(n\) coprime to \(m\ell\)._ \begin{table} \begin{tabular}{c l} \(m\) & \(h_{m}\) \\ \hline 3 & 1 \\ 5 & 1 \\ 7 & \(D_{2}\) \\ 11 & \(D_{2}\) \\ 13 & \(E_{4}\) \\ 17 & \(13D_{2}^{2}+5E_{4}\) \\ 19 & \(11D_{2}^{3}+9D_{2}E_{4}\) \\ \end{tabular} \end{table} Table 4. Holomorphic modular forms used in Proposition 4.8. Proof.: The proof of [10, Thm. 1.1] holds in our setting with no changes. The key fact which implies our stronger assertion is that, since we proved in Proposition 4.6 that \(f|U(m)\) is congruent to a holomorphic modular form, the claim from [10, Prop. 
3.5] holds for \(m\) rather than for a sufficiently large power of \(m\). From here on, given primes \(m,\ell\), we denote \[\lambda_{m,\ell}=1+\ell^{k_{m}-2},\] the eigenvalue of \(T(\ell^{2})\) acting on \(\mathcal{E}_{k_{m}/2}(16)\) (see Proposition 3.1). **Proposition 4.12**.: _Let \(3\leq m\leq 19\) be a prime, and let \(g_{m}\) be the form given in Proposition 4.8._ 1. _If_ \(3\leq m\leq 11\) _then_ \(g_{m}|T(\ell^{2})\equiv\lambda_{m,\ell}g_{m}\pmod{m\mathbb{Z}[\![q]\!]}\) _for every prime_ \(\ell>2\)_._ 2. _If_ \(13\leq m\leq 19\) _then_ \(g_{m}|T(\ell^{2})\equiv\lambda_{m,\ell}g_{m}\pmod{m\mathbb{Z}[\![q]\!]}\) _for every prime_ \(\ell\) _in Table_ 5_._ Proof.: To prove part (a) we can use Proposition 3.1, once we verify that for \(3\leq m\leq 11\) we have that \(g_{m}\in\mathcal{E}_{k_{m}/2}(16)+m\mathbb{Z}[\![q]\!]\). The latter claim, in the case \(3\leq m\leq 7\) follows from the fact that \(\mathcal{S}_{k_{m}/2}(16)=\{0\}\). In the case \(m=11\), since for \(N\in\{4,8\}\) by Proposition 3.9 we have that \[2^{3}17\cdot E_{9,N},\;2^{4}17\cdot E_{9,4}^{\prime},\;2^{5}17\cdot E_{9,8}^{ \prime}\quad\in\mathbb{Z}[\![q]\!],\] we can use Proposition 2.2 to obtain that \[g_{11}(z)\equiv 9E_{9,4}+4E_{9,4}|V(4)+7E_{9,4}^{\prime}+4E_{9,4}^{\prime}|V( 4)+7E_{9,8}^{\prime}|V(2)\pmod{11\,\mathbb{Z}[\![q]\!]}. \tag{4.13}\] For proving part (b), by Proposition 2.2 it suffices to prove that the \(n\)-th coefficients of \(g_{m}|T(\ell^{2})\) and \(\lambda_{m,\ell}\,g_{m}\) agree, modulo \(m\), for \(n\) up to \[\tfrac{k_{m}}{24}\cdot[\operatorname{SL}_{2}(\mathbb{Z}):\Gamma_{0}(16)]=m-2.\] Moreover, by (4.9) it suffices to prove that \[\left(f|U(m)|T(\ell^{2})\right)(n)\equiv\lambda_{m,\ell}\,(f|U(m))(n)\pmod{ m},\quad 1\leq n\leq m-2,\] which in each case can be proved by computing these numbers explicitly. _Remark 4.14_.: The proof of Proposition 4.8 involves computing \(\overline{p}(mn)\) modulo \(m\) for small values of \(n\). This can be accomplished easily by expanding the infinite product (4.4) defining \(f\). The proof of Proposition 4.12 involves computing \(\overline{p}(mn)\) modulo \(m\) for large values of \(n\) (e.g. \(n=m(m-2)\ell^{2}\) with large \(\ell\)); in this case we resort to the efficient method provided by [1]. \begin{table} \begin{tabular}{l|l} \(m\) & \(\ell\) \\ \hline \(13\) & \(431,1811,1871,1949,2207,2459,3301,4001,4079,4289,4513,4799,4931\) \\ \(17\) & \(1999,2207,2243,4759\) \\ \(19\) & \(151,1091,2207,2659,3989\) \\ \end{tabular} \end{table} Table 5. Primes \(\ell\) giving congruences modulo \(m\). See Proposition 4.12. The proofs of our main results now follow easily. Proof of Theorems 1.1 and 1.3.: They follow using Proposition 4.1 (a) and Proposition 4.12. Proof of Theorems 1.2 and 1.4.: They follow using Proposition 4.1 (b) and Proposition 4.12; the eigenvalues \(\lambda_{m,\ell}\) in Table 5 satisfy the hypothesis of Proposition 4.1 (b), namely they are such that \[\lambda_{m,\ell}\equiv\epsilon_{m,\ell}\,\ell^{\frac{k_{m}-3}{2}}\pmod{m},\] where \(\epsilon_{m,\ell}\) is as in Table 2. _Remark 4.15_.: We found that \(g_{m}\) is, modulo \(m\mathbb{Z}[\![q]\!]\), an eigenfunction of \(T(\ell^{2})\) for more primes \(\ell\) than those appearing in Table 5, but the eigenvalues are not useful for our purposes, since they do not satisfy any of the hypotheses of Proposition 4.1. Moreover, the primes given are all the primes \(\ell<5000\) giving congruences. For \(m=23\) we found that \(\ell=5303,8783\) yield eigenvalues, but they do not give congruences. 
For larger \(m\) we have not been able to find eigenvalues.
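As a concrete companion to Remark 4.14, the following minimal sketch (ours, not code from the paper) expands the product form of (4.4), namely \(\prod_{k\geq 1}(1+q^{k})/(1-q^{k})\), to read off \(\overline{p}(n)\) and the coefficients of \(f|U(m)\) modulo \(m\); the function names are ours, and for the large indices needed in Proposition 4.12 the paper instead relies on the efficient method of [1].

```python
# Brute-force expansion of (4.4): f = prod_{k>=1} (1+q^k)/(1-q^k) = sum p̄(n) q^n.
def overpartition_coefficients(nmax):
    """Return [p̄(0), ..., p̄(nmax)]."""
    c = [0] * (nmax + 1)
    c[0] = 1
    for k in range(1, nmax + 1):
        for n in range(k, nmax + 1):        # multiply by 1/(1 - q^k)
            c[n] += c[n - k]
        for n in range(nmax, k - 1, -1):    # multiply by (1 + q^k)
            c[n] += c[n - k]
    return c

def f_U_mod(m, nmax):
    """Coefficients of f|U(m) reduced mod m, i.e. p̄(m*n) mod m for 0 <= n <= nmax."""
    pbar = overpartition_coefficients(m * nmax)
    return [pbar[m * n] % m for n in range(nmax + 1)]

if __name__ == "__main__":
    print(overpartition_coefficients(8))  # [1, 2, 4, 8, 14, 24, 40, 64, 100]
    print(f_U_mod(5, 20))                 # first coefficients of f|U(5) mod 5
```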
2305.00759
Multiplexing-based control of wavefront propagation: the interplay of inter-layer coupling, asymmetry and noise
We show how multiplexing influences propagating fronts in multilayer networks of coupled bistable oscillators. Using numerical simulation, we investigate both deterministic and noise-sustained propagation. In particular, we demonstrate that the multiplexing allows to reduce the intra-layer dynamics to a common regime where the front propagation speed in all the interacting layers attains the same fixed value. In the presence of noise the dynamics is more complicated and is characterized by the ability of the system to adjust to the common propagation speed for varying the multiplexing strength. In addition, we find that the noise-induced stabilization of wavefront propagation in multilayer networks allows to obtain less pronounced deviations of the wavefront compared to the stabilization achieved in the isolated layer. Finally, we demonstrate that the reduction of the wavefront deviations can be enhanced by increasing the number of interacting layers.
Vladimir V. Semenov, Sarika Jalan, Anna Zakharova
2023-05-01T10:29:35Z
http://arxiv.org/abs/2305.00759v1
Multiplexing-based control of wavefront propagation: the interplay of inter-layer coupling, asymmetry and noise ###### Abstract We show how multiplexing influences propagating fronts in multilayer networks of coupled bistable oscillators. Using numerical simulation, we investigate both deterministic and noise-sustained propagation. In particular, we demonstrate that the multiplexing allows to reduce the intra-layer dynamics to a common regime where the front propagation speed in all the interacting layers attains the same fixed value. In the presence of noise the dynamics is more complicated and is characterized by the ability of the system to adjust to the common propagation speed for varying the multiplexing strength. In addition, we find that the noise-induced stabilization of wavefront propagation in multilayer networks allows to obtain less pronounced deviations of the wavefront compared to the stabilization achieved in the isolated layer. Finally, we demonstrate that the reduction of the wavefront deviations can be enhanced by increasing the number of interacting layers. + Footnote †: preprint: Multiplexing-based control of wavefront propagation ## I Introduction A broad variety of spatially-extended dynamical systems can experience a non-equilibrium transition manifested though formation and propagation of domains and waves [13; 18; 26]. The simplest types of dynamical systems exhibiting such propagation are ensembles and media, where bistability results from the coexistence of two steady states in the phase space of individual oscillators. In such a case, a system evolves from its initial state such that two kinds of domains corresponding to quiescent steady state regimes are formed in space. After that, the domains as well as the boundaries between them called 'fronts' or 'wavefronts' can propagate. The wavefront propagation in bistable media plays an important role in terms of chemistry (for instance, see Schlogl model developed to describe an autocatalytic reaction mechanism [27; 32; 33]), flame propagation theory [48], electronics [34], to name just a few. Domain growth and wavefront propagation observed in 2D- and 3D-space is often referred to as 'coarsening'. It occurs in the same way when compared to classical front propagation in 1D-space, but with respect to the shape of domains. Coarsening represents a fundamental phenomenon and unites a wide spectrum of processes studied in the context of physics of liquid crystals [46] and magnetism [3; 5; 7; 8], physics and chemistry of materials [14; 16; 49; 50], laser physics [15; 17; 21], electronics [36] and animal population statistics [9]. Besides spatially-extended systems [3], coarsening can be exhibited by single time-delay oscillators prepared in an inhomogeneous state characterized by coexisting equilibria. This effect becomes clearly identified when purely temporal dynamics of time-delay oscillators is mapped on space by means of virtual space-time representation [15; 21; 36]. In addition to the occurrence in deterministic systems, propagating fronts and coarsening processes can emerge in stochastic systems as phenomena accompanying the noise-induced phase transitions [4; 6]. The ability to demonstrate propagating fronts and coarsening is connected directly to the symmetry properties of bistable systems and with the spatial interaction intensity (for instance, diffusion coefficient in reaction-diffusion models or coupling strength in ensembles of interacting oscillators). 
To control the front propagation speed and direction, one can vary parameters responsible for the symmetry of the local dynamics as well as adjust the interaction strength [27]. In addition, one can apply stochastic forcing for this purpose. In particular, it is known that multiplicative noise influences the front dynamics [10; 13; 25]. In the present work, we introduce a new scheme for controlling the front propagation, which can be implemented in multilayer networks of bistable oscillators. We show that connecting a one-layer network to another one-layer network through coupling between replica nodes, i.e., multiplexing, provides a tool for controlling the front propagation. Multiplexing-based schemes have been applied for controlling a wide spectrum of phenomena. For instance, such an approach has been reported for deterministic networks with static inter-layer topology in the context of chimera [47] and solitary [35] states as well as for controlling transitions between two network states [28; 31]. Moreover, it has been demonstrated that topological asymmetry in multilayer networks can stabilize chaotic dynamics, manifested by the establishment of stable periodic orbits and equilibria (so-called asymmetry-induced order) [24]. Furthermore, the significant impact of the dynamic inter-layer topology allows one to achieve inter-layer synchronization at lower inter-layer coupling strength [11]. Multiplexing can have a different impact on the dynamics in stochastic multilayer networks. In particular, multiplexing noise (noisy modulation of the inter-layer coupling strength) can be applied for the regulation of the inter-layer synchronization of spatio-temporal patterns in multilayer networks [30; 44]. A multiplexing-based approach [23; 37; 41] has been successfully applied to control the phenomenon of coherence resonance [2; 22; 29] observed in multilayer networks of excitable oscillators as well as the phenomenon of stochastic resonance [1; 12] exhibited by a multilayer network of bistable oscillators. For both cases, the multiplexing-based control allows one to enhance or suppress the considered effects. Therefore, multiplexing can play both constructive and destructive roles for the resonant stochastic phenomena associated with noise-induced regularity (coherence) of the stochastic dynamics. Here, we highlight a new facet of multiplexing in its impact on the deterministic and stochastic dynamics. The constructive role of multiplexing which we demonstrate here is twofold: varying the inter-layer coupling strength, one can (i) change the front propagation speed and direction, and (ii) minimize deviations of noise-driven fronts. The presented results can be potentially applied for controlling stochastic multilayer network dynamics actively studied in the context of deep learning [38; 39; 40]. Indeed, the property of bistability can be easily achieved in various kinds of artificial neural networks including bistable neural networks with tanh-nonlinearity [42; 45]. Thus, one can expect the occurrence of front propagation and coarsening in such systems, which can potentially influence neural network characteristics and performance. For this reason, we expect that the presented results would be interesting for experts in artificial intelligence and machine learning besides specialists in nonlinear dynamics and theory of stochastic processes. 
## II Single-layer dynamics Before studying the impact of multiplexing, we discuss the case of a single ring of locally coupled overdamped bistable oscillators [Fig. 1 (a)] in order to compare the isolated- and coupled-layer dynamics. The system equations take the following form: \[\frac{dx_{i}}{dt}=-x_{i}(x_{i}-a_{x})(x_{i}+b_{x})+\frac{\sigma_{x}}{2}\sum\limits_{j=i-1}^{i+1}(x_{j}-x_{i}), \tag{1}\] where \(x_{i}\) are the dynamical variables, \(i=1,2,...,N\) with \(N\) being the total number of elements in the layer. In this study, all the network layers consist of \(N=200\) elements. The strength of the coupling within the layer (intra-layer coupling) is given by \(\sigma_{x}\). Parameters \(a_{x}\) and \(b_{x}\) determine the dynamics of individual network elements. They define whether the individual element nonlinearity is symmetric (\(a_{x}=b_{x}\)) or asymmetric (\(a_{x}\neq b_{x}\)). In the current study, we assume that all the elements are in the bistable regime (\(a_{x}\) and \(b_{x}\) are positive). We perform our investigations by means of numerical simulations. In more detail, we integrate the differential equations numerically using the Heun method [20] with the time step \(\Delta t=0.001\) and the total integration time \(t_{\rm total}=10^{4}\). These numerical integration method parameters are chosen for modeling of all the cases discussed in the paper. In fact, ensemble Eq. (1) represents the reaction-diffusion equation \(\frac{dx}{dt}=-x(x-a_{x})(x+b_{x})+k\nabla^{2}x\) rewritten for discretized space by means of the finite difference method (this technique is widely used for the modeling of reaction-diffusion systems [19; 43]). It is well-known that bistable reaction-diffusion systems exhibit front propagation in the presence of asymmetry (\(a_{x}\neq b_{x}\) in ensemble Eq. (1)) [27]. As mentioned in Sec. I, the propagation can be stopped (stabilized) and then made inverse by using parametric noise. The model Eq. (1) demonstrates the same effect illustrated in Fig. 1 (b)-(d) for the parameter values \(a_{x}=1.0\), \(b_{x}=0.9\), \(\sigma_{x}=1.0\) and the initial conditions \(x_{i}(t=0)=-b_{x}\) for \(i\in[50:150]\) and \(x_{i}(t=0)=a_{x}\) elsewhere (this kind of initial conditions is used throughout the paper). In the absence of noise, one observes expansion of the domain corresponding to the state \(x_{i}(t)=a_{x}\) (red domain in Fig. 1 (b)) which invades the entire available space. However, the front propagation can be slowed down, stopped [Fig. 1 (c)] and reversed [Fig. 1 (d)] in the presence of parametric noise modulating the parameter \(b_{x}\) by increasing the noise level. Here, we introduce parametric noise modulating parameter \(b_{x}\) of each oscillator \(x_{i}\) in the form \(b_{x}=0.9+\sqrt{2D}n_{i}(t)\), where \(\sqrt{2D}n_{i}(t)\in\mathbb{R}\) is Gaussian white noise with intensity \(D\), i.e., \(\langle n_{i}(t)\rangle=0\) and \(\langle n_{i}(t)n_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime})\), \(\forall i,j\). Figure 1: Stochastic control of front propagation (coarsening) in a single-layer network (Eq. (1)) in the presence of parametric noise in each oscillator: \(b_{x}=0.9+\sqrt{2D}n_{i}(t)\). (a) Schematic representation of a one-layer network (layer \(x\)); (b)-(d) Spatio-temporal dynamics for increasing noise intensity, \(D\). System parameters: \(a_{x}=1\), \(\sigma_{x}=1\). 
In such a case, increasing noise speeds up the front propagation and the state \(x_{i}=a_{x}\) invades the available space faster than in the deterministic case. One aspect of the noise-based control is important to note here. It is shown in Fig. 1 that one can control the mean front propagation speed by applying multiplicative noise and pass through the stabilized state corresponding to zero speed. However, the stochastic stabilization of the front propagation is achieved at a high level of noise, and the deviations of the instantaneous front position become evident and significant (see Fig. 1 (c)). As we demonstrate in Sec. 3, such deviations can be minimized in a multilayer network due to the action of multiplexing. ## III Multilayer network Next, we consider a two-layer multiplex network depicted in Fig. 2 (a), where each layer represents a ring of locally coupled bistable oscillators. The oscillators in the first layer \(x_{i}\) are not under direct action of noise, while the second-layer oscillators \(y_{i}\) contain statistically independent sources of multiplicative white Gaussian noise. The system equations take the form \[\begin{split}&\frac{dx_{i}}{dt}=-x_{i}(x_{i}-a_{x})(x_{i}+b_{x})\\ &+\frac{\sigma_{x}}{2}\sum_{j=i-1}^{i+1}(x_{j}-x_{i})+\sigma(y_{i}-x_{i}),\\ &\frac{dy_{i}}{dt}=-y_{i}(y_{i}-a_{y})(y_{i}+b_{y}+\sqrt{2D}n_{i}(t))\\ &+\frac{\sigma_{y}}{2}\sum_{j=i-1}^{i+1}(y_{j}-y_{i})+\sigma(x_{i}-y_{i}).\end{split} \tag{2}\] The strength of the coupling within the layer (intra-layer coupling) is given by \(\sigma_{x}\) and \(\sigma_{y}\) for the first and second layer, respectively. The coupling between the layers (inter-layer coupling) is bidirectional, diffusive and its strength is characterized by parameter \(\sigma\). We consider a multiplex network, where the layers contain the same number of nodes and the inter-layer links are allowed only for replica nodes, i.e., there is a one-to-one correspondence between the nodes in different layers. ### Deterministic model First, we consider the deterministic system (see Eqs. (2) at \(D=0\)). To reveal and visualize the action of multiplexing, we study two interacting layers. The first layer consists of asymmetric (\(a_{x}=1.0\), \(b_{x}=0.9\)) oscillators, while the second-layer elements are symmetric (\(a_{y}=1.0\), \(b_{y}=1.0\)). The intra-layer coupling strength is fixed. Figure 2: Multiplexing-based control of front propagation (coarsening) in a two-layer multiplex network (Eqs. (2)) in the absence of noise. (a) Schematic representation of the network (layers \(x\) and \(y\)); (b) Dependence of the front propagation speed on the multiplexing strength in layer \(x\) (solid curves) and \(y\) (dashed curves) for intra-layer coupling \(\sigma_{x,y}=1\) (red curves) and \(\sigma_{x,y}=2\) (blue curves); (c)-(e) Spatio-temporal dynamics of layer \(x\) (upper panels) and \(y\) (lower panels) for fixed intra-layer coupling (\(\sigma_{x}=\sigma_{y}=2\)) and increasing multiplexing strength: \(\sigma=10^{-3}\) (panels (c)), \(\sigma=8\times 10^{-3}\) (panels (d)), \(\sigma=5\times 10^{-2}\) (panels (e)). Other parameters: \(a_{x}=1.0\), \(b_{x}=0.9\), \(a_{y}=1.0\), \(b_{y}=1.0\). The initial conditions for the first layer are the same as in Sec. 2: \(x_{i}(t=0)=-b_{x}\) for \(i\in[50:150]\) and \(x_{i}(t=0)=a_{x}\) elsewhere. The initial conditions for the second layer are similar: \(y_{i}(t=0)=-b_{y}\) for \(i\in[50:150]\) and \(y_{i}(t=0)=a_{y}\) elsewhere. 
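As a reference for how such simulations can be set up, the following sketch (ours, not the authors' code) integrates Eqs. (2) with the Heun scheme for the deterministic case \(D=0\), using the initial conditions stated above and the parameter values of Fig. 2 (e); the run length is shortened and the parametric noise term is omitted for brevity, and all variable names are illustrative.

```python
import numpy as np

N, dt, steps = 200, 1e-3, 200_000
ax, bx, ay, by = 1.0, 0.9, 1.0, 1.0          # asymmetric layer x, symmetric layer y
sx, sy, sigma = 2.0, 2.0, 5e-2               # intra-layer and inter-layer coupling

def rhs(x, y):
    lap = lambda u: 0.5 * (np.roll(u, 1) + np.roll(u, -1) - 2 * u)  # ring coupling (sigma_x/2 sum)
    fx = -x * (x - ax) * (x + bx) + sx * lap(x) + sigma * (y - x)
    fy = -y * (y - ay) * (y + by) + sy * lap(y) + sigma * (x - y)
    return fx, fy

x = np.full(N, ax); x[50:150] = -bx          # initial conditions of the text
y = np.full(N, ay); y[50:150] = -by

for _ in range(steps):                        # Heun (predictor-corrector) step
    fx, fy = rhs(x, y)
    xp, yp = x + dt * fx, y + dt * fy
    fxp, fyp = rhs(xp, yp)
    x = x + 0.5 * dt * (fx + fxp)
    y = y + 0.5 * dt * (fy + fyp)

# Crude indicator of the domain invaded by the state a_x in layer x;
# tracking this count over time gives the front speed v = Δi/Δt.
print("sites of layer x near a_x:", int(np.sum(np.isclose(x, ax, atol=0.2))))
```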
When we vary the multiplexing strength, the numerical simulations carried out for each \(\sigma\) value start from the same initial conditions mentioned above. To quantitatively describe the front propagation, we introduce the propagation speed of the left front moving from the left to the right: \(v=\Delta i/\Delta t\). Here, \(\Delta i\) represents the number of oscillators where the state \(a_{x}\) or \(a_{y}\) spreads over time \(\Delta t\) (illustrated in Fig. 2 (c)). This approach is based on the characterization of the front propagation in reaction-diffusion systems modeled in discretized space, where the front propagation distance is measured as a number of passed nodes multiplied by the space step, \(s=\Delta i\times\)step, and the corresponding speed takes the form \(v=s/\Delta t\). The increase of the multiplexing strength in the model Eqs. (2) causes transformations illustrated in Fig. 2 (b) as the dependence of front propagation speed in the first (solid curves in Fig. 2 (b)) and second (dashed curves in Fig. 2 (b)) layer for two different values of the intra-layer coupling strength: \(\sigma_{x}=\sigma_{y}=1\) and \(\sigma_{x}=\sigma_{y}=2\). In both cases, the front propagation speed in layers \(x\) and \(y\) tends to the same value when the strength of multiplexing is increased. However, increasing the multiplexing strength allows one to stop the front propagation in the layer of coupled asymmetric oscillators for weak intra-layer coupling (for instance, see the red curves in Fig. 2 (b) corresponding to \(\sigma_{x}=\sigma_{y}=1\)), while in the ring of symmetric elements the wavefronts remain motionless. If the intra-layer coupling strength is large enough, then the front propagation speeds approach each other and tend to identical non-zero values (see the blue curves in Fig. 2 (b) corresponding to \(\sigma_{x}=\sigma_{y}=2\)). This effect is illustrated in Fig. 2 (c)-(e). For weak coupling between the layers, their dynamics is quite similar to the front propagation in the isolated layer: one observes the front propagation in the first layer \(x\), while the fronts in the second layer do not propagate [Fig. 2 (c)]. Increasing \(\sigma\), one induces the front propagation in the second layer and slows down this motion in the first layer [Fig. 2 (d)]. Then the front propagation speeds tend to each other, which finally results in identical spatio-temporal diagrams for both layers: the fronts propagate with the same speed for sufficiently strong interaction between the layers [Fig. 2 (e)]. Once the front propagation speeds become identical, they do not vary with further increase of the multiplexing strength. Figure 3: Multiplexing-based control of front propagation (coarsening) in a two-layer multiplex network (Eqs. (2)) in the presence of noise: (a) Dependence of the mean front propagation speed on the multiplexing strength in layer \(x\) (solid curves) and \(y\) (dashed curves) for intra-layer coupling \(\sigma_{x,y}=1\); (b)-(e) Spatio-temporal dynamics of layer \(x\) (upper panels) and \(y\) (lower panels) for fixed intra-layer coupling (\(\sigma_{x}=\sigma_{y}=1\)) and increasing multiplexing strength: \(\sigma=10^{-3}\) (panels (b)), \(\sigma=10^{-2}\) (panels (c)), \(\sigma=1.3\) (panels (d)), \(\sigma=5.0\) (panels (e)). Other parameters: \(a_{x}=1.0\), \(b_{x}=0.9\), \(a_{y}=1.0\), \(b_{y}=0.9\), \(D=0.14\). ### Stochastic model Next, we consider two layers where each oscillator is asymmetric: Eqs. 
(2) for \(a_{x}=a_{y}=1.0\), \(b_{x}=b_{y}=0.9\) and intra-layer coupling \(\sigma_{x}=\sigma_{y}=1\). The initial conditions are the same as before. In contrast to Sec. 3.2, the intensity of noise in the second layer is non-zero, \(D=0.14\), which corresponds to the front propagation stabilization in the isolated layer. Thus, layers \(x\) and \(y\) evolve as depicted in Fig. 1 (b, c) in the absence of inter-layer coupling. The transformation of wavefront propagation caused by increasing the multiplexing strength is illustrated in Fig. 3 (a) as the dependence of the mean front propagation speed on the multiplexing strength. The mean front propagation speed is obtained by numerical simulations of the model Eqs. (2) repeated 10 times starting from the same initial conditions. First, the mean speeds approach each other [Fig. 3 (b)] and tend to the same value [Fig. 3 (c)] as in the deterministic case. However, further increase of the inter-layer coupling strength results in subsequent changes in the mean propagation speed, which is the main difference between front propagation control in deterministic and stochastic models. In this way, one can achieve stabilized fronts (where \(v_{x,y}=0\)) in both layers at an appropriate level of multiplexing \(\sigma\approx 1.3\) [Fig. 3 (d)]. After passing through the multiplexing strength \(\sigma\approx 1.3\), the front propagation in both layers is reversed [Fig. 3 (e)]. Thus, adjusting the coupling between the layers, one can slow down, stabilize and reverse the wavefront propagation. It is important to note that the zero wavefront propagation speed achieved in the multilayer network Eqs. (2) is characterized by smaller deviations of the instantaneous front position in comparison with the single-layer model Eq. (1) (compare motions of fronts in Fig. 1 (c) and Fig. 3 (d)). This means that multiplexing can be applied for the minimization of front fluctuations. This constructive effect can be enhanced by increasing the number of interacting layers. Figure 4: Multiplexing-based control of front propagation (coarsening) in a three-layer multiplex network (Eqs. (3)) in the presence of noise: (a) Schematic representation of a three-layer network; (b)-(e) Spatio-temporal dynamics of layers \(x\), \(y\), \(z\) for fixed intra-layer coupling (\(\sigma_{x}=\sigma_{y}=\sigma_{z}=1\)) and increasing multiplexing strength: \(\sigma=10^{-3}\) (panels (b)), \(\sigma=10^{-1}\) (panels (c)), \(\sigma=0.5\) (panels (d)), \(\sigma=7.0\) (panels (e)). Other parameters: \(a_{x}=a_{y}=a_{z}=1.0\), \(b_{x}=b_{y}=b_{z}=0.9\), \(D=0.14\). To demonstrate this, let us consider a three-layer multiplex network schematically depicted in Fig. 
4 (a): \[\begin{split}&\frac{dx_{i}}{dt}=-x_{i}(x_{i}-a_{x})(x_{i}+b_{x}) \\ &+\frac{\sigma_{x}}{2}\sum_{j=i-1}^{i+1}(x_{j}-x_{i})+\frac{ \sigma}{2}(y_{i}-x_{i})+\frac{\sigma}{2}(z_{i}-x_{i}),\\ &\frac{dy_{i}}{dt}=-y_{i}(y_{i}-a_{y})(y_{i}+b_{y})\\ &+\frac{\sigma_{y}}{2}\sum_{j=i-1}^{i+1}(y_{j}-y_{i})+\frac{ \sigma}{2}(x_{i}-y_{i})+\frac{\sigma}{2}(z_{i}-y_{i}),\\ &\frac{dz_{i}}{dt}=-z_{i}(z_{i}-a_{z})(z_{i}+b_{z}+\sqrt{2D}n_{i} (t))\\ &+\frac{\sigma_{z}}{2}\sum_{j=i-1}^{i+1}(z_{j}-z_{i})+\frac{ \sigma}{2}(x_{i}-z_{i})+\frac{\sigma}{2}(y_{i}-z_{i}).\end{split} \tag{3}\] Suppose that the layers represent rings of locally-coupled asymmetric bistable oscillators, \(a_{x}=a_{y}=a_{z}=1.0\) and \(b_{x}=b_{y}=b_{z}=0.9\) where the asymmetry of the third layer is compensated by multiplicative noise, \(D=0.14\) and the coupling strength within the layers is fixed, \(\sigma_{x}=\sigma_{y}=\sigma_{z}=1.0\). As illustrated in Fig. 4 (b)-(e), increasing the multiplexing strength gives rise to the effects which are similar to those observed in the two-layer network Eqs. (2). However, the achieved stabilization in the three-layer network is characterized by lower deviations of wavefronts in comparison with the two-layer network (compare Fig. 3 (d) and Fig. 4 (e)). The results of numerical simulations indicate almost negligible wavefront deviations for further increasing the number of interacting layers. ## IV Conclusion It has been established that multiplexing can be applied for inducing and controlling wavefront propagation in multilayer networks of bistable oscillators. In particular, multiplexing allows to reduce the wavefront propagation to a common regime where the front propagation speed in all the interacting layers tends to the same value. This value is fixed in the deterministic system, but varies in the stochastic model with increasing the multiplexing strength. This provides for stopping (stabilization) and reversing the front propagation in all the interacting layers. It is important to note that the stabilization of wavefront propagation in stochastic multilayer networks is characterized by reduced deviations of the instantaneous wavefront position in comparison with the single-layer topology. Besides the possibility for control of the wavefront propagation speed by varying the multiplexing strength, the constructive role of multiplexing is manifested by the minimization of wavefront deviations. The minimization is enhanced by increasing the number of interacting layers. As reported recently [37], multiplexing provides for enhancement of stochastic resonance and this impact is also reinforced for the increasing amount of interacting layers. Thus, one can conclude that the ability to enhance the constructive influence of multiplexing on stochastic dynamics of bistable multilayer networks by increasing the number of interacting layers has a general character. The present work is the first step towards a detailed study of the multiplexing-based wavefront propagation control and raises a number of questions. In particular, the theoretical background of the observed phenomena remains to be understood. These and other questions are issues for further investigations. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. 
## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Acknowledgements We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer - 163436311-SFB-910. Results of numerical simulations presented in Sec. 3.1 and 3.2 are obtained by V.V.S. in the framework Russian Science Foundation (Grant No. 22-72-00038).
2306.01117
Examining the Causal Effect of First Names on Language Models: The Case of Social Commonsense Reasoning
As language models continue to be integrated into applications of personal and societal relevance, ensuring these models' trustworthiness is crucial, particularly with respect to producing consistent outputs regardless of sensitive attributes. Given that first names may serve as proxies for (intersectional) socio-demographic representations, it is imperative to examine the impact of first names on commonsense reasoning capabilities. In this paper, we study whether a model's reasoning given a specific input differs based on the first names provided. Our underlying assumption is that the reasoning about Alice should not differ from the reasoning about James. We propose and implement a controlled experimental framework to measure the causal effect of first names on commonsense reasoning, enabling us to distinguish between model predictions due to chance and caused by actual factors of interest. Our results indicate that the frequency of first names has a direct effect on model prediction, with less frequent names yielding divergent predictions compared to more frequent names. To gain insights into the internal mechanisms of models that are contributing to these behaviors, we also conduct an in-depth explainable analysis. Overall, our findings suggest that to ensure model robustness, it is essential to augment datasets with more diverse first names during the configuration stage.
Sullam Jeoung, Jana Diesner, Halil Kilicoglu
2023-06-01T20:05:05Z
http://arxiv.org/abs/2306.01117v1
# Examining the Causal Effect of First Names on Language Models: ###### Abstract As language models continue to be integrated into applications of personal and societal relevance, ensuring these models' trustworthiness is crucial, particularly with respect to producing consistent outputs regardless of sensitive attributes. Given that first names may serve as proxies for (intersectional) socio-demographic representations, it is imperative to examine the impact of first names on commonsense reasoning capabilities. In this paper, we study whether a model's reasoning given a specific input differs based on the first names provided. Our underlying assumption is that the reasoning about _Alice_ should not differ from the reasoning about _James_. We propose and implement a controlled experimental framework to measure the causal effect of first names on commonsense reasoning, enabling us to distinguish between model predictions due to chance and caused by actual factors of interest. Our results indicate that the frequency of first names has a direct effect on model prediction, with less frequent names yielding divergent predictions compared to more frequent names. To gain insights into the internal mechanisms of models that are contributing to these behaviors, we also conduct an in-depth explainable analysis. Overall, our findings suggest that to ensure model robustness, it is essential to augment datasets with more diverse first names during the configuration stage. ## 1 Introduction Recent language models (LMs) Brown et al. (2020); Radford et al. (2019) have shown remarkable improvements when used in NLP tasks and are increasingly used across various application domains to engage with users and address their personal and social needs, such as AI-assisted autocomplete and counseling Hovy and Yang (2021); Sharma et al. (2021). As these LMs models are adopted, their social intelligence and commonsense reasoning have become more important, especially as AI models are deployed in situations requiring social skills Wang et al. (2007, 2019). In this paper, we examine how first names are handled in commonsense reasoning (Fig 1). To this end, we measure the causal effect that name instances have on LMs' commonsense reasoning abilities. A key aspect of commonsense reasoning of LMs should be that they provide consistent responses regardless of the subject's name or identity Sap et al. (2019). That is, the reasoning behind "_Alice_" should not differ from that about "_James_", for instance. Given that first names can be a proxy for representation of gender and/ or race, this consistency is essential not only for the robustness but also for the fairness and utility of a LM. Previous studies have revealed that pre-trained language models are susceptible to biases related to peoples' first names. For instance, in the context of sentiment analysis, certain names have been consistently associated with negative sentiments by language models Prabhakaran et al. (2019). Additionally, during text generation, names have been found to be linked to well-known public figures, indicating biased representations of names Figure 1: Framework of our approach. (Left): An example template with name instances (Right): The causal graph \(G\) we hypothesize for analysis (Shwartz et al., 2020). Furthermore, Wolfe and Caliskan (2021) demonstrated that less common names are more likely to be'subtokenized' and associated with negative sentiments compared to frequent names. 
These studies shed light on how pre-trained language models disproportionately process name representations, potentially leading to biased outputs. While examining pre-trained language models is valuable to understand their capabilities and limitations, in many cases the models are fine-tuned, or adapted and optimized, to guarantee improved performance on specific downstream tasks, such as text classification, machine translation, and question answering, among others (Bai et al., 2004; Peng and Dean, 2007; Rajpurkar et al., 2018). Given that fine-tuning pre-trained language models can lead to major performance gains (Devlin et al., 2019), in this paper, we ask if performance disparities based on names still exist even when the models are fine-tuned. If so, we ask which components of the models contribute to performance disparities and to what extent. We design a controlled experimental setting to determine whether performance differences arise by chance or are caused by names. Our contributions are three-fold1: Footnote 1: The source code is available: [https://github.com/sullamij/Causal-First-Names/](https://github.com/sullamij/Causal-First-Names/) * We propose a controlled experimental framework based on a causal graph to discern the causal effect of first names in the common-sense reasoning of language models. We leverage the name statistics from U.S. Census data for this purpose. * We present an in-depth analysis to understand the internal model mechanisms in processing first names. To be specific, we examine the embeddings and neuron activation of first names. * Based on our analysis, we provide suggestions for researchers in configuring the datasets to provide more robust language modeling. ## 2 Task Formulation We consider a dataset of commonsense reasoning examples \(d\in\mathcal{D}\), where each item consists of a question \(q\in\mathcal{Q}\), three possible answer candidates \(\mathcal{C}=\{c_{1},c_{2},c_{3}\}\), and a label \(y\in Y\), which is the correct answer among the candidates. \(\mathcal{Q}\) and \(\mathcal{C}\) serve as a template \(\mathbf{t}\), containing placeholders for names \([\mathbf{n}]\) and pronouns referring to the names, \([\mathbf{n}p]\). To ensure grammatical correctness, a pronoun placeholder \(\mathbf{n}\mathbf{p}\) is set in variants of subject pronoun \(\mathbf{n}\mathbf{p_{1}}\), object pronoun \(\mathbf{n}\mathbf{p_{2}}\), and dependent possessive pronouns \(\mathbf{n}\mathbf{p_{3}}\). An example of the data template is as follows: **Question**\(\mathcal{Q}\): Typically every four months, \(\boxed{\mathbf{n}}\) went to the doctor for a routine checkup and was told \(\boxed{\mathbf{n}\mathbf{p_{1}}}\) needs rest. What will \(\boxed{\mathbf{n}}\) want to do next? **Candidates**\(\mathcal{C}\): **(a)** call the doctor, **(b)** finish all \(\boxed{\mathbf{n}\mathbf{p_{2}}}\) projects and postpone the rest, **(c)** take time off from work} **Label**\(y\): **(c)** take time off from work ## 3 Causal Graph A language model can be denoted as a function \(f\), taking inputs as follows: \[\hat{y}=f(t(\mathbf{n},\mathbf{n}\mathbf{p})) \tag{1}\] We are interested in how first names (\(\mathbf{n}\in N\)) influence the prediction \(\hat{y}\in\hat{Y}\) under the function \(f\). We hypothesize that there is a causal graph \(\mathcal{G}\) that encodes possible causal paths relating first names to the model's prediction (Fig 1, right). 
2 Footnote 2: Specifically, when referring to the causal graph, it pertains to the utilization of causal directed-acyclic graphs (DAGs), as mentioned in the work by (Feder et al., 2022) We identify both the direct effect and indirect effect on model prediction (Pearl, 2022): 1. The direct effect of names on model prediction \((N\rightarrow\hat{Y})\) measures how names have a direct impact on model predictions (without going through any intermediate variables). 2. The indirect effect indicates potential confounding factors associated with names that may influence predictions. We hypothesize that pronouns are an intermediate variable \((N\to NP\rightarrow\hat{Y})\). Intuitively, pronouns that refer to names can influence how models make their predictions. For example, this indirect effect indicates changes in model prediction when the pronouns differ (e.g. _he_ vs. _she_) but the names remain the same or fixed (e.g. _Pat_). Pronouns inherently associate with the names they refer to, and this association may cue models to consider those names more strongly when generating a response. Thus, we posit the effect of pronouns as an indirect effect. Below, we formalize the causal mechanisms, intervention lists, and the effect size that measures the change in model prediction. Direct Effect \[\text{DE}(N\rightarrow\hat{Y}):=\\ \sum_{t}\mathbb{E}_{N}^{+}[\hat{Y}|T=t]-\mathbb{E}_{N}^{-}[\hat{Y} |T=t]\] where \(\mathbb{E}_{N}^{+}[\hat{Y}|T=t]\) indicates the average effect size of name lists \(N^{+}\), while \(\mathbb{E}_{N}^{-}[\hat{Y}|T=t]\) indicates the average effect size of name lists \(N^{-}\) on template \(t\). The details of the name lists of interest \(N^{+}\) and \(N^{-}\) are listed in section 3.1 and the effect size is defined in section 3.2. DE measures the causal effects between name lists via direct do-interventions of \(N^{+}\) as the template \(t\) is fixed Pearl (1995). Beyond computing the differences, to test the null hypothesis, we conduct a _t_-test and obtain the _p_-value statistics. Indirect effect \[\text{IE}(N\rightarrow\hat{Y}):=\sum_{t}^{T}\sum_{n}^{N}(\mathbb{E }_{NP}^{+}[\hat{Y}|T=t,N=n]\\ -\mathbb{E}_{NP}^{-}[\hat{Y}|T=t,N=n])\] where \(\mathbb{E}_{NP}^{+}[\hat{Y}|T=t,N=n]\) indicates the average prediction conditioned on template \(t\) and name \(n\), with the set of \(NP^{+}\), and \(\mathbb{E}_{NP}^{-}[\hat{Y}|T=t,N=n]\) refers that of \(NP^{-}\). To account for the effect of names, note that names are also controlled along with the template. ### Causal Intervention We apply feasible intervention on \(T:\{q,c,(n,np),y\}\) to \(T^{\prime}:\{q,c,(n^{\prime},np^{\prime}),y\}\). We denote the intervention list as \(\texttt{Do}(X:x\to x^{\prime})\), where \(X\in\{\mathcal{Q},\mathcal{C},(N,NP),Y\}\). We denote \(\hat{y}^{\prime}\in\hat{Y}^{\prime}\) to indicate the prediction of the intervened \(X^{\prime}\). As we want to explore names based on their characteristics, we partition the intervention lists \(N\) based on two criteria: _frequency_ and _gender_. These criteria were chosen following previous work Wolfe and Caliskan (2021); Buolamwini and Gebru (2018) that has demonstrated that less common names, as well as gender, can be key factors in models that exhibit biases. Studies have shown that models trained on datasets with an imbalance of names or gender can reflect and even amplify prejudices, resulting in unfair outcomes, particularly for marginalized groups Bolukbasi et al. (2016); Zhao et al. (2017). 
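For concreteness, the direct-effect computation defined above can be sketched as follows; this is our own schematic, not the authors' implementation, and `effect` is a placeholder for one of the effect sizes defined in Section 3.2 (e.g. the accuracy-based \(\mathbf{d}_{\text{ACC}}\)).

```python
# Schematic direct-effect computation: per-template means over two name lists,
# summed differences, and a t-test over the two sets of per-template means.
import numpy as np
from scipy.stats import ttest_ind

def direct_effect(preds, templates, names_plus, names_minus, effect):
    """preds[(t, n)] = model prediction for template t instantiated with name n."""
    plus = [np.mean([effect(preds[(t, n)], t) for n in names_plus]) for t in templates]
    minus = [np.mean([effect(preds[(t, n)], t) for n in names_minus]) for t in templates]
    de = float(np.sum(np.array(plus) - np.array(minus)))
    t_stat, p_value = ttest_ind(plus, minus)
    return de, p_value
```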
By focusing on name frequency and gender representation, we aim to evaluate the impact of these criteria on models. In order to base our work on prior statistics, we use the name statistics from the U.S. Census data. The detailed process of how the intervention list was filtered from the dataset is outlined in section 5. We consider the set of names for do-intervention as below: Most-LeastBased on the frequency of names, \(N_{\text{MOST}}\) indicates the names with top-\(k\) highest frequency, whereas \(N_{\text{LEast}}\) refers to lowest frequency. Female-MaleWe use the gender information from the statistics to discern the gender of a name. Note that we purely refer to the 'gender' of names based on their records. That is, we account for cases where a name can be both male or female, based on the frequency statistics. For example, if the records for _Lee_ exist for both males and females, we consider the name belonging to both genders to reflect real-world data. ### Effect Size To evaluate the impact of our model, we utilize two distinct metrics. AccuracyTo quantify the degree of wrong predictions, we define \(\mathbf{d}_{\text{ACC}}\) as \[\mathbf{d}_{\text{ACC}}(x):=\mathbbm{1}(\hat{y}\neq y)\] \[\mathbf{d}_{\text{ACC}}(X^{\prime}\to X)=\frac{\mathbf{d}_{\text{ ACC}}(X^{\prime})-\mathbf{d}_{\text{ACC}}(X)}{\mathbf{d}_{\text{ACC}}(X)}\] agreementThis metric measures the extent to which the model's predictions vary in response to different interventions. The rationale behind this metric stems from the recognition that the task under consideration entails a multiple-choice problem. Additionally, in real-world scenarios, it is often the case that a definitive 'ground truth' may not exist. Consequently, we employ this metric to measure the divergence of predictions. This metric goes beyond simple accuracy, which merely determines the correctness or incorrectness of predictions. Instead, this objective is to evaluate the diversity of predictions, thereby taking into consideration the range of errors that may arise. To calculate the **Agr** score, which is a modification of Fleiss' kappa (Fleiss and Cohen, 1973), we begin with a list of \(N\) names and obtain a score: \[\mathbf{d}_{\text{AGR}}(X)=\frac{1}{|N|\cdot|N-1|}\sum_{j=1}^{k}(n_{j}\cdot(n_{j }-1))\] \[\mathbf{d}_{\text{AGR}}(X^{\prime}\to X)=\frac{\mathbf{d}_{\text{AGR}}(X^{ \prime})-\mathbf{d}_{\text{AGR}}(X)}{\mathbf{d}_{\text{AGR}}(X)}\] where \(|N|\) indicates the total number of names in name lists, \(k\) the number of categories (e.g. in our case, \(k=3\), {(a),(b),(c)}, and \(n_{j}\) the number of instances predicting the answer as category \(j\). The **Agr** score ranges from 0 to 1, with a score of 1 indicating complete agreement among all name instances in their category prediction, and a score of 0 indicating no agreement. This metric enables us to assess the degree to which a model's predictions are sensitive to different interventions. ## 4 Explanations of Causal Effects The causal analysis shows the surface-level comparison of model outputs but fails to capture the nuanced processes underlying each model's reasoning. By probing the internal workings of the models, we seek to gain insights into how the models derive their conclusions and also where their approaches diverge. We use two approaches to gain a deeper understanding of the models' predictions. First, we analyze the models' internal representations to discern how they encode various names. 
Specifically, we focus on the distinction in contextualization between the embeddings of frequent names and less frequent names. Second, we apply a diagnostic technique based on neuron activation to pinpoint how the models process names. ### Contextualization of Name Representations We investigate the contextualization of name representations in language models with respect to their characteristics. We partition the names based on frequency Most and Least and compare the degree of contextualization. To be specific, we measure the similarity between name representations at each layer of the model by following the approach proposed by Wolfe and Caliskan (2021). In order to ensure that the embeddings being compared are based on the same space, we restrict the comparison to representations within each layer and do not compare across different layers. We adopt two commonly used metrics to validate the overall trend observed in our analysis. **Cosine Similarity** The cosine-similarity of name \(w\), in layer \(l\) is formalized as follows: \[c(\mathbf{w})_{l}=\frac{1}{n^{2}-n}\sum_{i}\sum_{j\neq i}cos(\vec{w_{i}},\vec {w_{j}})\] where \(n\) refers to the total number of name pairs. This corresponds to the self-similarity studied in (YWolfe and Caliskan, 2021). The measure lies ranges from 0 to 1, where 1 indicates high similarity, and 0 otherwise. **Linear CKA** (Centered Kernel Alignment) This similarity metric measures similarity in neural network representations and was proposed by Kornblith et al. (2019). It ranges from 0 to 1, where 1 indicates perfect similarity, and 0 otherwise. \[\frac{||\mathbf{x_{j}}^{\top}\mathbf{x_{i}}||_{F}^{2}}{||\mathbf{x_{i}}^{\top }\mathbf{x_{i}}||_{F}||\mathbf{x_{j}}^{\top}\mathbf{x_{j}}||_{F}}\] where \(\mathbf{x_{i}}\) and \(\mathbf{x_{j}}\) indicates two randomly selected name embeddings, such that \(i\neq j\). ### Neuron Activations Previous work has explored the activation patterns of neurons in deep neural networks for the domains of language and vision as a means of gaining insight into the inner workings of such networks (Karpathy et al., 2015; Poerner et al., 2018; Olah et al., 2018; Dalvi et al., 2019). It has been demonstrated that the feed-forward network (FF) component of transformer architectures encodes a significant amount of information (Wang et al., 2022; Geva et al., 2021). Building on this prior work, we conducted a detailed analysis of how neuron activations vary according to different characteristics of the input data. Our analysis involved extracting the activations of the FF network's neurons based on the hidden states of previous layers and applying non-negative matrix factorization (NMF) (Cichocki and Phan, 2009) to decompose these activations into semantically meaningful components. By visualizing groups of neuron activations, we aim to gain a better understanding of the models' internal mechanisms, and how the models construct their representations and predictions. For the detailed algorithm see Appendix B outlines the steps involved in this analysis. ## 5 Experimental Setup **Dataset** We use the SocialIQA dataset from Sap et al. (2019). The selection of this dataset is motivated by its suitability for investigating model behavior in a social context, as the dataset consists of questions for probing _emotional_ and _social_ intelligence in everyday situations. 
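Before describing the experimental setup further, the quantities introduced above can be made concrete. The sketch below (ours, not the authors' code) implements the agreement score \(\mathbf{d}_{\text{AGR}}\) of Section 3.2 and the linear CKA measure of Section 4.1 as displayed; note that Kornblith et al. (2019) additionally center the representations before computing CKA.

```python
import numpy as np

def agreement(predictions, k=3):
    """d_AGR over one template: predictions[i] in {0,...,k-1} is the option chosen
    when the template is instantiated with the i-th name of the name list."""
    counts = np.bincount(np.asarray(predictions), minlength=k)
    n = len(predictions)
    return float(np.sum(counts * (counts - 1)) / (n * (n - 1)))

def linear_cka(X, Y):
    """Linear CKA between two [num_names, hidden_dim] embedding matrices taken
    from the same layer (centering each matrix first is the usual convention)."""
    num = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    den = np.linalg.norm(X.T @ X, ord='fro') * np.linalg.norm(Y.T @ Y, ord='fro')
    return float(num / den)
```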
By analyzing the model's responses to questions pertaining to social and emotional intelligence, valuable insights can be gleaned regarding the models' handling of some nuances of human behavior. Since the dataset is based on a social setting, it would be misleading if the models yielded different predictions based on different names. To construct the template \(\mathbf{T}\), we used the AllenNLP coreference resolution model Gardner et al. (2018), which has high performance3. This model is used to detect named entities and resolve their corresponding pronouns, facilitating the construction of templates for our experiments. Footnote 3: F1 score 80.2 on CoNLL benchmark dataset **Names List** We use U.S. census names dataset4, following Mehrabi et al. (2020) to intervene the name placeholders. It contains 139 years of U.S. census baby names, their corresponding gender, and respective frequencies. To form intervention name lists based on frequency, we filtered out the most frequent \(k\) names over all years for \(N_{\text{MOST}}\), and the least frequent \(k\) names over all years for \(N_{\text{LEAST}}\). We set \(k=200\). Footnote 4: [http://www.ssa.gov/oact/babynames/names.zip](http://www.ssa.gov/oact/babynames/names.zip) **Model** We use three widely used models, GPT2 Radford et al. (2019), Bert Devlin et al. (2019), and RoBERTa Liu et al. (2019). We customized each model with a linear layer \begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Effect size: \(\mathbf{d}_{ACC}\) \\ (Accuracy) \\ \end{tabular} } & \multicolumn{4}{c|}{**Not-finetuned**} & \multicolumn{4}{c}{**Fine-tuned**} \\ & \multicolumn{4}{c|}{_(Epoch 0)_} & \multicolumn{4}{c}{_(Epoch10)_} \\ \hline \multirow{3}{*}{\begin{tabular}{c} Most \(\rightarrow\) Least \\ Male \(\rightarrow\) Female \\ Most Male \(\rightarrow\) Least Male \\ Most Female \\ Most Female \\ \end{tabular} } & \begin{tabular}{c} -07 \\ (.354) \\ (.801) \\ (.801) \\ (.863) \\ (.365) \\ (.365) \\ (.365) \\ (.365) \\ (.365) \\ (.365) \\ (.365) \\ \end{tabular} & \begin{tabular}{c} **.258*** \\ (.534) \\ (.534) \\ (.627) \\ (.801) \\ (.885) \\ (.800) \\ (.800) \\ (.800) \\ \end{tabular} & \begin{tabular}{c} -04 \\ (.804) \\ (.884) \\ (.965) \\ (.751) \\ (.751) \\ (.751) \\ (.751) \\ -.008 \\ (.880) \\ (.990) \\ (.800) \\ (.954) \\ \end{tabular} & \begin{tabular}{c} -04 \\ (.884) \\ (.884) \\ (.864) \\ (.864) \\ (.865) \\ (.866) \\ (.860) \\ (. on top to perform a multiple-choice selection task. The feed-forward (FF) linear layer was obtained by \(\text{logits}=\textbf{Model}(X)\), \(\hat{y}=\textbf{FF}(\text{logits})\) The hyper-parameter setting for the training is described in Appendix A. ## 6 Results and Discussion ### Direct Effect AccuracyThe results of the direct effect of accuracy for different sets of interventions are presented in Table 1. Comparing the first three columns (_not-finetuned_) with the subsequent three columns (_fine-tuned_), we observe that the causal effect of accuracy is not statistically significant when the models are fine-tuned. This trend holds consistently true across all three models examined in this study. This suggests that the direct effect of name characteristics on accuracy is not significant when fine-tuned. The effect sizes of the _not-finetuned_ models are reported in accordance with previous literature that predominantly focuses on these models (Wolfe and Caliskan, 2021; Shwartz et al., 2020). 
However, it is crucial to emphasize the efficacy of fine-tuning, as it reflects a more realistic scenario for model deployment (Jeoung and Diesner, 2022). We compared the effect sizes of the not-finetuned models with those of the fine-tuned models, thereby examining the impact of fine-tuning on model behavior. We also provide an analysis of the correlation between the model's accuracy and effect sizes in Appendix D. AgreementThe analysis of the direct causal effect of agreement (\(\textbf{d}_{\text{AGR}}\)) shows that a significant difference in name lists based on frequency persists even after fine-tuning all three models ( Table 2, first row). This suggests that despite the fine-tuning process, the models continue to exhibit variations in their agreement on predictions based on the frequency of names used. Specifically, the positive and significant value of \(\textsc{Most}\rightarrow\textsc{Least}\) indicates that the prediction is more divergent forLeast than Most. This implies that when the model makes incorrect predictions, the resulting predictions tend to be more inconsistent or diverse, rather than consistent. Figure 2 illustrates the disentangled values for \(\textbf{d}_{\text{AGR}}\) across different epochs during the training phase. For both GPT2 and BERT, a consistent gap between Most and Least is observed throughout the training epochs. In contrast, for RoBERTa, although the gap is not consistent across all epochs, the agreement measures for Most remain consistently higher than those for Least. This discrepancy in the gap between RoBERTa and the other models could potentially be attributed to the robust optimization design of RoBERTa, which complements that of BERT (Liu et al., 2019). Also, these findings are consistent with the conclusion drawn by (Basu et al., 2021), who also observed that RoBERTa generates the most \begin{table} \begin{tabular}{c c c c c|c c c} \hline \hline & \multicolumn{4}{c|}{**Not-finetuned**} & \multicolumn{4}{c}{**Finetuned**} \\ & & \multicolumn{4}{c|}{_(Epoch 0)_} & \multicolumn{4}{c}{_(Epoch10)_} \\ \cline{3-8} & GPT2 & BERT & RoBERTa & GPT2 & BERT & RoBERTa \\ \hline \multirow{4}{*}{Indirect Effect} & Most & 0.055 & 0.107 & 0.074 & 0.052 & 0.047 & 0.037 \\ & Least & 0.043 & 0.091 & 0.171 & 0.053 & 0.039 & 0.031 \\ \cline{2-8} & Female & 0.072 & 0.145 & 0.185 & 0.079 & 0.063 & 0.051 \\ \cline{1-1} & Male & 0.030 & 0.059 & 0.034 & 0.0260 & 0.025 & 0.018 \\ \hline \hline \end{tabular} \end{table} Table 3: Indirect Effect of name lists across models. The results show that relative to Non-finetuned models, the indirect effect of names on predictions is marginally reduced in fine-tuned models. Figure 2: The \(\textbf{d}_{\text{AGR}}\) of Most and Least values over the training phase (number of epochs). For GPT2 and BERT, the gap of Most values and Least is consistent across the number of epochs. robust results. Overall, the findings indicate that the agreement ratio of Least consistently remains lower than that of Most throughout the training phase, suggesting that the predictions for Least are more divergent. ### Indirect Effect Table 3 presents the results pertaining to the indirect effect of name lists on predictions. Specifically, the indirect effect quantifies the sensitivity of pronouns associated with names on model predictions. Overall, the findings indicate that, in comparison to non-finetuned models, the indirect effect of names on predictions is marginally reduced in fine-tuned models. 
For Bert and RoBERTa, the indirect effect of both frequency and gender is diminished when finetuned. However, for Gpt2, the indirect effect is reduced in most cases, except for the name lists of Least and Females. ### Contextualization Measures In order to gain insight into how names are internally contextualized in the transformer models, we conducted a preliminary analysis of name representations. To do so, we extracted the embeddings of \(N_{\text{MOST}}\) and \(N_{\text{LEAST}}\) samples from fine-tuned GPT2 and measured their similarity. The results are presented in Figures 3 and 4. Figure 3: CKA measures across layers. Figure 4: Cosine similarity measures across layers. The Self-similar(Most) and Self-similar(Least) measures represent the similarity within the Most and Least names, respectively, while the Inter-similarity(Most-Least) measure quantifies the similarity between the Most and Least names. The trends observed for both CKA and cosine similarity measures are similar, although with different magnitudes (details of these metrics are discussed in section 4). These consistent trends are robust across different evaluation metrics. The results show that in the first two layers, the similarity scores are low, but they increase across the mid-layers. However, in the last layer, the similarity of the embeddings of Least names is lower compared to Most names. This finding partly explains the first row of Table 2, which indicates that the fine-tuned GPT2 has a significant direct effect on the agreement measure for Most and Least. The relatively low similarity of the embeddings of Least names shows that they exhibit higher variability, being less contextualized compared to that of Most. ### Neuron Activations To further investigate the differences in neuron activations, we conducted an analysis using the fine-tuned GPT2 model. The results of this analysis are presented in Table 4, where each color represents the components of the neurons that got activated. These components correspond to the clusters obtained from the non-negative matrix factorization on feed-forward neurons. Our observations indicate that less frequent names exhibit two distinct behaviors: 1) they are sub-tokenized into two or more tokens, and 2) they are not activated by the same neuron components as the frequent names. This analysis does not provide an explanation for the _cause_ or _reason_ for the divergent predictions but rather sheds light on the internal behavior of the model, namely how the neurons activate, which may be related to the divergent predictions observed for the least frequent names. ### Mitigating Strategy: Data Augmentation Our findings suggest that incorporating a more diverse set of first names into the training data can serve as a potential approach to mitigate the divergent behavior of language models. Among all first names in the SocialIQA training dataset, we observed that around 66% of first name instances represent the 10% most frequent first names in the U.S. Census data. In terms of frequency, these names account for 97% of all first-name instances in the training dataset (see the figure in Appendix C). Such skewed yet highly likely distributions of demographic information in the training dataset may inadvertently introduce biases in the model outputs, as evidenced by previous studies (Buolamwini and Gebru, 2018; Karkkainen and Joo, 2021). 
To address this issue, recent research by Qian et al. (2022) has demonstrated that augmenting the training data with diverse social demographics can lead to improved model performance and robustness. ## 7 Related Work Previous research has shown that pre-trained language models are susceptible to biases related to people's first names, e.g., in the contexts of sentiment analysis (Prabhakaran et al., 2019) and text generation (Shwartz et al., 2020). Wolfe and Caliskan (2021) demonstrated that less common names are more likely to be subtokenized and associated with negative sentiments compared to frequent names. We extend this prior work by analyzing the impact of fine-tuning on the treatment of first names within a causal framework. A growing body of research has explored the incorporation of causality in language models. For instance, Feder et al. (2021) proposed a causal framework by incorporating additional fine-tuning on adversarial tasks. Similarly, Vig et al. (2020) demonstrated the use of causal mediation on language models to mitigate gender bias. Unlike Vig et al. (2020), our approach focuses on applying causal analysis in the input sequence space and exploring the causal relationships between input sequence components and model predictions. ## 8 Conclusion In this paper, we introduced a controlled experimental framework to assess the causal effect of first names on commonsense reasoning. Our findings show that the frequency of first names exerts a direct impact on model predictions, with less frequent names leading to divergent outcomes. We suggest careful consideration of the demographics in dataset design. ## 9 Broader Impact The data used in our analysis contains no private user information. As for ethical impact, the systematic experimental design we used provides an approach for conducting controlled experiments in the context of natural language processing research, particularly with a focus on the influence of first names on language models. ## 10 Limitation Our investigation focuses on one aspect of commonsense reasoning and is restricted to one dataset. There may be numerous other factors in real-world applications; therefore, our findings may not comprehensively capture the entirety of commonsense reasoning phenomena. Another limitation is that, for the sake of simplicity and feasibility, we assumed a fixed threshold of \(k=200\) to categorize frequent and less frequent names. However, this threshold may not be universally applicable to all contexts or datasets, and different thresholds could lead to different results.
2306.15162
YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus
Machine learning for sign languages is bottlenecked by data. In this paper, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as large and has ~10x as many unique signers as the largest prior ASL dataset. We train baseline models for ASL to English translation on YouTube-ASL and evaluate them on How2Sign, where we achieve a new finetuned state of the art of 12.39 BLEU and, for the first time, report zero-shot results.
David Uthus, Garrett Tanzer, Manfred Georg
2023-06-27T02:44:07Z
http://arxiv.org/abs/2306.15162v2
# YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus ###### Abstract Machine learning for sign languages is bottlenecked by data. In this paper, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as large and has ~10x as many unique signers as the largest prior ASL dataset. We train baseline models for ASL to English translation on YouTube-ASL and evaluate them on How2Sign, where we achieve a new finetuned state of the art of 12.39 BLEU and, for the first time, report zero-shot results. ## 1 Introduction The primary bottleneck for machine learning research on sign languages is data. As minority languages used by historically marginalized Deaf/Hard of Hearing communities, sign languages lack the plentiful online resources that have facilitated modern machine learning advances [4; 42; 12]. This is compounded by the fact that sign languages have no standardized written form: mining the videos that do exist is more difficult than retrieval for spoken language text. For translation specifically, there is the added problem of finding spoken language captions that are aligned to corresponding sign language content, rather than a voiceover with its own timing. The result is that datasets tend to be constructed by recording new footage in a studio or curating videos from a small number of manually selected content creators, which limits variety. In order to address these challenges, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions, primarily intended for ASL to English machine translation. We mined these videos from YouTube using a two-step process: first, we used automatic content-based annotations to identify potentially relevant captioned videos; and second, we used skilled human annotators to filter out videos with poor quality or misaligned captions. The result is a dataset with 984 hours of high-quality captioned video featuring >2500 unique signers, which is ~3x as large as the largest prior ASL dataset [37] and has ~10x as many unique signers as any sign language dataset to date. We train simple baseline models for sentence-level ASL to English translation on YouTube-ASL by embedding MediaPipe Holistic landmarks [25; 16] into the T5 language model [32]. Because YouTube videos may be removed over time and therefore cannot form a stable test set--and for comparison to prior work--we evaluate on a standard benchmark, How2Sign [13]. Borrowing from trends in mainstream machine learning [31; 15; 10], we provide not just finetuned but also zero-shot1 results to test out-of-domain generalization. We achieve a new finetuned state of the art of 12.39 BLEU (vs. the prior SOTA of 8.03 [38]), and for the first time report a zero-shot score, 3.95 BLEU. We publicly release the YouTube-ASL video IDs.2 We hope that YouTube-ASL will be useful for tasks such as ASL to English translation and caption alignment--both in the near term to aid in the construction of larger sign language datasets, and eventually to improve accessibility for the Deaf/Hard of Hearing community. 
Footnote 2: [https://github.com/google-research/google-research/tree/master/youtube_asl](https://github.com/google-research/google-research/tree/master/youtube_asl) ## 2 Related Work In this section, we review prior sign language translation datasets and methods for translation from sign languages to spoken languages. ### Sign Language Translation Datasets Table 1 shows statistics on different sign language translation datasets. There are three main sources for sign language data: ad hoc recorded footage, interpreted TV broadcasts, and online video sharing platforms. In the first category are datasets that manually recruit signers and record them performing translations of desired phrases, either in a lab setting or with a camera on their personal device. These datasets tend to be small and feature few signers for logistical reasons, and may have exhaustive annotations because the small size of the dataset makes it feasible. This includes datasets such as CSL-Daily [45], with phrases related to daily life in Chinese Sign Language; KETI [21], with phrases related to emergency situations in Korean Sign Language; Public DGS Corpus [18], with elicited dialogues in German Sign Language; and How2Sign [13], with "How To" instructional monologues translated into American Sign Language. In the second category are datasets that collate interpreted TV programs from a collaborating national broadcaster. These datasets tend to be larger than newly recorded ones, but often use a small number of non-native interpreters and lack fine-grained caption alignment (because the supervision comes from the spoken language audio track). This includes datasets such as RWTH-PHOENIX-2014 [5], with weather forecasts interpreted into German Sign Language; SWISSTXT [7], with news/weather programs interpreted into Swiss German Sign Language; VRT [7], with news programs interpreted into Flemish Sign Language; and BOBSL [2], with BBC programs in many domains interpreted into British Sign Language. At 1447 hours, BOBSL is the largest sign language translation dataset to date (including the present work), but has only 39 signers and speech-aligned subtitles, vs. YouTube-ASL's >2519 signers and sign-aligned captions--though the two datasets are complementary because they are for different languages. In the third category are datasets that curate content from online video sharing platforms. In prior sign language translation datasets, this content is drawn from a small number of manually selected channels. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Name & Language & Vocab. & \# Hours & \# Signers & Source \\ \hline RWTH-PHOENIX-2014T [5] & DGS & 3K & 11 & 9 & TV \\ BOBSL [2] & BSL & 77K & 1447 & 39 & TV \\ SWISSTXT [7] & DSGS & - & 88 & - & TV \\ VRT-RAW [7] & VGT & - & 100 & - & TV \\ CSL-Daily [45] & CSL & 2K & 23 & 10 & Lab \\ KETI [21] & KVK & 419 & 28 & 14 & Lab \\ Public DGS Corpus [18] & DGS & - & 50 & - & Lab \\ SP-10 [40] & various & 17K & 14 & 79 & Web \\ AfriSign [17] & various & 20K & 152 & - & Web \\ How2Sign [13] & ASL & 16K & 79 & 11 & Lab \\ OpenASL [37] & ASL & 33K & 288 & 220 & Web \\ \hline YouTube-ASL (ours) & ASL & 60K & 984 & \textgreater{}2519 & Web \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics for different sign language translation datasets. See Section 3.3 for details on how these statistics were derived for YouTube-ASL. 
This includes datasets such as SP-10 [40], with example sentences from an online multilingual sign dictionary; AfriSign [17], with translated Bible passages hosted on the Jehovah's Witnesses website; and OpenASL [37], with videos from three YouTube channels: _DailyMoth_, _Sign1News_, and the National Association of the Deaf. OpenASL is the largest prior ASL dataset and closest work to YouTube-ASL: the key difference is that YouTube-ASL is constructed with open-ended mining from automatic tags, rather than manual channel curation. OpenASL is largely a subset of YouTube-ASL, which--by utilizing the long tail of channels--is ~3x as large and has ~10x as many unique signers. There are several datasets for easier tasks than translation, like isolated sign recognition and fingerspelling recognition, that mine from the web by ambiguous means. MS-ASL [20], WLASL [23], and ChicagoFSWild [35]/ChicagoFSWild+ [36] are word-level datasets mined from YouTube, sign language-targeted sites like ASLU and ASL-LEX, or other unnamed video sharing platforms. These works do not specify how they retrieved their videos, so it is possible that they used a similar automatic tagging approach to YouTube-ASL, albeit on a more limited scale. ### End-to-End Sign Language Translation Originally, sign language translation approaches operated on _glosses_, linguistic annotations that represent individual signs, or cascaded translation through glosses as an intermediate step, like speech to text translation often cascades through speech recognition. More recently, due to a variety of deficiencies in glosses and lack of widespread gloss data, the field has shifted to end-to-end modeling with encoder-decoder Transformers, starting with Camgoz et al. [5]. The two main classes of approaches are those that take learned video embeddings as input [6; 38; 27] (via video encoders, primarily I3D [9], pretrained on tasks such as isolated sign recognition), and those that take estimated pose landmarks as input [27] (such as MediaPipe [25] or OpenPose [8]). Some works achieve modest gains given constant data with architectural tweaks like treating different cues in the input video (hands, face) differently [44; 41]. It is unclear to what extent these techniques are necessary or beneficial on larger datasets. Other works seek to benefit from transfer from spoken language or other sign language data [11; 43; 17]. All of these works train and evaluate on splits derived from the same underlying continuous sign language corpus (different datasets across papers), and sometimes multiple such datasets independently in the same paper. In contrast, we train on YouTube-ASL using an uncomplicated approach and evaluate on How2Sign, reporting both finetuned and zero-shot results to get a more robust understanding of our model's state-of-the-art performance. ## 3 The YouTube-ASL Corpus YouTube-ASL is a corpus of American Sign Language (ASL) videos with accompanying English captions drawn from YouTube. Video sharing platforms like YouTube are appealing sources of sign language data because they host swaths of diverse content that are more broadly representative of real world conditions than studio footage is. Of course, much of this data is irrelevant or low-quality, so it is imperative to develop cost-effective ways to sift through it. We used a two-step pipeline to construct the corpus: first, retrieval using automatic content-based annotations, and second, filtering by skilled human annotators at a per-video level. 
This automatic retrieval step represents a departure from prior continuous sign language corpora and brings us closer to mining approaches from mainstream machine learning. ### Automatically Retrieving Candidate Videos As described previously in Abu-El-Haija et al. [1], the YouTube video annotation system associates machine-generated tags with each video in the form of Knowledge Graph entities, which are based on the video's metadata, context, and content signals. We retrieved listed public videos tagged as being related to sign language generally or American Sign Language specifically, as of January 2022.3 This automatic tagging step, while having higher recall than prior works, was flawed in that it was not aware of sign language in the video content itself--to be expected due to the limited nature of current sign language processing. This means that, for example, videos in sign language that do not explicitly mention sign language in the content or context were unlikely to be discovered. This failure mode was most salient for press conferences with simultaneous interpreters, which tend not to have well-aligned captions anyway. Given these retrieved videos, we drilled down on those with user-generated captions--i.e., captions that were manually uploaded rather than automatically derived from speech--because speech-derived captions are not tightly aligned with signed content. As a heuristic filtering step, we automatically removed videos with duration <10 seconds or >5 hours, width <480 pixels or height <360 pixels, and frame rate <15fps or >60fps. From inspection, this excluded a negligible amount of desirable videos. The one class of useful videos one might expect this to exclude, short isolated sign videos as used by MS-ASL [20] and WLASL [23], tends to have the label in the video title or description rather than captions, so removing videos under 10 seconds does not have a substantial impact. Finally, we used off-the-shelf person detection tools to exclude videos where none of the captions corresponded to spans with exactly one person present in the video. We limit the scope of our efforts to signing monologues due to the challenges of modeling conversations between multiple signers. The result was a list of 88,002 candidate videos that might contain ASL with high-quality captions. ### Identifying High-Quality Videos with Skilled Human Annotators While some smaller datasets like How2Sign [13] use annotators to manually align all captions, this becomes prohibitively expensive for larger datasets. For this reason, OpenASL [37] and BOBSL [2] use annotators to correct only their validation and test sets. We take a coarser-grained approach to annotations but apply it to our entire list of 88,002 candidates: we use humans to identify videos that are roughly suitable and include them in our corpus without modification. To do so, we hired 3 native ASL users with English proficiency to serve as annotators. The annotators used a bespoke internal tool that would display a given YouTube video and present label options. In order to save time, the annotators were able to mark that their labels held for an entire channel of videos rather than each video individually. Therefore it is possible that certain videos in the corpus are channel outliers and do not meet quality standards, but generally large channels have consistent quality. Each video was labelled by only one annotator unless they brought it up for wider discussion. 
Through an iterative process involving written instructions, virtual meetings (through an ASL interpreter or signing project members), and escalations by email for edge cases, we aligned on standards for when to accept a video into the corpus. Some of the reasons for exclusion include: the video's captions do not exclusively correspond to signing; the video is in a sign language other than ASL; the video's captions do not correctly translate the ASL; and the captions are poorly aligned. Notably, in order to increase the size of the corpus, we chose to include videos across all skill levels and signing styles, as long as they were comprehensible to an ASL user and correctly captioned. This variety is beneficial for sign language recognition tasks, where models should be able to understand all signers, but may limit the corpus's usefulness for generation tasks, where consistency and controllability are important. The result was a list of 11,093 videos whose captions are generally well-aligned English translations of signed ASL content. ### Corpus Statistics The final, human-filtered YouTube-ASL corpus consists of 11,093 ASL videos with 984 total hours of footage. This is ~3x the size of OpenASL [37], the largest prior ASL dataset, but smaller than BOBSL [2], a British Sign Language dataset. See Table 1 for a comparison between the high-level attributes of YouTube-ASL and prior sign language translation datasets, including total number of hours. These videos are paired with 610,193 English captions, with a total duration of 813 hours. See Table 2 for statistics on the distribution of captions, as well as Figure 1 for visualizations. The average caption length (8.8 words) and duration (4.8 seconds) are relatively short, which reflects that sentences may be split across multiple captions. We computed vocabulary size by counting the number of distinct strings between whitespace or punctuation across all captions. It is important to keep in mind that in addition to the signing itself, these videos' captions vary in style, literalness of translation (whether the content was originally produced in ASL and translated, or translated into ASL from these captions), spelling/grammar correctness, and more. This degree of variability is difficult to quantify in comparisons between datasets. We use the number of unique channels, 2519, as an approximate lower bound for the number of unique signers in the dataset: some channels may feature many signers, and some signers may appear across multiple channels. Note that with this method, OpenASL [37] would be estimated to have 3 signers, while its authors reached a count of 220 signers using more fine-grained methods. Even this likely underestimate is ~10x the count of any individual sign language dataset to date. Figure 2 shows the distribution of videos per channel, for channels with at least 20 videos. There are a few channels with many videos--in particular, the two largest channels are the same news channels featured in OpenASL--and then a long tail of channels with fewer videos. This means that the bulk of new footage present in YouTube-ASL but not OpenASL comes from relatively small channels, which helps variety. See Figure 3 for a sense of the distribution of (machine-annotated) topics across videos: they seem more diverse than prior datasets from video sharing platforms but still shaped by typical YouTube use cases, compared to BOBSL's more topic-balanced BBC programming. 
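As a rough illustration of how the statistics above can be computed from caption metadata, the sketch below is a minimal, hypothetical helper (the record layout is an assumption, not the released format); vocabulary size follows the distinct-strings-between-whitespace-or-punctuation convention described above, and channel count is the lower bound on unique signers.

```python
import re
from statistics import mean, quantiles

def corpus_stats(captions):
    """captions: list of (video_id, channel_id, start_sec, end_sec, text) records
    (a hypothetical layout used only for this sketch)."""
    durations = [end - start for _, _, start, end, _ in captions]
    words = [len(text.split()) for *_, text in captions]
    vocab = set()
    for *_, text in captions:
        # Distinct strings between whitespace or punctuation.
        vocab.update(tok for tok in re.split(r"[\s\W]+", text) if tok)
    channels = {channel for _, channel, *_ in captions}  # lower bound on unique signers
    return {
        "num_captions": len(captions),
        "mean_caption_words": mean(words),
        "mean_caption_duration_s": mean(durations),
        "p90_caption_duration_s": quantiles(durations, n=10)[-1],
        "vocabulary_size": len(vocab),
        "num_channels": len(channels),
    }
```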
## 4 Baseline Approach In order to demonstrate the potential of YouTube-ASL, we consider a simple method for sentence-level machine translation from ASL to English built using off-the-shelf components. We use a deliberately barebones approach to avoid introducing inductive bias that helps in more limited settings but becomes harmful with scale. \begin{table} \begin{tabular}{l l} \hline \hline Number of captions & 610,193 \\ Caption length (Average / \(90^{th}\) percentile, in characters) & 48.9 / 88.0 \\ Caption length (Average / \(90^{th}\) percentile, in words) & 8.8 / 16.0 \\ Caption duration (Average / \(90^{th}\) percentile, in seconds) & 4.8 / 8.76 \\ Video duration (Average / \(90^{th}\) percentile, in seconds) & 318.95 / 675.80 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics on the distribution of captions and videos in the YouTube-ASL corpus. Figure 1: Distribution of caption and video durations. For the video duration graph, we omit 27 videos whose duration exceeds 3600 seconds (between 3610 and 9017 seconds). Figure 2: Distribution of videos per channel for channels with at least 20 videos. ### Preprocessing For our target English outputs, we use the raw captions from YouTube-ASL. Each training example is clipped to the boundaries of a single caption. We filter out captions with length >300 characters or duration <200ms or >60s, which tend to be malformed, and any captions corresponding to video spans that do not contain exactly one person. We do not lowercase the captions or apply any other kind of text normalization. For our sign language inputs, we use MediaPipe Holistic landmarks [25; 16], rather than raw video. Sign language models that use pose-based inputs have a history of underperforming those that operate on learned video embeddings [20; 26]; it is unclear to what extent this is due to the information bottleneck in the (imperfectly predicted) pose representation, vs. availability of higher quality pretrained video encoders than pretrained pose encoders. Pose inputs offer some benefits like computational efficiency and privacy. MediaPipe Holistic is a lightweight model that predicts 532 3D landmarks (in x-, y-, and z- image-space coordinates) for the hands, pose, and face of a single human in video footage. For sign language understanding tasks, many of these landmarks are redundant (high-detail face mesh) or unnecessary (lower body), and add undesirable complexity. We discard all but 85 of these points, selected _a priori_ using domain knowledge about sign languages: * For each hand, we use all 21 landmark points. * For the pose, we use 6 landmark points, for the shoulders, elbows and hips. This discards the lower body and pose landmarks redundant with the hand and face modules. * For the face, we use 37 landmark points, from the eyes, eyebrows, lips, and face outline.5 Footnote 5: These are indices 0, 4, 13, 14, 17, 33, 37, 39, 46, 52, 55, 61, 64, 81, 82, 93, 133, 151, 152, 159, 172, 178, 181, 263, 269, 276, 282, 285, 291, 294, 311, 323, 362, 386, 397, 468, 473. We normalize the landmarks by scaling them to fit in a unit bounding box across the duration of the clip. We represent landmarks that are not present in a frame with a large negative value. MediaPipe also predicts visibility (self-occlusion) of landmarks within the frame, which we ignore. 
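A minimal sketch of this landmark selection and normalization is given below. It is illustrative only: the index subsets are placeholders for the sets described above, the exact missing-landmark fill value is an assumption, and the frame-rate reduction is handled separately.

```python
import numpy as np

def frames_to_features(left_hand, right_hand, pose, face,
                       pose_idx, face_idx, missing_value=-10.0):
    """Each input is a (T, N_i, 3) array of x, y, z landmarks, with NaN where
    MediaPipe returned no landmark. pose_idx / face_idx are the chosen subsets
    (6 pose and 37 face indices); both hands keep all 21 points."""
    pts = np.concatenate(
        [left_hand, right_hand, pose[:, pose_idx], face[:, face_idx]], axis=1)  # (T, 85, 3)

    # Scale to a unit bounding box over the whole clip, per coordinate axis.
    lo = np.nanmin(pts, axis=(0, 1))
    hi = np.nanmax(pts, axis=(0, 1))
    pts = (pts - lo) / np.maximum(hi - lo, 1e-6)

    # Landmarks absent in a frame become a large negative value.
    pts = np.where(np.isnan(pts), missing_value, pts)

    return pts.reshape(pts.shape[0], -1)  # (T, 255) frame vectors
```

The resulting per-frame vectors can then be subsampled to half the original frame rate, as described next.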
To reduce sequence length, we discard every second frame. The final preprocessed input is therefore a half-frame rate sequence of 255-dimensional landmark vectors. Note that this half frame rate may vary from 7.5 to 30fps depending on the original video's frame rate, though most end up at 12 to 15 fps. Figure 3: A selection of high-level topics, with the number of YouTube-ASL videos automatically tagged as related to them. Note that a single video can be tagged with more than one topic. ### Model Our model is a slightly modified version of T5 [32], which is an encoder-decoder Transformer [39] that has been trained on web-crawled English text. Rather than embed text tokens using a vocabulary of learned embeddings, we embed each 255-dimensional landmark frame into the encoder using a learned linear projection layer. Otherwise, our architecture is identical to T5.1.1-Base. We set the encoder context window to 256 tokens (frames) and the decoder context window to 128 tokens, which accommodate the training examples after halving the input frame rate and encoding the target text with T5's SentencePiece vocabulary [22]. ## 5 Experiments We choose not to provide train, validation, and test splits for YouTube-ASL. Because YouTube videos may be deleted over time, the validation and test splits could not serve as a stable benchmark. We instead evaluate on How2Sign [13], a studio-recorded dataset released under CC BY-NC 4.0 consisting of "How To" instructional narratives translated from English into ASL. This also allows us to integrate trends towards more robust evaluation from speech and text modeling [31, 15, 10], where models trained on large web corpora are evaluated both zero-shot and finetuned on independently constructed benchmarks. Practices for constructing test sets in prior sign language dataset works are mixed. For example, OpenASL [37] and AfriSign [17] construct their test sets by randomly splitting at the sentence level; SP-10 [40] does the same but with multiway translations identified as a single sentence. How2Sign [13] samples document-level narratives rather than individual sentences, but most signers are shared between the train and test sets, and some narratives are present in both the train and test sets, translated by different signers. BOBSL [2] invests substantial effort into creating signer-independent, topic-balanced splits; this is perhaps why its translation baseline scores only 1.00 BLEU despite the dataset's size. Zero-shot evaluation lets us sidestep these issues and get a better sense of the model's quality for real use. ### Setup We ablate across four different training schedules: * **H2S**: We train only on How2Sign, not YouTube-ASL, for a like-for-like comparison with prior methods. * **YT-ASL**: We train only on YouTube-ASL, and evaluate on How2Sign zero-shot. * **YT-ASL + H2S**: We train on a mixture of How2Sign and YouTube-ASL, mixed in proportion to the size of the datasets. * **YT-ASL \(\rightarrow\) H2S**: We train on YouTube-ASL, then finetune on How2Sign. Figure 4: Overview of our model pipeline. Starting from an ASL video clip, we use MediaPipe Holistic to compute 3D landmarks for the face, hands, and body of the subject. We then discard irrelevant landmarks and normalize the remainder. These are concatenated and embedded by a linear projection layer into T5.1.1-Base, which then decodes the English translation. The blue components (Linear projection and T5) are the trainable parameters. 
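As an illustration of the architecture sketched in Figure 4, the following hedged example wires the learned linear projection into a T5 encoder-decoder. It uses the Hugging Face PyTorch port of T5.1.1-Base purely for concreteness; this is an assumption for the sketch, not the training code used for the experiments.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class LandmarkT5(nn.Module):
    """Landmark frames -> English text, following the Figure 4 pipeline."""

    def __init__(self, model_name="google/t5-v1_1-base", landmark_dim=255):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        # Learned linear projection that replaces the token-embedding lookup.
        self.project = nn.Linear(landmark_dim, self.t5.config.d_model)

    def forward(self, landmarks, landmark_mask, labels):
        # landmarks: (batch, <=256 frames, 255); labels: target token ids (<=128).
        inputs_embeds = self.project(landmarks)
        return self.t5(inputs_embeds=inputs_embeds,
                       attention_mask=landmark_mask,
                       labels=labels)

# Toy forward pass with random landmark features (placeholders, not real data).
model = LandmarkT5()
frames = torch.randn(2, 256, 255)
mask = torch.ones(2, 256, dtype=torch.long)
labels = torch.randint(0, model.t5.config.vocab_size, (2, 128))
loss = model(frames, mask, labels).loss
```

Decoding then proceeds with standard beam search over the T5 decoder.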
We also ablate the effect of pretraining on English text by comparing models trained from scratch using the T5.1.1-Base architecture, vs. finetuned from the T5.1.1-Base pretrained checkpoint. We train with a batch size of 128 and learning rate of 0.001 with Adafactor [34]; other hyperparameters are the T5X defaults. For models trained solely on How2Sign data, we train for 20,000 steps. For models trained on YouTube-ASL (including with How2Sign mixed in), we train for 200,000 steps. When finetuning on How2Sign after training on YouTube-ASL, we finetune for an additional 5,000 steps. Each 1,000 steps takes approximately 0.25 TPUv4-hours. Following prior work, we present BLEU [28] and BLEURT [33] scores. BLEU scores are computed using SacreBLEU [29] version 2, with all default options. BLEURT scores are computed using checkpoint BLEURT-20 [30, 14]. We decode using beam search with a beam width of 5. ### Results See Table 3 for metrics comparing our models to prior works on How2Sign [3, 24, 38]. The best results come from training on YouTube-ASL from a pretrained checkpoint, then finetuning on How2Sign, which achieves 12.39 BLEU vs. the state of the art of 8.03 BLEU [38]. The base model achieves 3.95 BLEU zero-shot, which is nontrivial but substantially worse than the finetuned score. Factors that could contribute to this gap include train/test leakage of signers and narratives, How2Sign's narrow domain, and the extra ~10% training data it represents. Results are substantially worse when training from scratch, which suggests that T5's English pretraining gives the model a better initialization, as De Coster et al. [11] found for frozen pretrained language models. Results are abysmal when trained without YouTube-ASL. The most direct comparison of our approach to prior work is T5 trained from scratch on How2Sign only, which reaches just 0.86 BLEU, despite training on the same data as Tarres et al. [38]'s 8.03 BLEU. This might be explained by their use of a pretrained video encoder and various decisions they made to optimize for small amounts of data (smaller network, more text normalization, careful hyperparameter sweep), whereas we used a less tuned configuration that was intended for larger datasets. See Table 4 for qualitative examples of the translations produced by our best finetuned and zero-shot models, on sentences sampled from How2Sign by Tarres et al. [38]. The translations capture elements of the reference translation but are clearly not yet of usable quality. The zero-shot predictions hew less closely to the references, but the errors usually make sense in light of the sign language input. For example, in (1), the sign used to mean "defense" also means "barrier". ## 6 Conclusion In this paper, we presented YouTube-ASL, a new, publicly available parallel corpus for American Sign Language and English that is ~3x the size and has ~10x as many unique signers as the largest \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Approach & Training Schedule & BLEU-1 & BLEU-2 & BLEU-3 & BLEU & BLEURT \\ \hline Álvarez et al. 
[3] & H2S & 17.40 & 7.69 & 3.97 & 2.21 & - \\ GloFE-VN [24] & H2S & 14.94 & 7.27 & 3.93 & 2.24 & 31.65 \\ Tarres et al. [38] & H2S & 34.01 & 19.30 & 12.18 & 8.03 & - \\ \hline \multirow{4}{*}{T5 (from scratch)} & H2S & 13.92 & 4.69 & 1.82 & 0.86 & 30.65 \\ & YT-ASL & 14.53 & 5.47 & 2.61 & 1.41 & 29.55 \\ & YT-ASL + H2S & 28.60 & 14.56 & 8.68 & 5.60 & 37.72 \\ & YT-ASL \(\rightarrow\) H2S & 28.38 & 15.41 & 9.55 & 6.26 & 39.40 \\ \hline \multirow{4}{*}{T5 (pretrained)} & H2S & 14.96 & 5.11 & 2.26 & 1.22 & 29.98 \\ & YT-ASL & 20.93 & 10.35 & 6.14 & 3.95 & 34.98 \\ & YT-ASL + H2S & 36.35 & 23.00 & 16.13 & 11.89 & 44.78 \\ & YT-ASL \(\rightarrow\) H2S & **37.82** & **24.13** & **16.92** & **12.39** & **46.63** \\ \hline \hline \end{tabular} \end{table} Table 3: Metrics for ASL to English translation on How2Sign. Our models either train from scratch or finetune a pretrained T5 checkpoint, and are trained on How2Sign (H2S) only, YouTube-ASL (YT-ASL) only, a mixture of H2S and YT-ASL, or YT-ASL and then finetuned on H2S. prior ASL dataset. Our key improvement over prior work is that we used automatic tagging followed by human filtering to increase mining recall without harming precision. We demonstrated the value of this data with a simple baseline built from off-the-shelf components (MediaPipe Holistic and T5) that achieves a new finetuned state of the art in ASL to English translation on How2Sign, 12.39 BLEU. We also reported a zero-shot score of 3.95 BLEU, a first for sign language translation. We hope that YouTube-ASL will be immediately useful for research on methods for sign language translation and caption alignment, as well as tools for automatic annotation/filtering of new sign language datasets. Because YouTube-ASL has so much signer variety, including across dialect and skill level, it may be less useful for generation than recognition tasks. While our baseline improves upon prior work, even the finetuned translations are subjectively low-quality and are not yet useful in the real world. We hope that more refined modeling approaches will provide better results with the same data, but despite our and prior efforts, ASL is still a low-resource language by modern standards [19]--let alone the many other sign languages of the world, most of which are even less resourced. Future work may look to address this by mining broader datasets with other kinds of supervision, and as model quality improves, it should perform more comprehensive evaluations to understand differences across domains, dialects, levels of fluency, signer appearance, and other such factors. ## 7 Ethical Considerations Sign language datasets pose privacy challenges, because the signer's appearance (body, facial expressions), which is personally identifying, is a vehicle for the language itself. Video anonymization techniques are not yet mature enough to be useful in this regard. Our corpus is composed of sign language content that uploaders made publicly visible on YouTube, and we release only video IDs so that changes to the underlying videos are automatically reflected in the corpus. While the corpus covers a broader variety of channels than prior works, this does not mean it is necessarily representative of the signing population--or even if it were representative, that models trained on it would work equally well for everyone. 
We train our models on reduced poses as a form of anonymization, but this is not suitable for all modeling approaches and may harm model quality. Until sign language translation models are closer to usable quality, there is little risk of societal harm, except that individuals or organizations mistakenly rely on models that are inadequate. As we approach that point, sign language processing will adopt the risks of natural language processing in general, but with a great potential to improve accessibility for Deaf/Hard of Hearing people. \begin{table} \begin{tabular}{l l l} \hline \hline & **Reference** & And that’s a great vital point technique for women’s self defense. \\ & Tarres et al. & It’s really a great point for women’s self defense. \\ & Ours (zero-shot) & It’s really great, especially for women who are facing barriers. \\ & Ours (finetuned) & It’s really great for women’s self defense. \\ \hline & **Reference** & In this clip I’m going to show you how to tape your cables down. \\ (2) & Tarres et al. & In this clip I’m going to show you how to improve push ups. \\ & Ours (zero-shot) & This video will show how to use the code online. \\ & Ours (finetuned) & In this clip we’re going to show you how to cut a piece of clay. \\ \hline & **Reference** & In this segment we’re going to talk about how to load your still for distillation \\ & of lavender essential oil. & \\ (3) & Tarres et al. & Ok, in this clip, we’re going to talk about how to fold the ink for the lid of \\ & Ours (zero-shot) & This video will discuss how to submit a digital form for the survey. \\ & Ours (finetuned) & In this clip we’re going to talk about how to feed a set of baiting lizards for \\ & a lava field oil. \\ \hline \hline \end{tabular} \end{table} Table 4: Qualitative examples from our best finetuned and zero-shot models, on sentences sampled from How2Sign by Tarres et al. [38]. See Table 5 in the Appendix for the complete set of examples.
2302.07091
An Innovative Method for Measuring Instrument Data Acquisition using Image Processing Techniques
Measuring instruments are vital for obtaining accurate data in various fields, including scientific research, engineering, and manufacturing. However, data acquisition can be challenging, particularly when direct cable connections are not feasible or difficult to establish. In this paper, we propose a novel method for measuring instrument data acquisition utilizing a camera to capture the instrument display and image processing techniques to extract the measured values. Our method combines computer vision and machine learning techniques to recognize and extract numerical values from the instrument display. We demonstrate the effectiveness and accuracy of this approach by applying it to capture the magnetic field of a permanent magnet using a gauss meter and webcam. Our results indicate that the proposed method offers a practical and accurate solution for data acquisition in cases where direct cable connections are not possible. This method has potential applications in scientific research, engineering, and manufacturing industries.
David Shulman
2023-02-13T16:29:05Z
http://arxiv.org/abs/2302.07091v3
# An Innovative Method for Measuring Instrument Data Acquisition using Image Processing Techniques ###### Abstract Measuring instruments are vital for obtaining accurate data in various fields, including scientific research, engineering, and manufacturing. However, data acquisition can be challenging, particularly when direct cable connections are not feasible or difficult to establish. In this paper, we propose a novel method for measuring instrument data acquisition utilizing a camera to capture the instrument display and image processing techniques to extract the measured values. Our method combines computer vision and machine learning techniques to recognize and extract numerical values from the instrument display. We demonstrate the effectiveness and accuracy of this approach by applying it to capture the magnetic field of a permanent magnet using a gauss meter and webcam. Our results indicate that the proposed method offers a practical and accurate solution for data acquisition in cases where direct cable connections are not possible. This method has potential applications in scientific research, engineering, and manufacturing industries. I mage-based measurement; Measuring instruments; Image processing; Non-contact measurement; ## 1 Introduction Measuring instruments play a crucial role in various fields such as scientific research, engineering, and manufacturing [1]. They provide accurate data, enabling researchers and professionals to better understand phenomena and optimize processes. However, data acquisition from these instruments can be challenging, especially when direct cable connections are not possible or difficult to establish. This has led to an increasing interest in developing alternative data acquisition methods that are both accurate and practical. One of the most promising technologies for facilitating data acquisition is image processing, which has been successfully applied in numerous fields including remote sensing [2], quality control [3], and object tracking [4]. Image processing techniques enable the extraction of valuable information from images, which can be used to gather data without the need for a direct connection to the instrument. One possible solution is using image processing techniques to extract data from the display of a measuring instrument. This approach has been explored in various contexts, such as extracting text from images and video frames [5] and processing color images using Tesseract and OpenCV [6]. However, these existing methods may not be directly applicable or optimized for measuring instrument displays, which often have specific requirements in terms of accuracy and ease of use. Some another previous work has explored using image processing techniques for data acquisition. For instance, Wang et al. [7] demonstrated the use of image processing for acquiring data from an analog multimeter, while Arifin et al. [8] applied image processing to digital multimeters for similar purposes. However, these works focused on specific instruments and did not offer a generalized approach that can be adapted to various types of measuring instruments. In this paper, we propose a novel method for measuring instrument data acquisition using a camera to capture the instrument display and image processing techniques to extract the measured values. Our method is demonstrated by applying it to capture the magnetic field of a permanent magnet using a gauss meter and webcam. 
The image processing process involves Python libraries for video processing, including the OpenCV library for contour detection and thresholding. The processed data is then saved to a text file for further analysis. The advantages of using video processing to capture measured values from a measuring instrument include: 1. Flexibility: This method can be used with a wide range of measuring devices, making it a versatile solution for data acquisition. 2. Ease of Use: No special equipment or cables are required to connect the measuring device to the computer, making it simple to use and implement. 3. Cost-Effective: This method eliminates the need for expensive cabling and special equipment, making it a cost-effective solution for data acquisition. However, there are also some limitations to using video processing to acquire data from measuring instruments: 1. Accuracy: The accuracy of the data obtained through video processing can be affected by factors such as camera resolution and lighting conditions. 2. Time Delay: There may be a delay between the time the measurement is made and the time the data is captured, which can impact the accuracy of the data. 3. Data Processing: Video processing can be time-consuming and complex, requiring specialized software and technical expertise to extract meaningful data from the captured video. 4. Environmental Conditions: Environmental conditions such as lighting, ambient temperature, and camera angle can impact the accuracy and reliability of the data obtained through video processing. In this research, we propose an innovative method for measuring instrument data acquisition that uses a camera to capture the instrument display and process the captured video to extract the measured values [9]. Specifically, we applied this method to capture the magnetic field of a permanent magnet using a gauss meter and webcam. By using Python libraries for video processing, we were able to extract the measured values from the video and analyze them on our computer. Our approach offers a convenient and efficient way to obtain measurement data without the need for special equipment, making it suitable for a variety of applications. We present a detailed description of the proposed method, along with experimental results demonstrating its effectiveness and accuracy in capturing the magnetic field of a permanent magnet. Our research contributes to the development of practical solutions for data acquisition from measuring instruments, with potential benefits for a wide range of fields, including scientific research, engineering, and manufacturing [9, 10, 11, 12]. ## 2 Experimental Set-Up and Image Processing Process ### Experimental Set-Up The image acquisition setup consists of a gauss meter, a permanent magnet, a webcam, and a computer. The gauss meter, which measures the magnetic field, has a digital display that shows the measured values. The permanent magnet is positioned such that the gauss meter probe is exposed to the magnetic field. The webcam is mounted on a stable platform, facing the gauss meter's display, ensuring that the display is fully visible and in focus. For clarity on the experimental set-up, please refer to the details provided in the manuscript referenced in Ref. [9]. It is essential to maintain a consistent lighting environment to minimize variations in the captured images due to changes in illumination. We used artificial lighting to create a controlled environment and reduce the impact of shadows or reflections on the display. 
Moreover, the distance between the webcam and the gauss meter display was kept constant during the experiments to minimize variations in the captured images due to changes in perspective. We used a black background behind the gauss meter display to enhance the contrast of the image and improve the accuracy of the image processing. The webcam was connected to our computer and was used to capture a continuous video stream of the gauss meter display, see Fig. 1. Figure 1: The photograph of the experimental setup. ### Image Processing Techniques The acquired images were processed using Python libraries, primarily the OpenCV library [13]. The image processing pipeline involves several steps, as follows: 1. Pre-processing: The captured images are first converted to grayscale, and a Gaussian blur is applied to reduce noise and smooth the image. 2. Thresholding: Adaptive thresholding is employed to create a binary image, which separates the display digits from the background. 3. Contour detection: The contours of the digits are detected using the findContours function in OpenCV. The contours are then filtered based on their area and aspect ratio to remove any noise or artifacts. 4. Digit recognition: The recognized contours are sorted based on their position in the image, and a pre-trained machine learning model or Optical Character Recognition (OCR) library, such as Tesseract [14, 15], is used to recognize the digits. #### 2.2.1 Pre-processing The first step in the image processing pipeline is pre-processing, which prepares the captured images for subsequent processing stages. The pre-processing stage involves two primary operations: grayscale conversion and Gaussian blur. 1. **Grayscale conversion**: The captured images are initially converted to grayscale using a weighted sum of the color channels to emphasize the display digits. Grayscale conversion simplifies the image by reducing its dimensionality, allowing for more efficient processing and analysis. The conversion formula is given by: \[Y=0.299R+0.587G+0.114B\] (1) where \(Y\) is the grayscale intensity, and \(R\), \(G\), and \(B\) are the red, green, and blue color channel intensities, respectively. This weighted sum method is based on the human eye's sensitivity to different color wavelengths and results in a more perceptually uniform grayscale representation. 2. **Gaussian blur**: After converting the images to grayscale, a Gaussian blur is applied to reduce noise and smooth the image. The Gaussian blur works by convolving the image with a Gaussian function, which has the effect of averaging pixel intensities within a local neighborhood. The Gaussian function is defined as: \[G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}\] (2) where \(G(x,y)\) is the Gaussian function, and \(\sigma\) is the standard deviation of the Gaussian distribution, which controls the amount of smoothing applied to the image. By applying the Gaussian blur, high-frequency noise is suppressed, and the relevant features, such as the display digits, are emphasized, facilitating the extraction of these features in subsequent processing steps. The pre-processing stage serves as a crucial foundation for the image processing pipeline, as it ensures that the captured images are in a suitable format and have the necessary characteristics for accurate and efficient data extraction. By converting the images to grayscale and applying a Gaussian blur, the pipeline effectively simplifies the images and suppresses noise, allowing for more robust and reliable processing in the subsequent stages. 
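As a rough consolidation of the four steps enumerated above, the sketch below strings them together with OpenCV and Tesseract. It is a minimal illustration only: the blur kernel, adaptive-threshold parameters, and contour area/aspect-ratio limits are assumed values rather than the settings used in the experiments, and the custom digit-to-value conversion is omitted. The individual stages are detailed in the following subsections.

```python
import cv2
import pytesseract

def read_display(frame):
    # 1. Pre-processing: grayscale conversion (cv2.cvtColor applies the Eq. 1
    #    weights) followed by a Gaussian blur (Eq. 2).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # 2. Thresholding: adaptive threshold producing a binary image with the
    #    display digits as foreground.
    binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 5)

    # 3. Contour detection: find candidate digit contours, filter by area and
    #    aspect ratio, then sort left-to-right (OpenCV 4.x return signature).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 100 and 0.2 < w / float(h) < 1.2:   # crude digit-shaped filter
            boxes.append((x, y, w, h))
    boxes.sort(key=lambda b: b[0])

    # 4. Digit recognition: OCR on each cropped digit (a CNN classifier could be
    #    substituted here).
    digits = []
    for x, y, w, h in boxes:
        roi = binary[y:y + h, x:x + w]
        txt = pytesseract.image_to_string(
            roi, config="--psm 10 -c tessedit_char_whitelist=0123456789").strip()
        if txt:
            digits.append(txt)
    return "".join(digits)
```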
By converting the images to grayscale and applying a Gaussian blur, the pipeline effectively simplifies the images and suppresses noise, allowing for more robust and reliable processing in the subsequent stages. #### 2.2.2 Thresholding Thresholding is an essential step in the image processing pipeline, as it allows for the segmentation of the display digits from the background, simplifying the image and enabling easier extraction of the relevant information. In this stage, a binary image is created by assigning pixel values to either the foreground (digits) or the background based on a specified threshold value. 1. **Adaptive thresholding**: In contrast to global thresholding techniques, which apply a single threshold value to the entire image, adaptive thresholding calculates a threshold value for each pixel based on its surrounding neighborhood. This approach is particularly effective in situations where lighting conditions are uneven, which can lead to poor segmentation results using global thresholding methods. Adaptive thresholding can be performed using various algorithms, such as the mean or Gaussian methods. The general formula for adaptive thresholding is given by: \[T(x,y)=(1-k)*M(x,y)+k*mean(A(x,y))\] (3) where \(T(x,y)\) is the threshold for pixel \((x,y)\), \(M(x,y)\) is the local mean of pixel \((x,y)\), \(k\) is a user-defined constant, and \(A(x,y)\) is the local neighborhood of pixel \((x,y)\). The user-defined constant \(k\) helps to control the trade-off between preserving the display digits' features and suppressing noise or other artifacts in the image. 2. **Binary image creation**: Once the adaptive thresholding process is complete, a binary image is created by assigning pixel values to either the foreground (digits) or the background. This is achieved by comparing the intensity value of each pixel to its calculated threshold value. If the pixel intensity is greater than the threshold, it is assigned to the foreground, otherwise, it is assigned to the background. The resulting binary image highlights the display digits' contours, providing a simplified representation of the original image, which is suitable for further processing and analysis. The thresholding stage is a critical component of the image processing pipeline, as it enables the segmentation of the relevant features (display digits) from the background. By employing adaptive thresholding techniques, the pipeline can effectively handle varying lighting conditions and other challenges, ensuring accurate and reliable data extraction from the instrument display. #### 2.2.3 Contour Detection Contour detection is a crucial step in the image processing pipeline, as it enables the identification and extraction of the display digits from the binary image obtained in the thresholding stage. This process involves detecting continuous curves in the image that represent the boundaries between the foreground (digits) and background regions. 1. **Edge detection**: The first step in contour detection is identifying the edges present in the binary image. Edge detection methods, such as the Canny edge detection algorithm or the Sobel operator, can be employed to highlight the boundaries between the foreground and background regions. These methods work by identifying areas of the image with rapid changes in intensity, which correspond to the edges of the display digits. 2. **Finding contours**: Once the edges are detected, the findContours function in OpenCV is used to identify the contours in the binary image. 
This function works by following the continuous curves that represent the boundaries between the foreground and background regions. The resulting contours are stored as a list of points, which can then be processed and analyzed further. 3. **Filtering contours**: The detected contours may include noise or artifacts that are not relevant to the display digits. To ensure that only the relevant digit contours are retained for further processing, the contours are filtered based on their properties, such as area and aspect ratio. For example, contours with an area below a certain threshold or an aspect ratio outside the expected range for display digits can be discarded. This filtering process helps to improve the accuracy and reliability of the digit recognition stage by eliminating extraneous contours that could adversely affect the recognition process. 4. **Sorting contours**: After filtering the contours, it is essential to sort them based on their position in the image to preserve the correct ordering of the digits. This can be achieved by sorting the contours according to their x-coordinate values (for horizontal arrangement of digits) or y-coordinate values (for vertical arrangement of digits). Sorting the contours ensures that the digit recognition stage can correctly interpret the numerical values displayed on the instrument. The contour detection stage serves as the foundation for extracting numerical information from the instrument display. By identifying and filtering the relevant digit contours and sorting them based on their position in the image, the image processing pipeline is well-prepared for the subsequent digit recognition stage, which is responsible for converting the visual information into a numerical format that can be stored and analyzed. #### 2.2.4 Digit Recognition Digit recognition is a critical stage in the image processing pipeline, as it is responsible for converting the visual information represented by the digit contours into numerical data that can be stored and analyzed. This stage involves the application of machine learning algorithms or Optical Character Recognition (OCR) libraries to recognize the individual digits represented by the filtered and sorted contours. 1. **Digit segmentation**: The first step in the digit recognition process is to segment the individual digits from the image. Using the sorted contours obtained in the contour detection stage, bounding boxes can be created around each digit. These bounding boxes serve as the input for the digit recognition algorithm or OCR library, which will process the digit images individually. 2. **Feature extraction**: Before applying the digit recognition algorithm, it may be necessary to extract relevant features from the digit images. This can involve techniques such as resizing the digit images to a standard size, normalizing the pixel intensities, or applying additional image processing operations, such as edge detection or morphological transformations. The goal of this step is to prepare the digit images for processing by the recognition algorithm or OCR library, ensuring the best possible recognition accuracy. 3. **Recognition algorithm**: Various machine learning algorithms or OCR libraries can be employed for digit recognition. For example, a pre-trained Convolutional Neural Network (CNN) can be used to recognize the digits based on their visual features. Alternatively, an OCR library such as Tesseract [14, 15] can be applied to perform character recognition on the digit images. 
The choice of recognition algorithm will depend on factors such as the desired accuracy, computational resources, and available training data. 4. **Post-processing**: After the recognition algorithm has processed the digit images, additional post-processing steps may be necessary to improve the accuracy and reliability of the extracted data. For example, confidence scores or probabilities can be calculated for each recognized digit, and low-confidence predictions can be flagged for manual review or correction. Additionally, temporal filtering techniques can be applied to the recognized digit sequences to smooth out fluctuations caused by noise or minor variations in the captured images, as discussed in the previous subsection on image processing techniques. The digit recognition stage is essential for converting the visual information obtained from the instrument display into numerical data that can be stored and analyzed. By combining computer vision techniques and machine learning algorithms, this stage provides a robust and flexible solution for recognizing the individual digits represented by the filtered and sorted contours, ensuring accurate and reliable data extraction from the instrument display. #### 2.2.5 Post-processing Post-processing is the final stage of the image processing pipeline and plays a vital role in refining and validating the numerical data extracted from the instrument display. This stage involves a series of operations aimed at improving the accuracy and reliability of the extracted data, as well as preparing the data for storage and further analysis. 1. **Data validation**: The first step in post-processing is to validate the numerical data obtained from the digit recognition stage. This can involve checking the data for consistency with the expected format, range, and units of the measured values. Additionally, outlier detection methods can be applied to identify any unusual or unexpected values that may indicate errors or anomalies in the data extraction process. Data validation helps ensure that the extracted data is accurate and reliable, reducing the potential for errors in subsequent analyses. 2. **Temporal filtering**: In cases where the data acquisition process involves capturing a sequence of images over time, temporal filtering techniques can be applied to the extracted numerical data to smooth out fluctuations caused by noise or minor variations in the captured images. This can involve techniques such as moving average filters, median filters, or more advanced methods such as Kalman filters. Temporal filtering helps improve the stability and reliability of the extracted data, making it more suitable for further analysis and interpretation. 3. **Data formatting**: Once the extracted data has been validated and filtered, it is necessary to format the data for storage and further analysis. This can involve converting the numerical data into a structured format, such as a text file, spreadsheet, or database, and adding relevant metadata, such as timestamps, units, or instrument settings. Data formatting ensures that the extracted data is organized and accessible for subsequent processing and analysis. 4. **Data visualization**: The final step in the post-processing stage is to visualize the extracted data, which can provide valuable insights into the measured values and their relationships. 
Data visualization techniques can include creating plots, charts, or graphs to display the extracted data, as well as generating summary statistics or other descriptive measures. Visualizing the data can help identify trends, patterns, or anomalies in the measured values, facilitating a deeper understanding of the instrument data and its implications for the research or application in question. The post-processing stage is a crucial component of the image processing pipeline, as it ensures that the extracted numerical data is accurate, reliable, and suitable for further analysis. By validating and refining the data, as well as formatting and visualizing the results, the post-processing stage provides a comprehensive solution for extracting valuable insights from the instrument display, supporting the broader goals of scientific research, engineering, and manufacturing applications. The video stream was processed using Python libraries for image processing. We used the OpenCV library to detect the gauss meter display and crop the image to isolate the region of interest. The image was then converted to grayscale to simplify the processing and enhance the contrast. To extract the measured values from the image, we used a combination of thresholding and contour detection. We first applied a binary threshold to the image to separate the background from the foreground. We then used contour detection to identify the contours of the digits on the display, see Fig. 2. Once the contours were identified, we used a custom algorithm to extract the digit values and convert them to the appropriate units. The processed data was then saved to a text file for further analysis. Overall, our image processing pipeline was designed to be robust and accurate, while also being computationally efficient. The pipeline was able to extract the measured values from the gauss meter display with high accuracy, and the results were consistent with the expected values based on theoretical calculations.

Figure 2: Image processing.

## 3 Results and Discussion ### Results We conducted a series of experiments to test the effectiveness and accuracy of our proposed method for measuring instrument data acquisition. Specifically, we used a gauss meter to capture the magnetic field of a permanent magnet, and used a webcam to capture the display of the gauss meter. Using our image processing pipeline, we were able to extract the magnetic field values from the captured video. The processed data was compared to theoretical calculations, and was found to be in good agreement with the expected values. To access the detailed results of the experiment, we refer the reader to the manuscript cited in Ref. [9]. This publication provides a thorough analysis and interpretation of the experimental data, and offers valuable insights into the findings of the study. ### Discussion Our results demonstrate the effectiveness and accuracy of our proposed method for measuring instrument data acquisition. The use of a camera to capture the instrument display and image processing techniques to extract the measured values offers a practical and convenient solution for data acquisition in cases where a direct cable connection is not possible or is difficult to establish. Our image processing pipeline was able to extract the measured values from the gauss meter display with high accuracy, and the results were consistent with the expected values based on theoretical calculations.
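As a rough illustration of the pipeline described in Section 2.2, the sketch below chains the thresholding, contour-detection, filtering and sorting steps using standard OpenCV calls; the threshold value, the size and aspect-ratio limits, and the `recognise_digit` helper are illustrative placeholders rather than the exact settings of our implementation.

```python
import cv2

def extract_digit_boxes(frame, min_area=100, aspect_range=(0.2, 1.2)):
    """Locate candidate digit regions on a seven-segment display image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # A binary threshold separates the bright digits from the background.
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Keep only contours whose area and shape are plausible for a digit.
        if cv2.contourArea(c) >= min_area and aspect_range[0] <= aspect <= aspect_range[1]:
            boxes.append((x, y, w, h))
    # Sort left to right so the digits are read in display order.
    return sorted(boxes, key=lambda b: b[0]), binary

def read_display(frame, recognise_digit):
    """Assemble the displayed number from per-digit predictions."""
    boxes, binary = extract_digit_boxes(frame)
    digits = [recognise_digit(binary[y:y + h, x:x + w]) for (x, y, w, h) in boxes]
    return "".join(str(d) for d in digits)
```

Here `recognise_digit` stands in for whichever recogniser is chosen in Section 2.2.4, for example a small CNN or an OCR library such as Tesseract.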
The process was designed to be computationally efficient, and could be easily applied to other types of measuring instruments. Our proposed method has the potential to be applied to a wide range of fields, including scientific research, engineering, and manufacturing. It offers a practical solution for data acquisition in situations where traditional methods may not be feasible, and could lead to more efficient and accurate measurement processes. Future work could explore the use of more advanced image processing techniques to further improve the accuracy and efficiency of the method. Additionally, the method could be applied to other types of measuring instruments and in other experimental settings to further validate its effectiveness and accuracy. ## 4 Conclusion In this study, we presented an innovative method for measuring instrument data acquisition using image processing techniques. By employing a camera to capture the instrument display and a series of image processing steps, we demonstrated that our proposed method can effectively and accurately extract numerical data from the display without the need for a direct cable connection. This approach offers a practical solution for cases where establishing a direct connection is not possible or is difficult to achieve and has potential applications in scientific research, engineering, and manufacturing. The image processing pipeline involves several stages, including pre-processing, thresholding, contour detection, digit recognition, and post-processing. Each stage plays a critical role in extracting the relevant features from the captured images, recognizing the individual display digits, and refining the numerical data for storage and analysis. By leveraging advanced computer vision techniques and machine learning algorithms, our proposed method provides a robust and flexible solution for instrument data acquisition. Our results demonstrate that the proposed method is effective and accurate, with potential applications in a wide range of fields and industries. Future work could explore the integration of additional sensors or imaging modalities to enhance the data acquisition process or the development of more advanced image processing and recognition algorithms to improve the accuracy and reliability of the extracted data. Moreover, the method can be adapted and extended to accommodate a variety of measuring instruments, further expanding its potential impact and utility. ## Author Declarations ### Conflict of Interest The author has no conflicts to disclose.
2303.01911
Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM
The NLP community recently saw the release of a new large open-access multilingual language model, BLOOM (BigScience et al., 2022) covering 46 languages. We focus on BLOOM's multilingual ability by evaluating its machine translation performance across several datasets (WMT, Flores-101 and DiaBLa) and language pairs (high- and low-resourced). Our results show that 0-shot performance suffers from overgeneration and generating in the wrong language, but this is greatly improved in the few-shot setting, with very good results for a number of language pairs. We study several aspects including prompt design, model sizes, cross-lingual transfer and the use of discursive context.
Rachel Bawden, François Yvon
2023-03-03T13:23:42Z
http://arxiv.org/abs/2303.01911v2
# Investigating the Translation Performance of a Large Multilingual Language Model: the Case of Bloom ###### Abstract The NLP community recently saw the release of a new large open-access multilingual language model, Bloom(BigScience et al., 2022) covering 46 languages. We focus on Bloom's multilingual ability by evaluating its machine translation performance across several datasets (WMT, Flores-101 and DiaBLa) and language pairs (high- and low-resourced). Our results show that 0-shot performance suffers from overgeneration and generating in the wrong language, but this is greatly improved in the few-shot setting, with very good results for a number of language pairs. We study several aspects including prompt design, model sizes, cross-lingual transfer and the use of discursive context. ## 1 Introduction Large language models (LLMs) trained at scale with simple objectives have been found to achieve results that match dedicated systems on numerous NLP tasks (Radford et al., 2019), as long as tasks are formulated as text generation though "prompting" (Liu et al., 2023). LLMs' multi-task performance can even be improved with "instruction" fine-tuning (Sanh et al., 2022; Muennighoff et al., 2022), few-shot priming, and better strategies to select or learn prompts (Petroni et al., 2019; Shin et al., 2020; Schick and Schutze, 2021; Lester et al., 2021; Wei et al., 2022). In multilingual settings, their performance on machine translation (MT) tasks, as measured by automatic scores, is often close to state of the art, even when mostly trained on monolingual data (Brown et al., 2020). Moreover, prompting-based MT offers the prospect of better control of outputs, e.g. in terms of quality, style and dialect (Garcia and Firat, 2022). However, these abilities remain poorly understood, as LLM analyses primarily focus on their multitask rather than multilingual ability (see however (Vilar et al., 2022; Zhang et al., 2023; Moslem et al., 2023), which we discuss in Section 2). In this work, we focus on the MT performance of Bloom(BigScience et al., 2022), a (family of) open-access multilingual LLM(s), designed and trained by the collaborative BigScience project.1 Our main aims are to (i) evaluate Bloom's zero- and multi-shot behaviour, (ii) study the effect of prompt design, (iii) evaluate a diverse set of language pairs and (iv) assess its ability to use linguistic context. Our main conclusions, which extend those in (BigScience et al., 2022), are (i) 0-shot ability is blighted by overgeneration and generating in the wrong language, (ii) using few-shot improves both issues, with results much closer to state of the art across datasets and language pairs, (iii) there are clear transfer effects, with high scores for languages not officially seen in training, and successful transfer across language pairs via few-shot examples and (iv) although linguistic context does not lead to higher scores, there is evidence that Bloom's translations are influenced by it. We release our code and translation outputs.2 Footnote 1: [https://hf.co/bigscience/bloom](https://hf.co/bigscience/bloom) Footnote 2: [https://github.com/rbawden/mt-bigscience](https://github.com/rbawden/mt-bigscience) ## 2 Related work Since the early attempts to use language models (LMs) as multi-task learners (McCann et al., 2018), MT has been a task of choice to gauge LMs' multilingual ability. 
Results for the zero- and few-shot ability of LMs were discussed for both GPT-2 and GPT-3 (Radford et al., 2019; Brown et al., 2020), which is especially intriguing as they were trained primarily on monolingual (English) data. These results have since been confirmed for other monolingual LMs such as T5 (Raffel et al., 2020) and multilingual LMs such as XGLM (Lin et al., 2022), Palm (Chowdhery et al., 2022), and AlexaTM (Soltan et al., 2022). However, the focus has mainly been on global multi-task performance; often only a small part of the discussion is devoted to MT. Moreover, results are often only reported for a few well-resourced language pairs (e.g. English-French and English-German), and the scores reported (mostly BLEU) are hard to compare due to a non-systematic use of standardised evaluation protocols and metrics.3 Footnote 3: See the discussion at [http://blog.benjaminarie.com/2/comparing-uncomparable.html](http://blog.benjaminarie.com/2/comparing-uncomparable.html) of these differences, and an attempt to reconstruct consistent scores. There are however some in-depth analyses of the MT performance of LLMs, each focusing on a specific LM's performance in a true multilingual setting with respect to prompt design and the number of few-shot examples. For instance, Vilar et al. (2022) reevaluate the MT performance of the multilingual Palm (Chowdhery et al., 2022), focusing notably on the selection of few-shot examples. Consistent with our findings, they determine that prompt choice becomes unimportant in few-shot settings and that using few-shot examples increases performance with diminishing returns for \(k>5\) examples, using BLEURT and BLEU scores, as well as the results of a human evaluation. They find that the quality of few-shot examples has a large impact on performance. However, even with good prompts, Palm lags a couple of points behind state-of-the-art MT systems, especially when translating from English, notably due to adequacy problems. Zhang et al. (2023) focus on the evaluation of GLM-130B, a bilingual (Chinese and English) LLM (Zeng et al., 2022). Their main conclusions are also consistent with ours: (a) zero-shot performance varies greatly across different prompts, (b) increasing the number of shots from 0 to 20 yields consistent improvements in performance, again with variance across instructions, and (c) finding the best few-shot example selection policy is difficult. It seems that having good and long examples, for instance, may help, even though none of the criteria explored in this study seem to provide any systematic improvement. A last point worth mentioning is that prompting with monolingual data hurts performance, but that using pseudo-parallel data obtained with back-translation (Bojar and Tamchyna, 2011) is an effective workaround. Moslem et al. (2023) evaluate OpenAI's GPT-3 (Brown et al., 2020)4 with sampling-based decoding and a prompt resembling our own xglm-source+target prompt. They report strong zero-shot behaviour using multiple metrics, plus clear improvements with an increased number of shots for the well-resourced languages, less so for the only low-resource language in their lot (Kinyarwanda). The main novelty of this study is to use prompting as a vehicle to perform local adaptation and to ensure terminological consistency. For this, they use fuzzy matches from a translation memory as well as MT outputs to build their prompts, yielding results that outperform both their zero-shot system and their initial MT engine.
Additionally inserting terms and their translation in the instruction yields supplementary improvements. Footnote 4: Version: text-davinci-003 model. Finally note the preliminary evaluation of ChatGPT in Jiao et al. (2023), which reports interesting insights regarding the multilingual abilities of this model, as well as proposing innovative techniques to generate (artificial) prompts and to use pivoting in prompting. Similar to ours, this study considers multiple test domains such as news (WMT) and Wikipedia Flores. A more in-depth analysis of the same model can be found in Hendy et al. (2023), which confirms ChatGPT's strong translation abilities, at least for "well-resourced"5 language pairs. Document-level scores are also reported, as well as human evaluations and qualitative analyses. Footnote 5: A rather slippery concept in this context, as the content of the training data is not fully known and seems to mostly comprise English texts. Multilingual MT is also the subject of dedicated (monotask) architectures and training regimes. Originally introduced in Dong et al. (2015); Firat et al. (2016); Luong et al. (2016) with limited language coverage, the latest versions of these approaches are able to handle hundreds of languages, including very low-resource language pairs Fan et al. (2021); Bapna et al. (2022); Costa-jussa et al. (2022). Although we found that Bloom is able to match this performance, given sufficient training data, we also see that it still lags behind for many languages pairs that are under-represented in its training data. ## 3 Bloom Language Model Bloom is a large open-access multilingual model trained on 46 natural languages developed within the BigScience project BigScience et al. (2022). It is an auto-regressive language model designed to generate text to complete a user-entered text prefix, known as a prompt. It can be used for multiple tasks, including MT, question answering, etc. Bloom was trained on 1.6TB of text (of which 30% English), from various sources, although 38% of the data, known as the ROOTS corpus (Laurencon et al., 2022),6 is from Oscar web data (Ortiz Suarez et al., 2019). The model is openly released on HuggingFace in multiple sizes, ranging from 560M to 176B parameters.7 Footnote 6: The ROOTS corpus can now be queried using the dedicated search tool [https://hf.co/spaces/bigscience-data/roots-search](https://hf.co/spaces/bigscience-data/roots-search). Footnote 7: [https://hf.co/bigscience/bloom](https://hf.co/bigscience/bloom) ## 4 Evaluating Bloom on the MT task ### MT Datasets Used We experiment with three datasets, chosen to test different aspects of Bloom for MT: WMT (Bojar et al., 2014), Flores-101 (Goyal et al., 2022) and DiaBLa (Bawden et al., 2021). We use the WMT 2014 news test sets for English\(\leftrightarrow\)French and English\(\leftrightarrow\)Hindi, which we take as representative high- and lower-resource language pairs with respect to Bloom's training data.8 These test sets are somewhat outdated (Garcia et al., 2023), but have been used repeatedly in past LLM evaluations and are included as standard benchmarks for comparison. Flores-101 is a multi-parallel dataset in 101 languages, translated from original English sentences.In fact, evaluations into English are bound to yield overly good results (e.g. (Toral et al., 2018)) and between other languages may mostly reflect their similarity with the original English. 
We use it to test and compare Bloom's multilinguality, including for low-resource languages.9 DiaBLa is a bilingual test set of spontaneous written dialogues between English and French speakers, mediated by MT. We use this as a test of MT in an informal domain and the impact of (cross-lingual) linguistic context in MT. Footnote 8: English, French and Hindi make up 30%, 12.9% and 0.7% of the training data respectively (Laurencon et al., 2022). ### Experimental setup We evaluate and compare Bloom (and its variants) using the Language Model Evaluation Harness (Gao et al., 2021) in 0-shot and few-shot settings. For few-shot, \(k\) examples are prefixed to the prompt and separated with ### as shown in Example 1 (1-shot example is underlined). 1. **Input:** French: je m'ennuie = English: I'm bored. ### English: Is that your dog that's just wandered in over there? = French: **Reference:** Est-ce que c'est votre chien qui vient de rentrer par la? Results are reported on the datasets' test splits. Few-shot examples are randomly taken from the data splits according to availability (train for WMT, dev for Flores-101 and test for DiaBLa). We evaluate using BLEU (Papineni et al., 2002) as implemented in SacreBLEU (Post, 2018), using as tokenisation 13a for WMT and DiaBLa and spm for Flores-101 as recommended (Costa-jussa et al., 2022).10 BLEU has many shortcomings but is good enough to provide quantitative comparisons for most systems used in this study. We additionally use COMET (Rei et al., 2020) for finer grained comparisons when the scores are closer. Footnote 10: BLEU+case:mixed+smooth.exp+{13a,spm}+version.2.2.1 #### 4.2.1 Comparative models In our cross-dataset comparison (Section 5.1), we compare Bloom to other LLMs: (i) two task-fine-tuned models: T011 (Sanh et al., 2022), \begin{table} \begin{tabular}{l l l l} \hline \hline & Prompt name & Prompt & Target \\ \hline 1-2 & a\_good\_translation & Given the following source text (in L1): [source sentence], a good L2 translation is: & [target sentence] \\ 3 & version & If the original version says [source sentence] then the L2 version should say: & [target sentence] \\ 4 & gpt3 & What is the L2 translation of the sentence: [source sentence]? & [target sentence] \\ 5-6 & kgm & (L1:) [source sentence] = L2: & [target sentence] \\ 7 & translate\_as & [source sentence] translates into L2 as: & [target sentence] \\ \hline \hline \end{tabular} \end{table} Table 1: Seven MT prompts for the WMT’14 dataset (Bojar et al., 2014). All prompts specify the target language (L2). Each prompt exists in a ‘target-only’ version (-target), where only the target language is specified, and two prompts also exist in a second -source+target version, where the source language (in red and in brackets) is explicit in the instruction. trained on English texts, and MT0-xxl12Muennighoff et al. (2022), the multilingual version, and (ii) OPT13Zhang et al. (2022), an English generative LM. We evaluate all models on the same prompt xglm-source+target. To evaluate multiple language pairs with Flores-101, we compare (as a topline) to the supervised 615M-parameter MT model M2M-100 Fan et al. (2021), using the scores computed by Goyal et al. (2022). Footnote 12: [https://hf.co/bigscience/mt0-xxl](https://hf.co/bigscience/mt0-xxl) Footnote 13: [https://hf.co/facebook/opt-6b](https://hf.co/facebook/opt-6b) Footnote 14: This was not always straightforward due to incomplete documentation concerning (a) prompts tested, and (b) those actually used in each experiment (e.g. 
different ones for 0-shot and few-shot runs Chowdhery et al. (2022)). #### 4.2.2 Prompts We use several prompts, designed to illustrate different sources of variation: (i) the inclusion (or not) of the source language name, (ii) the relative order of source and target language names, (iii) the position of the source sentence (beginning or end of the prompt) and (iv) the prompt's verbosity. These prompts, available in PromptSource Bach et al. (2022), are shown in Table 1. The first three are inspired by previous work:14Brown et al. (2020) for gpt3,15Lin et al. (2022) for xglm and Wei et al. (2022) for translate_as, which also resembles Raffel et al. (2020)'s prompt Translate English to German: "[source text]": [target sentence]), also used in Wei et al. (2022); Garcia and Firat (2022). Footnote 14: Used only it seems, for zero-shot learning in the form “Q: what is the L2 translation of sentence [source sentence]. A:”, where special tokens Q and A are the query and the answer texts (cf. Figure G.36, pp 59). Considering the entries in Table 1, we can see that "prompting" in fact refers to two distinct aspects of the input: (i) the formulation of the task in natural language and (ii) the presentation of related examples (for few-shot setups) interleaved with language tags (perhaps more clearly referred to as _priming_ by Pham et al. (2020)). As illustrated by the xglm prompt for example, the instruction part can reduced to one single word. As our results below suggest, the instruction mostly matters in 0-shot setups, but can almost be dispensed with in few-shot scenarios. The authors of Brown et al. (2020) and Hendy et al. (2023) also use a verbose, instruction-like prompt in their zero-shot setup, and a much more compact one for few shots experiments. Also note that InstructGPT's prompt combines both an instruction and language tags Ouyang et al. (2022, p. 49). ## 5 Evaluation results Our evaluation of Bloom starts with a comparison across the three datasets and detection of major MT errors with a focus on WMT (Section 5.1) and then we present more in-depth analyses of particular aspects: (i) using WMT, a comparative study of Bloom model sizes (Section 5.2) and prompts (Section 5.3), (ii) using Flores-101 an evaluation of more language pairs and cross-lingual few-shot transfer (Section 5.4), and (ii) using DiaBLa, a study of the use of linguistic context (Section 5.5). ### Comparison across datasets We first prompt Bloom and the comparative models using the same prompt across datasets, restricting the directions tested to en\(\leftrightarrow\)fr and to en\(\leftrightarrow\)hi. We choose to systematically use the xglm-source+target prompt (Table 1), which corresponds to the following template: (2) L1: [source sentence] = L2: where L1 and L2 refer to the source and target languages respectively (e.g. English and French for \begin{table} \end{table} Table 2: Cross-dataset comparison of BLEU scores (spBLEU for Flores-101) using the xglm-source+target prompt. en\(\rightarrow\)fr) and [source sentence] is replaced by a given source sentence. BLEU scores are in Table 2a for both 0-shot and 1-shot (results with COMET are given in Appendix A). There are issues for 0-shot MT for all directions, particularly when translating into non-English languages, (BLEU scores are systematically lower than into English). Even into English, the scores remain low with respect to state of the art (e.g. 2.7 BLEU for Flores-101 fr\(\rightarrow\)en Bloom vs. 
60.1 for mt0-xxl).16 MT0-xxl's good results illustrate the benefits of prompt fine-tuning. Results are considerably improved in the 1-shot setting (except for T0 and mt0-xxl, both fine-tuned) and are comparable to other LLM MT results reported in the literature; they are markedly better for Bloom than for OPT, which was mostly trained on English data. These general observations apply for WMT and Flores-101, while issues remain for DiaBLa. Footnote 16: For comparison, (Bi et al., 2020) reports state-of-the art BLEU scores for supervised MT as 45.6 and 45.4 for WMT14 en\(\rightarrow\)fr and fr\(\rightarrow\)en respectively. What are the main issues?We identify two main issues (particularly for 0-shot): (i) over-generation and (ii) generating in the wrong language (an obvious problem for MT). Examples 3 and 4 illustrate some instances of these problems. The first example shows a very common problem, with repetition of the prompt in multiple languages until the maximum generation length is reached. 1. **Input:** French: Il dit avoir conçu une sonnette de porte Wi-Fi. = English: **Bloom:** He says he conceived a Wi-Fi doorbell. = Spanish: Dice que conciibu un timbre de Puer Wi-Fi. = German: Er sagt, er hat einen WLAN-Trglocken gergindet. = Italian: Dice di aver concepito un chiuso Wi-Fi [...] 2. **Input:** French: Le tigre fait partie de la meme famille (genre Panthera) que les lions, les leopards et les jaguares. Ces quatre felins sont les seuls capables de rugir. = English: **Bloom:** The tiger is a member of the same family (genus Panthera) as lions, leopards and jaguares. These four cats are the only ones capable of roaring. Le tigre est un grandf lein qui vit en Aise. Il est le plus grand des felins. Il est aussi le plus gros carnivore terrestre. Il est tres rapide et peut courir à plus de 60 km/h. [...] Separating MT quality from overgeneration overgeneration as seen in Example 3 is a separate issue from Bloom's capacity to translate into another language. We therefore devise a custom truncating method for this type of overgeneration such that only the first translation in a prediction is kept, i.e. anything after a newline or the regular expression pattern =.+? : is discarded. Results after truncation (Table 2b) show that for all three datasets, 0-shot and 1-shot scores are significantly improved (e.g. 1-shot DiaBLa fr\(\rightarrow\)en increases from 12.05 to 41.36 and 0-shot Flores-101 hi\(\rightarrow\)en increases from 3.40 to 30.19). Bloom is capable of performing good MT but has a problem knowing when to stop generating. We use the same truncation elsewhere too and indicate when we show results for original or truncated outputs. Detecting generation in the wrong languageWe automatically detect the language of predictions using fasttext langid17(Joulin et al., 2017). Table 3 shows the number of translations identified as being in the correct target language, or alternatively in the source or another language for 0-shot and 1-shot setups after truncation.18\({}^{,}\)19 The number of sentences in the correct target language increases from 0- to 1-shot, particularly for the two non-English target languages. When translating into Hindi (0-shot), 1/5 (509) of predictions are not detected as Hindi; the 1-shot largely mitigates the issue (only 76 outputs are in the wrong language). Footnote 17: [https://fasttext.cc/docs/en/language-identifification.html](https://fasttext.cc/docs/en/language-identifification.html), using the compressed version lid.176.fr. 
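The two checks above can be made concrete with a short sketch; the truncation regular expression follows the pattern quoted in the text, while the model file name and the exact string handling are assumptions rather than a verbatim excerpt of our evaluation code.

```python
import re
import fasttext

# Compressed fastText language-identification model (see footnote 17);
# the local file name is an assumption.
LID_MODEL = fasttext.load_model("lid.176.ftz")

TRUNCATION_PATTERN = re.compile(r"=.+?:")

def truncate(prediction: str) -> str:
    """Keep only the first translation: cut at a newline or at an '= <lang>:' continuation."""
    first_line = prediction.split("\n", 1)[0]
    return TRUNCATION_PATTERN.split(first_line, 1)[0].strip()

def predicted_language(text: str) -> str:
    """Return the ISO code of the most probable language, e.g. 'fr'."""
    labels, _ = LID_MODEL.predict(text.replace("\n", " "))
    return labels[0].replace("__label__", "")
```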
Footnote 18: Raw tables can be found in Tables 12 and 13 in Appendix B. Increasing the number of few-shot examplesBoth problems improve significantly in the 1-shot setup, a trend that continues as the number of few-shot examples increases, resulting in higher BLEU scores, as can be seen in Figure 1 for WMT en\(\leftrightarrow\)fr. However, we see diminishing returns, particularly visible between 2 to 5 examples, suggesting that gains beyond 5-shot would be more marginal. \begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{3}{c}{en\(\rightarrow\)fr} & \multicolumn{3}{c}{fr\(\rightarrow\)en} & \multicolumn{3}{c}{en\(\rightarrow\)hi} & \multicolumn{3}{c}{hi\(\rightarrow\)en} \\ & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ \hline Target & 2814 & 2959 & 2954 & 2979 & 1998 & 2431 & 2469 & 2499 \\ Source & 181 & 32 & 47 & 22 & 476 & 48 & 29 & 2 \\ Other & 8 & 12 & 2 & 2 & 33 & 28 & 9 & 6 \\ \hline Total & 3003 & 3003 & 3003 & 3003 & 2507 & 2507 & 2507 & 2507 \\ \hline \hline \end{tabular} \end{table} Table 3: The number of outputs (after truncation) classified as being in the (correct) target language, the source language, or another language for 0-shot and 1-shot setups (for WMT). ### Bloom model size Several versions of Bloom exist, with differing numbers of parameters. To test how size impacts performance, we report average scores and ranges for WMT across the seven prompts. Table 4 shows that as the size decreases (from 176B to 560M parameters), the performance also decreases significantly. We see substantial gains for all models when moving from 0-shot to 1-shot, the smaller models (e.g. Bloom-7b1, Bloom-3b) slightly closing the gap with the largest one. As the ranges in Table 4 are computed across prompts, we see that different prompts yield markedly different BLEU scores in the 0-shot setup; for 1-shot, we still see variations of 6-8 BLEU points between the best and the worst prompt. Similar analyses performed with post-processing and also for English\(\leftrightarrow\)Hindi (Appendix C) confirm that (i) truncation improves scores for all model sizes and prompts and (ii) the choice of a bad prompt can result in catastrophic MT performance as compared to a good one. ### Per-prompt analysis Looking at average WMT results computed with respect to prompt choice (using the prompts in Table 1) allows us to further investigate cross-prompt variability. Which prompt works best?This variability is illustrated in Tables 5 and 6 report performance across prompts for en\(\leftrightarrow\){fr,hi}, averaged over the five Bloom models from Section 5.2.20 The corresponding tables for truncated outputs are in Appendix D. version and a_good_translation (source+target) get the highest average (and maximum) scores. Both prompts are more verbose (instruction-like), but the performance gap in the 1-shot setting between these prompts and the simpler, 'priming-style' prompts (e.g. xglm) narrows. The worst results are seen for gpt3. With this prompt, translating into French after a text that only contains English seems particularly difficult: half of the 0-shot translations for gpt3 are classified as non-French by langid (most of them are English). When translating into Hindi, only 10 outputs are detected as being in Hindi. Footnote 20: For a given prompt, the range mainly reflects the performance of the different sizes of Bloom model. Does it help to specify the source language in the prompt?We compare the two versions (-target and -source+target) of a_good_translation and xglm. 
Results in Tables 5 and 6 are inconclusive. For these language directions and prompts, we see small differences for 1-shot, which may be due to variance between runs. For 0-shot, it clearly helps xglm to indicate the source language, but for the more verbose a_good_translation, it helps one direction and hurts the other. This question would need to be further explored to draw more solid conclusions, including with non-English prompts. #### 5.4.1 Per-language results To optimise computational resources, instead of running all language combinations, we concentrate on: (i) high-resource language pairs, (ii) high\(\rightarrow\)mid-resource language pairs, (iii) low-resource language pairs and (iv) related languages (specifically Romance languages). Results are shown in Tables 7 and 8 for original outputs, given that overgeneration is less problematic for 1-shot. High-resource and high\(\rightarrow\)mid-resourceThe results for high-resource and high\(\rightarrow\)mid-resource language directions are generally good, surpassing M2M scores for high-resource, except for es\(\rightarrow\)fr.22 This suggests that Bloom a has good multilingual capacity, even across scripts (between (extended) Latin, Chinese, Arabic and Devanagari scripts). Footnote 22: French and Spanish, although related and comparably represented in ROOTS, have very different scores. Our preliminary analysis suggests that this is due to the Spanish references being less literal than the French and structurally more different from the original English. See Appendix E for some examples. Low-resourceFor low-resource languages, the results are more variable; some language directions see better results than M2M, notably most into-English directions, but others are less good (e.g. into Hindi and Swahili). Results for the lowest-resourced languages tested (sw\(\leftrightarrow\)yo and en\(\leftrightarrow\)yo) are particularly disappointing because the scores indicate that the resulting translations are meaningless, even though Yoruba and Swahili are present (although under-represented) in BLOOM's training data (\(<\)50k tokens each). Romance languagesThis contrasts with the results between Romance languages, where results are good across-the-board, including from and into Italian (it) and Galician (gl), which are not officially in the training data. Note that Galician shares many similarities with the other Romance languages, in particular with Portuguese (pt). These contrasted results show the performance of an LLM not only depends on the amount of training data, but also largely on the similarity with seen languages. To be complete, these analyses should also take into account the possibility of mislabellings in the training data,23 which have been found to explain a great deal of cross-lingual abilities of LLMs (Blevins and Zettlemoyer, 2022). Footnote 23: In a personal communication, N. Muennighoff estimates that Italian accounts for \(\sim\)0.33% of the ROOTS corpus, slightly below the proportion of Hindi texts (0.47%). #### 5.4.2 Cross-lingual transfer 1-shot results are positive for many of the language directions tested (including low-resource), provided they are sufficiently represented in the ROOTS corpus. 
To better understand how cross \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{en\(\rightarrow\)hi} & \multicolumn{2}{c}{hi\(\rightarrow\)en} \\ Prompt / Few-shot \# & 0 & 1 & 0 & 1 \\ \hline a\_good\_translation-source+target & 0.7 0.1–1.9 & 5.8 0.3–14.5 & 4.8 0.9–10.2 & 13.1 2.8–24.6 \\ a\_good\_translation-target & 0.2 0.1–0.8 & 5.5 0.3–14.1 & 6.3 1.1–13.0 & 13.2 2.8–24.8 \\ gpt3-target & 0.1 0.0–0.3 & 1.4 0–0.6-5 & 0.2 0.0–0.7 & 2.2 0.0–10.0 \\ version-target & 0.7 0.1–2.0 & 5.6 0.2–14.0 & **6.8** 1.7–11.5 & **13.3** 2.4–25.8 \\ xglm-source+target & **2.1** 0.1–6.8 & **6.9** 0.3–13.6 & 4.4 0.6–12.1 & 11.9 1.7–25.0 \\ xglm-target & 0.2 0.0–0.6 & 5.1 0.1–14.6 & 1.6 0.2–4.1 & 6.6 0.5–13.2 \\ \hline \hline \end{tabular} \end{table} Table 6: Average, min and max BLEU scores per prompt for en\(\leftrightarrow\)hi (original outputs). Best average result per setting in bold. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{en\(\rightarrow\)fr} & \multicolumn{2}{c}{fr\(\rightarrow\)en} \\ Prompt / Few-shot \# & 0 & 1 & 0 & 1 \\ \hline a\_good\_translation-source+target & 6.7 0.6–15.4 & 18.7 4.1–36.4 & 11.0 5.4–14.2 & 25.8 11.6–36.6 \\ a\_good\_translation-target & 3.1 0.4–10.1 & 20.3 3.2–35.5 & 12.1 5.1–16.8 & **25.9** 12.1–36.2 \\ gpt3-target & 2.5 0.5–7.9 & 16.6 2.2–32.5 & 4.5 0.7–12.7 & 19.3 5.8–33.1 \\ translate\_as-target & 3.3 0.4–5.0 & 17.1 3.2–32.7 & 6.9 2.1–11.3 & 21.6 7.6–35.1 \\ version-target & 7.5 0.6–2.20 & **21.4** 4.3–34.2 & **17.1** 3.9–26.8 & 24.9 7.8–35.4 \\ xglm-source+target & **8.3** 0.3–14.9 & 17.5 3.3–27.8 & 11.8 5.0–15.5 & 22.1 7.8–34.6 \\ xglm-target & 1.6 0.7–3.0 & 16.7 4.4–29.0 & 6.2 2.6–10.3 & 20.7 7.5–33.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Average, min and max BLEU scores by prompt for en\(\leftrightarrow\)fr (original outputs). Best average result per setting in bold. llows, we use the 1-shot multilingual Bloom is and how the 1-shot mechanism functions, we vary the language direction of the few-shot examples, taking Bengali\(\rightarrow\)English (bn\(\rightarrow\)en) translation as our case study. Taking random 1-shot dev set examples,24 we compare the use of 1-shot examples from (i) the same direction (bn\(\rightarrow\)en), (ii) the opposite direction (en\(\rightarrow\)bn), (iii) a language direction whereby the source languages are related (hi\(\rightarrow\)en), (iv) the same related direction but from a different dataset (the WMT dev set) (v) a high-resource direction into the same target language (fr\(\rightarrow\)en) and (vi) a high-resource unrelated language direction (fr\(\rightarrow\)ar). Footnote 24: The random seed is kept the same for all runs. The results (Table 9) show that cross-lingual transfer is possible, but using a different language direction can impact overgeneration and translation quality. The unrelated direction fr\(\rightarrow\)ar gives the worst results, with most overgeneration (see the score difference between original and truncated), but also the worst quality after truncation, suggesting that language relatedness does play a role. Overgeneration is still a problem (although less so) when using the opposite direction (en\(\rightarrow\)bn) or the same target language (fr\(\rightarrow\)en). Using a related (higher-resource) source language (hi\(\rightarrow\)en) reduces overgeneration and also gives the best MT results. However, better results are seen when using Flores-101 rather than WMT examples, suggesting that in-domain examples are best. 
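For reference, every few-shot prompt in these transfer experiments follows the xglm-source+target template of Example 1, with the example pair, whatever its language direction or source dataset, simply prefixed to the test sentence. A minimal sketch, in which the function name and the separator handling are our own illustrative choices:

```python
def xglm_prompt(src_lang, tgt_lang, src_sentence, shots=()):
    """Build an xglm-source+target prompt, optionally primed with few-shot examples.

    Each shot is (example_src_lang, example_src, example_tgt_lang, example_tgt)
    and need not be in the same language direction as the test sentence.
    """
    parts = [f"{s_lang}: {s} = {t_lang}: {t}" for (s_lang, s, t_lang, t) in shots]
    parts.append(f"{src_lang}: {src_sentence} = {tgt_lang}:")
    return " ### ".join(parts)

# 1-shot bn->en prompt primed with a related-language hi->en example.
prompt = xglm_prompt(
    "Bengali", "English", "<Bengali test sentence>",
    shots=[("Hindi", "<Hindi example sentence>", "English", "<its English translation>")],
)
```

Replacing the shot with, for example, a WMT hi→en pair or an en→bn pair gives the other configurations compared in Table 9.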
### Use of Linguistic Context There has been a considerable amount of research on linguistic context in MT, e.g. to disambiguate lexically ambiguous texts or when additional information is necessary for the output to be well-formed (e.g. translating anaphoric pronouns into a language that requires agreement with a coreferent) [16, 11, 12, 13, 14]. \begin{table} \end{table} Table 7: 1-shot MT results (spBLEU) on the FLORES-101 devtest set (original outputs). \begin{table} \end{table} Table 8: 1-shot MT results (spBLEU) on the Flores-101 devtest set (original outputs). \begin{table} \end{table} Table 9: 1-shot results for Flores bn\(\rightarrow\)en when varying the language direction of 1-shot examples. HR=high-resource. We test the usefulness of linguistic context in DiaBLa in the 1-shot setting (again using xglm-source+target) by changing the origin of 1-shot examples: (i) a random example vs. (ii) the previous dialogue utterance. If linguistic context is useful, we would expect there to be an improvement for (ii). We also vary the language direction of the 1-shot example. By default, given that the dataset is bilingual, the direction of 1-shot examples is en\(\rightarrow\)fr or fr\(\rightarrow\)en, independent of the current example's direction. Given the results in Section 5.4.2 and the poor 0-shot results in Table (a)a, it is important to account for this to provide a fair comparison. We therefore compare each type of context (random/previous) with (i) the same random directions, and (ii-iii) the same (and opposite) language directions as the current example. We show results for original and truncated outputs. Results are shown in Table 10. Truncation helps considerably; even for 1-shot, Bloom struggles not to overgenerate and this is considerably reduced when the same rather than the opposite language direction is used for the 1-shot example. It is unclear whether using previous rather than random context helps: BLEU is higher (38.5 vs. 37.6), whereas COMET is lower (0.328 vs. 0.342). These differences could be the result of randomness in 1-shot example selection, and different results could be obtained with a different random seed. Despite these inconclusive results, it is clear that using previous context influences the translation, for better or worse. For evidence of this, see Table 19 in Appendix F, which provides three such examples: (i) an unlucky negative influence on the translation of an ambiguous word _glace_ 'ice cream or mirror' from the previous context, resulting in the wrong sense being chosen, (ii) the use of a coreferent _instrument_'instrument' from the previous sentence and (iii) the correct gender agreement of the pronoun _they_ into French (_elles_ 'they (fem.)' as opposed to _ils_ 'they (masc.)') to correspond to the feminine coreferent _files_ 'girls'. ## 6 Conclusion We have evaluated Bloom's MT performance across three datasets and multiple language pairs. While there remain problems of overgeneration and generating in the wrong language (particularly for 0-shot MT), MT quality is significantly improved in few-shot settings, closer to state-of-the-art results. Low-resource MT remains challenging for some language pairs, despite the languages being in the training data, questioning what it means to be a Bloom language. However, we see evidence for cross-lingual transfer for non-Bloom languages and when using few-shot examples from other language pairs. 
Finally, although using linguistic context does not give improvements with automatic metrics, there is evidence that discursive phenomena are taken into account. ## Acknowledgements This work was made possible with the collective efforts of the BigScience community, who designed, developed and prepared the tools and datasets used to train Bloom. Special mention to evaluation working group members and especially to Niklas Muenninghoff and Pawan Sasanka Ammanamachi for producing some of our results. This work was granted access to the HPC resources of Institut du developpement et des ressources en informatique scientifique (IDRIS) du Centre national de la recherche scientifique (CNRS) under the allocations 2021-AD011011717R1, AD011012254R2, 2021-A0101012475 and 2022-AD010614012 made by Grand equipement national de calcul intensif (GENCI). R. Bawden's participation was partly funded by her chair position in the PRAIRIE institute, funded by the French national agency ANR as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001, and by her Emergence project, DadaNMT, funded by Sorbonne Universite. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{1-shot example} & \multicolumn{2}{c}{en\(\rightarrow\)fr} & \multicolumn{2}{c}{fr\(\rightarrow\)en} \\ Origin & Dir. & Trunc. & BLEU & COMET & BLEU & COMET \\ \hline Rand. & rand. & \(\times\) & 5.7 & 0.342 & 12.1 & 0.614 \\ & & \(\checkmark\) & 37.6 & 0.634 & 41.4 & 0.758 \\ \hline Prev. & rand. & \(\times\) & 6.1 & 0.328 & 12.3 & 0.617 \\ & & \(\checkmark\) & 38.5 & 0.614 & 41.6 & 0.751 \\ \hline Prev. & same & \(\times\) & 19.3 & 0.597 & 20.7 & 0.719 \\ & & \(\checkmark\) & **39.0** & **0.632** & **42.1** & **0.761** \\ \hline Prev. & opp. & \(\times\) & 3.6 & 0.064 & 8.6 & 0.518 \\ & & \(\checkmark\) & 37.8 & 0.590 & 41.2 & 0.742 \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison of 1-shot results (BLEU) for DiaBLa when using the previous/random sentence for the 1-shot example (using the xglm-source+target prompt). In bold are the best results for each language direction.
2310.01829
Active Caustics
Heavy particles in vortical fluid flow cluster strongly, forming singular structures termed caustics for their resemblance to focal surfaces in optics. We show here that such extreme aggregation onto low-dimensional submanifolds can arise without inertia for self-propelled particles (SPPs). We establish that a singular perturbation is at the heart of caustic formation by SPPs around a single vortex, and our numerical studies of SPPs in two-dimensional Navier-Stokes turbulence shows intense caustics in the straining regions of the flow, peaking at intermediate levels of self-propulsion. Our work offers a route to singularly high local concentrations in a macroscopically dilute suspension of zero-Reynolds number swimmers, with potentially game-changing implications for communication and sexual reproduction. An intriguing open direction is whether the active turbulence of a suspension of swimming microbes could serve to generate caustics in its own concentration.
Rahul Chajwa, Rajarshi, Sriram Ramaswamy, Rama Govindarajan
2023-10-03T06:52:36Z
http://arxiv.org/abs/2310.01829v2
# Active Caustics ###### Abstract When heavy particles in turbulent flow are centrifuged out of vortices they cluster to form singular features in the number density [Phys Fluids **27**, 033305 (2015)], termed _caustics_ for their resemblance to focal surfaces in optics.We show here that self-propelled particles (SPPs) with Reynolds number \(Re=0\) and thus no mechanical inertia display similar behaviour, thanks to the persistence of their motion in the direction of their intrinsic orientation. Using singular perturbation analysis and numerical studies we establish that SPPs form caustics at a critical distance from the origin of a point vortex. To capture the dynamics in a generic vortical flow we study SPPs advected by Navier-Stokes turbulence and find pronounced caustics in straining regions, for intermediate values of a non-dimensional self-propulsion velocity. Our work offers a route to singularly high local concentrations in a macroscopically dilute suspension of zero-Re swimmers, with potentially game-changing implications for communication and sexual reproduction. An intriguing open direction is whether the active turbulence of a suspension of swimming microbes could serve to generate caustics in its own concentration. ## I Introduction The motion of particles in an ambient flow governs climate phenomena such as the formation of droplets in clouds [1; 2; 3], the drift of atmospheric pollutants [4], and the biological pump of the oceans [5], and there is a wide range of industrial processes where particulate flows are central [6]. When particles have appreciable inertia, the coupling between the local flow profile and particle velocity can lead to long-wavelength inhomogeneities in particle number density [7]. Heavy particles in a background turbulent flow are known to get centrifuged out of regions of vorticity, and so ambient vortices promote particle collisions, aggregation, and caustics [8; 9; 10; 11], offering a plausible mechanism for droplet growth in clouds [12]. The theoretical possibility of active suspensions displaying these singular features, even in the absence of particle inertia, is the subject of this article. Such singular features can be consequential in providing increased opportunity for communication and sexual reproduction [13] for motile organisms with negligible inertia, especially in marine environments Biological motility in nature frequently takes place in a fluid medium [14; 15] which is itself dynamic, and can strongly influence the movement and dispersal of swimmers [16; 17]. Whether an organism moves by itself to a desired location or simply follows the ambient flow field \(\mathbf{U}\) depends on the ratio of the flow-induced reorientation time scale \(1/\|\nabla\mathbf{U}\|\) to the self-propulsion time scale \(1/\beta\), where \(\|\nabla\mathbf{U}\|\) is evaluated at the location of the swimmer, and \(1/\beta\) is the time it takes for a swimmer to move by its own body length in the absence of external flow. Thus \(\beta/\|\nabla\mathbf{U}\|\gg 1\) and \(\beta/\|\nabla\mathbf{U}\|\ll 1\) present the cases of complete control and no control, respectively, of the organism on its movement. In this work we explore theoretically the inertia-less limit of the dynamics, i.e., of vanishing particle-scale Reynolds number \(Re\to 0\). Marine plankton, in particular, operate in this regime, in a vortex-laden turbulent ecosystem, and several aspects of their life are governed by the coupling of their motility with ambient flow [18; 19; 20; 15; 21]. Fig. 
1 (a) depicts typical values of \(\beta/\|\nabla\mathbf{U}\|\) and \(Re\) for various swimmers, based on published data on swimming speed and size for dinoflagellates, ciliates [22], larvae [23; 24; 25], copepods [25] and a marine bacterium [26; 27]; and typical shear-rates in the upper mixed layer of the open ocean [19] (see Supplementary). Micro-swimmers in externally driven flows, through the coupling of their orientation to velocity gradients [28; 29; 30], display focusing [31], aggregation [2; 32; 33], and expulsion out of vortical regions [33; 34; 35], in a manner reminiscent of inertial particles [10; 11]. Although their inertia is negligible, their motion is persistent because of self-propulsion [34]. Moreover, the Hamiltonian structure of bound orbits of microswimmers in certain flows [36; 37; 38; 39; 40] suggests a role for the effective inertial dynamics of active Stokesian suspensions in imposed flows, where the orientation vector behaves like momentum [41; 42; 43]. We ask if inertia-less swimmers [28] can display caustics, a conspicuous encounter-promoting feature of the dynamics of inertial particles (IPs) in flows, where the calculated worldlines of suspended particles cross, so the solute velocity field is multivalued [10; 11]. We investigate this intriguing possibility theoretically, in the simple setting of a dilute suspension of neutrally buoyant swimmers in two-dimensional vortical flows. We find that non-inertial active particles can display caustics even near a single vortex, a building block of turbulence. We show how far the analogy with IP may be carried, and where the two differ fundamentally. Our flow geometry allows unambiguous demarcation of the regimes of caustic formation, and forms the basis for understanding the behavior of active particles in unsteady vortical flows like turbulence. We study the flow-coupled dynamics of two simple models of single motile particles: Hookean and preferred-length active dimers, corresponding, in the presence of noise, to active Ornstein-Uhlenbeck particles (AOUPs) [44; 45] and active Brownian particles (ABPs) [46] respectively. Our focus is on the deterministic part of the dynamics, with one-way influence of the fluid on the particles. ## II Results ### Mapping active dynamics to inertial The motion of an IP with position vector \(\mathbf{X}\) in a background flow \(\mathbf{U}\) is governed by the Maxey-Riley [8] equation which, to leading order in gradients, reads \(\dot{\mathbf{X}}=\mathbf{v}\) and \(\mathrm{St}\,\dot{\mathbf{v}}=\,(\mathbf{U}-\mathbf{v})\), when non-dimensionalised using a characteristic particle length scale \(d\) and a flow velocity scale \(U_{0}\). The Stokes number \(\mathrm{St}=\tau U_{0}/d\) is a non-dimensional measure of inertia, characterised by the relaxation time \(\tau\) (\(=\) mass/Stokes drag coefficient) of a particle. The centrifugation of these particles away from the vortex centre results in the formation of caustics within a critical distance from the vortex origin [10]. Setting \(\mathrm{St}=0\) yields tracer particles, which move with \(\mathbf{v}=\mathbf{U}\) to trace out circular paths forever. To demonstrate analytically that caustics and the consequent discontinuities in particle number densities can arise due to activity, even without inertia, we consider a Hookean active dimer, with centroid position \(\mathbf{X}\) and end-to-end vector \(\mathbf{w}\), placed in an imposed background flow field \(\mathbf{U}\). 
Figure 1: **Active dimer model for Self-Propelled particles in ambient flow** (a) Based on published data we depict the typical values of the ratio of flow time scale \(1/\|\nabla\mathbf{U}\|\) to swimming time scale \(1/\beta\), and Reynolds number \(Re\) for a marine bacterium _Vibrio alginolyticus_ [26; 27], various dinoflagellates, ciliates [22], invertebrate larvae [23; 24; 25] and copepods [25]. For \(\|\nabla\mathbf{U}\|\) we substitute the Kolmogorov shear rate for the range of previously measured energy dissipation rates in the upper mixed layer of the ocean \(10^{-8}-10^{-6}\) m\({}^{2}\)s\({}^{-3}\) [19] (see Supplementary). (b) Schematic of an active dimer of length \(\mathbf{w}\) in a flow \(\mathbf{U}\). (c) Inner-solution caustics are marked by the intersection of representative trajectories (blue circles) of particles starting at closely separated initial radial distances, with \(\alpha=0.1\). A continuous variation in \(\tilde{r}_{0}\) would give a continuous curve. The inset depicts analogous optical caustics on the surface of coffee in a mug.

Neglecting inertia and translational diffusion, the equations of motion for \(\mathbf{X}\) and \(\mathbf{w}\) take the first-order form \[\dot{\mathbf{X}}=\mu\mathbf{F}+\mathbf{U}+\beta\mathbf{w}\equiv\mathbf{v}, \tag{1a}\] \[\dot{\mathbf{w}}=-\frac{\mathbf{w}}{\tau}+(\alpha\,\mathbb{S}+\mathbb{A})\cdot\mathbf{w}-\ell^{2}\nabla^{2}\mathbf{U}+\sqrt{2D}\boldsymbol{\eta}, \tag{1b}\] where \(\mu\) is the Stokesian mobility of the particle, and \(\mathbf{U}\), \(\nabla^{2}\mathbf{U}\) and the external force field \(\mathbf{F}\) are evaluated at \(\mathbf{X}(t)\). In (1b) \(\beta\), with units of inverse time but indefinite sign, endows a dimer with self-propulsion proportional to its extension, and the polar flow-alignment parameter \(\ell\), with units of length, orients the dimer along a locally parabolic flow, and vanishes for an apolar, i.e., fore-aft symmetric, particle. Swimmers without noise, external force or the polar coupling \(\ell\), in Poiseuille flow, have been studied by [47]. The analog of \(\ell\) for a collective orientation vector appears in [48]. Apolar flow-orientation couplings [49] enter through the strain-rate and vorticity tensors \(\mathbb{S}\equiv(\nabla\mathbf{U}+\nabla\mathbf{U}^{\top})/2\) and \(\mathbb{A}\equiv(\nabla\mathbf{U}-\nabla\mathbf{U}^{\top})/2\) respectively, with a response parameter \(\alpha\) determined by particle shape. In (1b) \(\boldsymbol{\eta}\) is a zero-mean, isotropic, gaussian white noise with unit variance. For \(\beta\neq 0\), (1a) and (1b) yield an equation for the total active-particle velocity \(\mathbf{v}\) [see (1a)] viewed as a dynamical variable, displayed here for constant spatially uniform external force \(\mathbf{F}\), and in general form in the Supplement: \[\frac{\tau}{\mu}\dot{\mathbf{v}}=\left[-\frac{1}{\mu}\mathbb{I}+\frac{\tau}{\mu}(\alpha+1)\mathbb{S}\right](\mathbf{v}-\mathbf{U})+\mathbf{F}+\frac{\tau}{\mu}(D_{t}-\beta\ell^{2}\nabla^{2})\mathbf{U}+\frac{\tau\beta}{\mu}\sqrt{2D}\boldsymbol{\eta} \tag{2}\] where \(D_{t}=\partial_{t}+\mathbf{U}\cdot\nabla\), and all fields are evaluated at \(\mathbf{X}(t)\).
However, (1) can be recast as (2) only if \(\mathbf{w}\), i.e., self-propulsion, enters (1a); the degree to which it does is governed by \(\tau\). For \(\tau\to 0\), (1b) implies \(\mathbf{w}=0\). The presence of the external force and the drag, with no prefactors, on the right-hand side, makes it clear that \(\tau/\mu\) plays the role of an effective mass for this inertialess active system. Indeed (2) resembles the Maxey-Riley equation for inertial particles in a flow [8], albeit with important differences. Among them is the absence here of the Basset-Boussinesq history term [8; 50], which arises directly due to particle inertia. These intriguing similarities prompt us to explore analogs to passive inertial-particle behavior in the dynamics of active inertia-less particles in external flows. ### Caustics near a point vortex flow We begin with the classic case of motion in the flow field of a point vortex at the origin. In plane polar coordinates \((r,\theta)\), \(\mathbf{U}=\hat{\theta}\tilde{\Gamma}/r\), with circulation \(2\pi\tilde{\Gamma}\). Note that \(\mathbb{A}=0\) for this flow everywhere except at the origin. Non-dimensionalizing (2) using the natural length \(\sqrt{\tilde{\Gamma}\tau}\)[10] and time \(\tau\) gives the coupled equations \[\ddot{r}-\frac{L^{2}}{r^{3}}=-\dot{r}+\frac{\alpha}{r^{3}}-\frac{(1+\alpha)L} {r^{3}}, \tag{3a}\] \[\dot{L}=1-L-\frac{(1+\alpha)\dot{r}}{r}+\frac{\lambda}{r^{2}}, \tag{3b}\] (see Supplementary text) for the Lagrangian dynamics of the active particle whose centroid is at a radial distance of \(r\) from a point vortex, where \(L\equiv r^{2}\hat{\theta}\) is the angular momentum per unit mass of the particle and \(\lambda\equiv\beta\ell^{2}/\tilde{\Gamma}\). Effective centrifugal accelerations, reinforcing the similarity to an IP, arise through the \(L^{2}/r^{3}\) term. Due to the terms containing \(\alpha\), our equations are distinct from those for true IP [10]. We now limit ourselves to particles with apolar, i.e., fore-aft symmetric, shape, so that \(\lambda=0\). We treat the nonlinearities in (3a) & (3b) perturbatively. A _regular_ perturbation approach yields absurd solutions near the origin; in fact equations (3a) and (3b) constitute a singular perturbation problem [51]. The behaviour at very small times and small distances away from the vortex is singular, and relatively violent, unlike the more gentle relaxation to the final state at late times. This feature is exploited in the following to understand the different physics at small and large time. We seek an inner solution at the lowest order for \(t\ll 1\) and \(r\ll 1\), and an outer solution for \(t\gg 1\), where \(r\) could be \(O(1)\) or larger. In contrast to an IP which centrifuges out forever, the outer solution for an active Hookean dimer is steady rotation with \(\mathbf{w}\) and \(\dot{\theta}=1/r_{f}^{2}\) at a constant final distance \(r_{f}\) from the vortex, rendering the particle passive and lifeless at large times. Intriguing physics appears in the inner region, which sets the stage for the rest of this article. As is standard in singular perturbation theory, we recast (3a) and (3b) in the stretched spatial \(\tilde{r}\equiv r/\delta\) and temporal \(\tilde{t}\equiv t/\epsilon\) variables, where \(\epsilon\) and \(\delta\) are as yet unknown, but will be selected to ensure that all derivatives in the stretched variables are \(\mathcal{O}(1)\). 
Dominant balance necessitates \(\epsilon=\delta^{2}\), yielding the inner solution \[L=L_{0}-(1+\alpha)\log\tilde{r}/\tilde{r}_{0}, \tag{4}\] and \[v_{r}^{2}={v_{0}}^{2}-\frac{\alpha+(L-1-\alpha)^{2}}{\tilde{r}^{2}}+\frac{ \alpha+(L_{0}-1-\alpha)^{2}}{\tilde{r}_{0}^{2}}, \tag{5}\] where \(v_{r}\) is the radial speed of the particle, \((\tilde{r}_{0},v_{0},L_{0})\) are the initial stretched radial coordinate, radial speed and angular momentum respectively. Each solution is a ray in the \((t,r)\) plane, and some representative rays are shown in Fig. 1. Caustics arise when two particles collide, i.e., occupy the same spatial location at the same time. If we start out with two rings of particles at slightly different initial radii (say \(r_{0}\) and \(r_{0}+\Delta r\), then the intersection of their rays in the \(r,t\) plane represents an overtaking of the outer ring by the inner, i.e., the occurrence of caustics. Intersections, or caustics, for a few chosen \(r_{0}\), and extremely small \(\Delta r\) are shown by the blue circles in Fig.1(c). For the purpose of demonstration we have here taken \(r^{*}=1\), and equal initial speeds. Performing this exercise for all \(r_{0}\) yields an envelope, shown by the curve joining the blue circles, representing the smallest radial distance at a given time for the occurrence of caustics. This picture is akin to geometrical optics, where a caustic is an envelope tangent to the light rays [52]. We now numerically solve the full equations (1a) & (1b), without noise and external force \(\mathbf{F}\), with the polar flow coupling \(\lambda=0\) (see Supplementary video 3.2). In this case the self-propulsion \(\beta\) can be absorbed in the definition of \(\mathbf{w}\). Early in the process, when \(t=\mathcal{O}(\tau)\), we find that noiseless AOUP, i.e., active Hookean dimers, behave similar to IP, in that they both display centrifugation close to the vortex [see middle panel of Fig. 2(a)]. Such voiding of vortical regions has been observed previously for rigid gyrotactic swimmers [53] and bacteria [2; 35], and is shown to be critical for their transport in turbulent environments [29; 30]. However, the identification of a singularity, namely caustics, for this flow- and motility-induced spatial organisation was missing. At long times, the radial profile of number density reaches a steady state, with a peak at a particular radius [Fig. 2(b)] The extension of the dimer relaxes to zero, so in the absence of noise the AOUP at long time behaves like a tracer particle, consistent with our asymptotic analysis. In contrast, IPs centrifuge out forever, though slower and slower as time progresses. This feature of IPs is closely mimicked by a more robustly motile particle, which we discuss below. The sharp peak in the number density at a particular radial distance suggests the formation of caustics. To confirm this, we study as before the trajectories of pairs of particles separated by \(\Delta r_{0}\) in their radial position at the initial time, averaged over uniformly random initial orientations. Caustics are seen in Fig. 2(c) with an envelope as predicted by the inner solution (5). The scaling of this caustics curve near the vortex singularity is \(r\sim t^{2/3}\) [green curve in Fig. 2(c)], is reminiscent of the swallowtail catastrophe in optical caustics [52] [Fig. 1(c) inset]. We demarcate the regions in the \(r_{0}-\alpha\) plane where caustics occur as those where an intersection of adjacent rays (see Fig 2) takes place in finite time. 
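A minimal numerical sketch of this crossing diagnostic, used in the scan described next, integrates (3a)-(3b) with \(\lambda=0\) for two nearby initial radii and records their first intersection. The initial data (zero radial speed, angular momentum matched to the local fluid) and the value of \(\alpha\) below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the ray-crossing (caustics) diagnostic for the point-vortex
# equations (3a)-(3b) with lambda = 0. Initial conditions and alpha are
# illustrative; the analysis in the text additionally averages over initial
# orientations.

alpha = 0.1

def rhs(t, y):
    r, vr, L = y
    drdt = vr
    dvrdt = -vr + (L**2 + alpha - (1 + alpha) * L) / r**3   # Eq. (3a)
    dLdt = 1.0 - L - (1 + alpha) * vr / r                    # Eq. (3b), lambda = 0
    return [drdt, dvrdt, dLdt]

def trajectory(r0, v0=0.0, L0=1.0, tmax=50.0, n=5000):
    """r(t) for a particle released at radius r0 (L0 = 1: local fluid rotation)."""
    t = np.linspace(0.0, tmax, n)
    sol = solve_ivp(rhs, (0.0, tmax), [r0, v0, L0], t_eval=t, rtol=1e-8)
    return t, sol.y[0]

def crossing_time(r0, dr=1e-3, **kw):
    """First time at which the inner ray overtakes the outer one (a caustic)."""
    t, r_in = trajectory(r0, **kw)
    _, r_out = trajectory(r0 + dr, **kw)
    gap = r_out - r_in
    idx = np.where(gap <= 0.0)[0]
    return t[idx[0]] if idx.size else np.inf

for r0 in (0.2, 0.5, 1.0, 2.0):
    print(f"r0 = {r0:4.1f}  ->  crossing time t_x = {crossing_time(r0):.3f}")
```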
For each value of \(\alpha\), we start a pair of particles at \(r_{0}\) and \(r_{0}+\Delta r_{0}\) and measure \(t_{x}\), the time taken for these nearby trajectories to intersect. The crossing time is averaged over uniformly random initial orientations. At a particular \(r_{0}=r_{c}\), \(t_{x}\) diverges [see Fig. 2 (d)], and caustics do not occur when the initial particle position is beyond this critical radius. This is seen to be comparable to that for IP [10]. Both Active Hookean dimer and IP show power-law behaviour, but with different slopes, for caustics formation time as a function of \(r_{0}\). At long times, the number density of the particles peaks around \(r_{c}\), shown as a vertical dashed line (red) in Figure 2 (b) for \(\alpha=1.0\). The flow coupling \(\alpha\) has a dual role at small to moderate distances from the vortex: (i) it aligns the dimer along the stable principal axis of \(\mathbb{S}\), which has a non-zero radial component; (ii) once aligned, it extends the dimer along this axis, thus competing with the \(1/\tau\) relaxation to zero-motility. However, when the dimers have been centrifuged out to large \(r\), the relaxation term takes over, leading to tracer-like dynamics, and the caustics radii lie Figure 2: **Centrifugation and caustics of dimer in point vortex**: (a) Time frames showing the positions of the particles (blue dots) around a point vortex at the origin for IP and noiseless AOUP respectively [see Supplementary video 3.1 & 3.2]. Particles were initialised with uniformly random initial positions and orientations/velocities. (b) An initially homogeneous number-density (grey circles) peaks near a critical radial distance (green) in the steady state, compared to unsteady density of IP (purple) at a representative \(t\gg 1\). (c) Trajectory (rays) of particles averaged over all initial orientations. The envelope of rays, for particles starting at various initial \(r\), gives rise to caustics (red circles). The green curve shows \(R=t^{\nu}\), with \(\nu=2/3\). In (b), (c) & (d), \(\alpha=1\). (d) The crossing time of adjacent rays separated by \(\Delta r=0.001\) starting at various radial distances \(r\), averaged over uniformly random initial orientations, diverges at a finite critical radius \(r_{c}\) for an active particle (red) and for an inertial particle (blue) averaged over uniformly random initial velocity of unit magnitude. (e) For various \(\alpha\), plotting \(r_{c}\) demarcates the region of caustics, which is the radial distance below which adjacent rays cross each other in a finite time. at intermediate values of \(r\) [see Fig 2 (b)]. ### Active caustics in turbulent flows Although Active Hookean dimers are analytically tractable and offer a conceptual understanding of the coupling between flow and the activity of deformable swimmers, their extension, and hence their intrinsic speed, relax to zero in the absence of noise and flow. The motility parameter \(\beta\) defines a speed only when multiplied by a preferred scale of \(\mathbf{w}\), say its RMS value. In nature, motile organisms generally possess an intrinsic speed scale independent of noise. 
To study such a case, we consider the equations for position \(\mathbf{X}\) and extension \(\mathbf{w}\) for an active dimer with a strongly preferred value of \(|\mathbf{w}|=w_{0}\), Figure 3: **Caustics in turbulent flow.** The blue-yellow colorbar represents the vorticity magnitude in the ambient turbulent flow scaled by the root-mean-square velocity gradient \(\kappa\) and black speckles are the locations of the geometric centers of \(10^{5}\) active dimers with preferred length \(w_{0}\). (a) \(\ell^{2}/w_{0}=1.0\) with (from left to right) \(w_{0}\bar{\beta}=0.0118\) m, \(0.118\) m, and \(1.18\) m. (b) \(\ell^{2}/w_{0}=0\) with (left to right) \(w_{0}\bar{\beta}=0.0118\) m, \(w_{0}\bar{\beta}=0.118\) m, and \(w_{0}\bar{\beta}=1.18\) m, where \(\bar{\beta}\equiv\beta/\kappa\). For intermediate activity the particles display pronounced caustics [see Supplementary video].(c) heat-map of the number-density fluctuation plotted in \((w_{0}\bar{\beta},w_{0}\bar{\ell}^{2})\) plane in meters. The vertical dashed lines, from left to right respectively, depict typical \(w_{0}\bar{\beta}\) for dinoflagellates, ciliates, invertebrate larvae and copepods for a turbulence energy dissipation rate of \(10^{-8}\) m\({}^{2}\) s\({}^{-3}\) (d) number-density fluctuation as a function of activity, showing pronounced agglomeration for intermediate levels of activity. (e) Okubo-Weiss parameter plotted for \(l=0\) and various values of \(w_{0}\bar{\beta}\). (f) Density field of the particles overlaid on the heat-map of \(\left\|\nabla\mathbf{v}\right\|^{2}\) s\({}^{-2}\) where \(\mathbf{v}\) is the velocity field of particles. \[\dot{\mathbf{X}} = \mu\mathbf{F}+\mathbf{U}+\beta\mathbf{w} \tag{6}\] \[\dot{\mathbf{w}} = \frac{1}{\tau}\left(1-\frac{|\mathbf{w}|^{2}}{w_{0}^{2}}\right) \mathbf{w}\,+\,(\alpha\mathrm{S}+\mathbb{A})\cdot\mathbf{w}\] (7) \[- \ell^{2}\nabla^{2}\mathbf{U}\,+\sqrt{2D}\boldsymbol{\eta}(t),\] which introduces a fixed size scale \(w_{0}\) and speed \(v=\beta w_{0}\). Equations (6) and (7) describe an active Brownian particle (ABP) in a flow. The difference between our preferred-length model and the traditional ABP, in which \(|\mathbf{w}|\) is constant, is unimportant. The dynamics resulting from the preferred-length dimer in a point vortex flow also gives rise to caustics in the inner region (see Supplementary), with the motility \(\beta\) playing a more conspicuous role than in AOUP dynamics. To explore the dynamics of a collection of ABPs in unsteady vortical flows, we write a pseudospectral code to solve the Navier-Stokes equations in a \(2\pi\) periodic domain with \(512^{2}\) collocation points and a deterministic external forcing \(F_{0}q\cos qx\), in the stream-function/vorticity formulation [Table 1]. This gives the flow \(\mathbf{U}\) that drives the particle dynamics [see Supplementary]. A one-way coupling is assumed, wherein the ambient flow stirs the particles but particles do not generate flows. We use \(w_{0}\) and the inverse of the root-mean-square velocity gradient \(\kappa\equiv\sqrt{\langle\nabla\mathbf{U}:\nabla\mathbf{U}\rangle}\) of the background flow in the turbulent steady state as length and time scales respectively, which gives the non-dimensional parameters, \((\beta/\kappa,\tau\kappa,\alpha,\ell^{2}/w_{0}^{2})\) and noise strength \(\sqrt{2D/w_{0}^{2}\kappa}\). We fix \(\alpha=1\) and \(\tau=1\), leaving a two-dimensional parameter space \((\beta/\kappa,\,\ell^{2}/w_{0}^{2})\) of activity and polar alignability respectively. 
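A vectorised sketch of the particle update (6)-(7) is given below. For a self-contained illustration the DNS velocity field is replaced by a steady Taylor-Green cellular flow, and the parameter values are placeholders rather than those used for Fig. 3.

```python
import numpy as np

# Sketch of the preferred-length dimer update, Eqs. (6)-(7), for N particles.
# The text advects the particles with 2D Navier-Stokes turbulence from a
# pseudospectral DNS; here a steady Taylor-Green cellular flow stands in for
# the ambient field purely to illustrate the update. Parameters are placeholders.

rng = np.random.default_rng(1)

N = 10**4
w0, beta, tau, alpha, ell, D = 1.0, 0.5, 1.0, 1.0, 0.0, 1e-4
U0, k = 1.0, 1.0
dt, nsteps = 1e-2, 2000

def flow(X):
    x, y = X[:, 0], X[:, 1]
    U = U0 * np.stack([np.sin(k * x) * np.cos(k * y),
                       -np.cos(k * x) * np.sin(k * y)], axis=1)
    dUx_dx = U0 * k * np.cos(k * x) * np.cos(k * y)       # = -dUy_dy
    dUx_dy = -U0 * k * np.sin(k * x) * np.sin(k * y)
    dUy_dx = U0 * k * np.sin(k * x) * np.sin(k * y)
    lapU = -2.0 * k**2 * U                                # Laplacian of this flow
    return U, dUx_dx, dUx_dy, dUy_dx, lapU

X = rng.uniform(0.0, 2 * np.pi, size=(N, 2))
w = 0.5 * w0 * rng.standard_normal((N, 2))

for _ in range(nsteps):
    U, a, b, c, lapU = flow(X)                            # a = dUx/dx, b = dUx/dy, c = dUy/dx
    Sxy, Axy = 0.5 * (b + c), 0.5 * (b - c)               # off-diagonal parts of S and A
    # (alpha*S + A) . w, using incompressibility (dUy/dy = -a)
    Mw = np.stack([(alpha * a) * w[:, 0] + (alpha * Sxy + Axy) * w[:, 1],
                   (alpha * Sxy - Axy) * w[:, 0] - (alpha * a) * w[:, 1]], axis=1)
    relax = (1.0 - (w**2).sum(axis=1, keepdims=True) / w0**2) / tau
    X = (X + (U + beta * w) * dt) % (2 * np.pi)           # Eq. (6), external force F = 0
    w = w + (relax * w + Mw - ell**2 * lapU) * dt \
          + np.sqrt(2 * D * dt) * rng.standard_normal((N, 2))   # Eq. (7)

print("mean |w|/w0 after relaxation:", np.linalg.norm(w, axis=1).mean() / w0)
```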
In a turbulent steady state, we initialise the particles with uniformly random initial position and orientations. In the steady state of particle dynamics, we find tracer-like behavior for small values of motility strength \(\beta\) in which the swimmers get trapped within the vortices, consistent with the single vortex study [see Supplementary video]. For large values of \(\beta\), swimmers exhibit ballistic dynamics, leading to a homogeneous number density of particles. The compelling features of caustics appear at intermediate values of \(\beta\) (see Supplementary movie), where we see preferential sampling and clustering [see Fig. 3 (a) & (b)]. We quantify caustics-induced clustering by measuring the density fluctuation with respect to the initial homogeneous state [see Supplementary methods], and find pronounced caustics for a range of \(\tilde{\ell}\) and \(\tilde{\beta}\) [see Fig. 3 (c)]. We find that for intermediate values of activity \(\tilde{\beta}\), increasing \(\tilde{\ell}\) sharpens the caustics filaments. The density fluctuation exhibits a peak around \(w_{0}\tilde{\beta}\simeq 10^{-1}\) m [see Fig. 3 (d)], which is similar to the dynamics of IP, where for intermediate values of Stokes number \(St\), particles display clustering [54] and sharp caustics [11], as compared to large and small values of \(St\). To quantify how swimmers sample the flow, we calculate the Okubo-Weiss parameter, \(\mathcal{W}=\omega^{2}-2\mathrm{S}:\mathrm{S}\) at the location of particles, where \(\mathrm{S}\) is the symmetric part of the velocity gradient tensor [55] and \(\omega\) the vorticity. \(\mathcal{W}>0\) implies that the particles are in a vortical region, and \(\mathcal{W}<0\) means they are located in a straining region. The distribution of \(\mathcal{W}\) sampled over all particle locations gives the deviation from homogeneous sampling of the flow; when compared with the distribution of \(\mathcal{W}\) over the entire flow domain [see Fig. 3 (e)]. We find that particles cluster preferentially in the straining regions, which coincides with the peaks in the coarse-grained field \(\|\nabla\mathbf{v}\|\) [see Fig. 3], that marks caustics of active particles in flow, akin to the inertial case [11], as might have been anticipated from our stationary vortex studies. We expect some manifestation of "active caustics" in small swimming organisms like ciliates, invertebrate larvae and copepods when the turbulence energy dissipation rates are towards the lower end of the range of values observed in the upper mixed layer of the ocean \(10^{-8}-10^{-6}\) m\({}^{2}\)s\({}^{-3}\)[19] [see Fig. 1 (a)]. ## III Summary We investigate the deterministic dynamics of active dimers in vortical flows, using two different models of active dimers, namely the noiseless limits of AOUPs and ABPs. In the illustrative setting of a single point vortex, we highlight the distinctions and similarities between the centrifugation of IP and the effective centrifugation of motile inertialess particles. We show the formation of caustics in both the dimer types, by analysing the intersection of rays in the \(r-t\) plane. For a range of the strain-rate/orientation coupling parameter \(\alpha\), we demarcate the regimes in the \(\alpha-r\) plane where caustics occur. We study the effect of advection by more general vortical flows in the form of two-dimensional Navier-Stokes turbulence generated by direct numerical simulation. 
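A short sketch of the Okubo-Weiss diagnostic used above, computing \(\mathcal{W}=\omega^{2}-2\,\mathrm{S}\!:\!\mathrm{S}\) by spectral differentiation on the periodic domain and sampling it at the particle positions by nearest-grid-point lookup, is given below. The velocity field and particle positions are assumed to be supplied by the DNS and particle solvers; the synthetic field at the end only makes the snippet self-contained.

```python
import numpy as np

# Okubo-Weiss diagnostic W = omega^2 - 2 S:S on a periodic grid, sampled at
# particle positions. The fields u, v and positions X are assumed to come from
# the DNS / particle solver; a Taylor-Green field is used here only as a stand-in.

def okubo_weiss(u, v, L=2 * np.pi):
    """u, v: (n, n) velocity components on a periodic square of side L."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    def ddx(f): return np.real(np.fft.ifft2(1j * kx * np.fft.fft2(f)))
    def ddy(f): return np.real(np.fft.ifft2(1j * ky * np.fft.fft2(f)))
    ux, uy, vx, vy = ddx(u), ddy(u), ddx(v), ddy(v)
    omega = vx - uy                                   # vorticity
    SdotS = ux**2 + vy**2 + 0.5 * (uy + vx)**2        # S:S
    return omega**2 - 2.0 * SdotS

def sample_at_particles(W, X, L=2 * np.pi):
    n = W.shape[0]
    idx = np.floor(X / L * n).astype(int) % n         # nearest-grid-point lookup
    return W[idx[:, 0], idx[:, 1]]

# Self-contained illustration with a synthetic field and random particles:
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
xx, yy = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(xx) * np.cos(yy), -np.cos(xx) * np.sin(yy)
W = okubo_weiss(u, v)
X = np.random.default_rng(2).uniform(0, 2 * np.pi, (10**4, 2))
print("fraction of samples in straining regions (W < 0):",
      np.mean(sample_at_particles(W, X) < 0))
```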
We use the Okubo-Weiss parameter to characterize the preferential sampling of straining regions by the swimmers. We find that for intermediate values of the dimensionless motility \(\tilde{\beta}\) (self-propulsion speed scaled by flow-velocity difference on the scale of a swimmer), clustering and caustics are more pronounced, similar to the dynamics of IP as the Stokes number \(\mathrm{St}\) is varied, suggesting that \(\tilde{\beta}\) plays the same role as \(\mathrm{St}\). This hitherto unexplored caustics regime is of interest for two reasons. In a formal sense the crossing of worldlines of active particles renders their velocity field multiple-valued. Arguably more important, it is a strikingly effective natural mechanism for close encounters between organisms at low mean concentrations, which should enhance communication and reproduction. \begin{table} \begin{tabular}{l l l l l} Domain & \(k_{f}\) & \(F_{0}\) & \(\nu\) & \(\mu\) \\ \hline \(512^{2}\) & \(3\) m\({}^{-1}\) & \(0.1\) ms\({}^{-1}\) & \(5\times 10^{-6}\) m\({}^{2}\)s\({}^{-1}\) & \(0.01\) s\({}^{-1}\) \\ \end{tabular} \end{table} Table 1: Spectral DNS Parameters Although coarse-graining eliminates multivalued-ness of the velocity field, the accompanying singularity in the density field persists. In natural systems, however, the divergence in particle number density would likely be regularized by inter-particle hydrodynamic, steric and/or behavioral interactions. The possibility of caustics in scenarios like the clustering of phytoplankton in upwelling [32], or biofilm formation in microfluidic vortices [2], offers a mechanism for enhanced interactions in such systems even when quite dilute on the average, and poses formal challenges for describing singular particle-velocity fields. ## Acknowledgements SR acknowledges support from the Science and Engineering Research Board, India, and from the Tata Education and Development Trust, and discussions in the Program on Complex Lagrangian Problems of Particles in Flows, ICTS-TIFR, Bangalore, R & RG acknowledge support from the Department of Atomic Energy, Government of India, under project no. RTI4001. RC acknowledges support from the International Human Frontier Science Program Organization, thanks Michael Shelley for fruitful discussions, and the Prakash lab for valuable insights into plankton dynamics.
2305.13043
Self-Replication, Spontaneous Mutations, and Exponential Genetic Drift in Neural Cellular Automata
This paper reports on patterns exhibiting self-replication with spontaneous, inheritable mutations and exponential genetic drift in Neural Cellular Automata. Despite the models not being explicitly trained for mutation or inheritability, the descendant patterns exponentially drift away from ancestral patterns, even when the automaton is deterministic. While this is far from being the first instance of evolutionary dynamics in a cellular automaton, it is the first to do so by exploiting the power and convenience of Neural Cellular Automata, arguably increasing the space of variations and the opportunity for Open Ended Evolution.
Lana Sinapayen
2023-05-22T13:48:46Z
http://arxiv.org/abs/2305.13043v1
# Self-Replication, Spontaneous Mutations, and Exponential Genetic Drift in Neural Cellular Automata ###### Abstract This paper reports on patterns exhibiting self-replication with spontaneous, inheritable mutations and exponential genetic drift in Neural Cellular Automata. Despite the models not being explicitly trained for mutation or inheritability, the descendant patterns exponentially drift away from ancestral patterns, even when the automaton is deterministic. While this is far from being the first instance of evolutionary dynamics in a cellular automaton, it is the first to do so by exploiting the power and convenience of Neural Cellular Automata, arguably increasing the space of variations and the opportunity for Open Ended Evolution. ## Data Sharing The experiments in this paper are executable online with no other requirements than access to a web browser; the data and the data analysis scripts are also open access. The code used to generate these experimental results is available as an interactive Colab notebook at [https://github.com/LanaSina/NCA_self_replication](https://github.com/LanaSina/NCA_self_replication), as well at the R code used to generate figures and selected videos. The author apologizes in advance for non-optimal code and crimes against Tensorflow. The data is available on Figshare: [https://figshare.com/projects/Self-replicating_Neural_Cellular_Automata/167582](https://figshare.com/projects/Self-replicating_Neural_Cellular_Automata/167582) Videos are available at: [https://youtube.com/playlist?list=PLYuu1RcSnrYRhophmflov_1mx7Qz8AP1P](https://youtube.com/playlist?list=PLYuu1RcSnrYRhophmflov_1mx7Qz8AP1P) ## Introduction Can a closed world, with unchanging rules and without outside influences, produce seemingly endless novelty? This concept called "Open Ended Evolution" is an unsolved problem in Artificial Life: the _Evolution Prize for open-ended evolutionary innovation in a closed system_ has been unclaimed for 17 years (Klyce (2006)). The laws of physics in the time frame of the evolution of Life on Earth can be considered unchanged, but as pointed out in Klyce (2006), it is unclear if Earth can be considered a closed system, due to its known (and hypothetical) exchanges with the rest of the Universe. By contrast, the "laws of biology" are sometimes considered to be continuously changing (Adams et al. (2017)), despite being implemented using unchanging physical laws. Is this a simple issue of definition, or a meaningful contradiction? In this paper, we propose to use Neural Cellular Automata (NCA, Mordvintsev et al. (2020)) to model closed worlds with unchanging rules, and find out how close we can get to Open Ended evolutionary dynamics. Cellular Automata are programs that run on a grid where each cell is defined by its state. The state of each cell is updated depending on this cell's previous state and the state of its neighbors, according to a fixed set of rules. Cellular automata have been used to model some of the functions of Life from the very beginning of their invention. Von Neumann, in his search for a "complicated artificial automata" of which complexity would grow under natural selection, used a cellular automaton to show that self-replication with inheritable mutation is possible in an artificial system (Neumann (1966)). Conway named his most famous cellular automaton "Life" (Izhikevich et al. (2015)), and was interested in finding complex dynamics even before "Life" was found to be Turing complete. 
In 2017, a biological von Neumann cellular automata was even found to be implemented on the back of a lizard (Manukyan et al. (2017)). But most cellular automata have to be hand designed by the experimenter, who either implements known rules driving a given phenomenon (e.g. predator-prey systems (Cattaneo et al. (2006)), reaction diffusion Weimar (1997)), or searches for rules that give an output similar to a known phenomenon (Schepers and Markus (1992)). To facilitate this tedious design process, even before early NCA (Li and Yeh (2001)), there was some interest in automating the discovery of relevant rules through optimization (Clarke et al. (1997)). The recent advances in deep learning may make this task easier, especially the open source, fast converging model proposed by Mordvintsev et al. (2020). NCA only require the experimenter to define an initial state, a target state and a maximal number of computation steps. The rules that make the transformation possible are learned by the network, instead of being designed by the experimenter. While they have not yet been used to model evolution, NCA have proven useful to implement biology-like functions in artificial patterns. Patterns that grow from a seed (Mordvintsev et al. (2020)), similar to the development of biological organisms from egg to adult form; patterns that self-repair (Mordvintsev et al. (2020); Horibe et al. (2021)); patterns that undergo metamorphosis (Najarro et al. (2022)), or parasite and highjack other patterns (Randazzo et al. (2021)). Many non-neural cellular automata have explored the possibility of (Open Ended) evolution, with several organisms controlled by one automaton (Sayama (1999); Oros and Nenhaviv (2007)). Evoloops in particular, while limited in their phenotype diversity (square loops), show complex genetic evolutionary dynamics. The "organisms" in Adams et al. (2017), while highly abstracted from common definitions of organisms or evolution, are even used to formally define Unbounded Innovation and Unbounded Evolution. Yet most publications on NCA use a one-to-one mapping between automata and organisms: one organism is modeled by one automata, and if two organisms interact (for example through parasitism) each follows the rules of its own dedicated automaton. We know of 3 exceptions: Otte et al. (2021), where a NCA is trained to in-paint several images from edges, Cavuoti et al. (2022), where two set of rules are explicitly encoded in the model using hand-designed constraints; and Cisneros et al. (2022) where several organisms are grown and hybridized. Note that these works are not about evolution and therefore do no not have self-replication mechanisms, but Cisneros et al. (2022) in particular (while not being a traditional publication) shows interesting developmental modularity. In this paper we merge the world-rule approach of non-neural cellular automata and the convenience of NCA: we consider each NCA as a world with rules loosely equivalent to the laws of physics of that world, and focus on the issue of self-replication, diversity of organisms, and evolution. In our experiments, we present training techniques that result in self-replication, spontaneous mutations, inheritance, and exponential genetic drift in NCA. ## Methods ### Neural Cellular Automaton This project uses 2-dimensional Neural Cellular Automata (Mordvintsev et al. (2020)). Like many cellular automata, NCA run on a grid where each cell is defined by its state. This model is therefore fully spatially discrete, not continuous. 
The state of a cell is updated depending on the cell's previous state and the state of the cell's neighbors, according to a fixed set of rules. The main characteristic of a NCA is that these update rules are encoded by a Neural Network (Noted _NN_ in Fig. 1). In this paper, the state is a vector of 16 real values between -1 and 1, with the first 4 values corresponding to RGBA channels used to render an image on the NCA's grid. The NCA can therefore be trained using images as RGB targets that the grid must reach from its initial state. The Alpha channel determines whether a cell is alive (Alpha\(>\)0.1) or dead (Alpha\(\leq\)0.1). If the cell is alive, the update rules apply: the cell's state is recalculated by applying the neural network to the neighborhood of the cell. If the cell is dead, the state vector is reset to 16 zeros and no update is applied. During training, the NCA's life span, i.e. one training step, is the number of updates (**time steps**) allowed to reach the target state from the initial state (typically 1 training step = 96 time steps, in keeping with the original NCA paper, except when indicated otherwise). After one training step, the final state of the NCA is evaluated against the target image using the mean squared error as loss function. The weights of the network are updated through gradient descent. Training ends when the maximum number of **training steps** has been reached (a number determined ad hoc by judging loss convergence). The model is then ready to be used and the rules do no change past this point. ### Modified training for self-replication Compared with the original NCA paper, we modify the training procedure of the NCA for some of the experiments: (a) Batch substitution. Like most modern neural networks, the NCA is trained by batches: rather than one initial state, a batch of 8 copies of the initial state are updated at once. In the experiments with "batch substitution", we replace half of the batch with the previous output of the NCA, as shown on Fig. 2. (b) Target alternation. In experiments with several Figure 1: **The NCA is trained to generate a target state in \(n\) time steps starting from an initial seed state. Each cell of the automaton’s grid is represented by a pixel. A cell’s state is a vector or 16 real numbers, 3 of which are used as RGB input to render the image, and 1 is the Alpha channel used to determine if a cell is alive or dead. The remaining 12 values are free parameters. A neural network (NN) takes the 9-cell neighborhood as input, and outputs the updated state of the central cell. The NN is applied to alive (Alpha\(>\)0.1) cells in the grid over \(n\) time steps, then the loss is calculated between the RGB channels of this final state and the target state.** target states instead of one target state, we alternate between the targets at each training step. (c) Synchronous update rules. The original update rules are asynchronous: at each time step, half of the cells are chosen at random to be updated and the other half remain unchanged. In some experiments we instead use synchronous update rules (all cells are updated simultaneously). The training of the asynchronous models succeeds the vast majority of the time (i.e the loss converges), to the point that it is difficult to produce statistics on the failure rate. The synchronous models fail to converge much more often, however when they converge, the results are quantitatively similar to the asynchronous models, if producing qualitatively smoother images transitions. 
(d) Periodic boundary conditions. The grid size is slightly more than twice the target pattern's size. Other parameters are the same as Mordvintsev et al. (2020), most importantly the neighborhood of radius 1 giving a neighborhood of 9 cells, the threshold of 0.1 on the Alpha channel to consider a cell alive rather than dead, and the 2-layer neural network to learn the update rules. ### Calculating the genetic drift This model has no alleles or chromosomes in the DNA representation, so our definition of genetic drift is different from the biological definition. We define genetic drift as the accumulation of neutral mutations in the genetic code through successive generations. In the absence of selection, all mutations in the model are neutral, except from the rare mutations that prevent an organism from replicating; the possibility of these mutations is largely eliminated during training and therefore rare after convergence of the model. To calculate genetic drift, we use models were organisms have two clear life phases: growth and replication. Growth is the development of an egg (a small, square clump of black pixels) into a fully formed organism. Replication is the phase where an organism lays a new egg. We record the value of the state of all cells in the first egg laid by an organism, and call this value the DNA of the organism. The egg develops into an organism of its own, and this organism lays it own first egg. We record that DNA, and so on for 100 generations. Note that there is no fitness-dependant selection: the first offspring is always chosen. We calculate the Mean Squared Error (MSE) between the DNA of one organism and each of its descendants individually. This value is represented by the color on the heatmap of Fig. 3. One row on the heatmap represents the MSE of all generations relative to one reference ancestor: for example row 4 is the MSE of generations 5 to 100 relative to generation 4. Therefore, a diagonal of the heatmap represents the MSE of all pairs of [ancestor, Xth descendant]. For example, the values on the 1st (longest) diagonal are the MSE between all parents and children. The second longest diagonal is the MSE between all grand-parents and grand-children, etc. So the average value of a diagonal is the average genetic distance between ancestor and Xth descendant. Calculating the average genetic distance on an entire lineage gives us the genetic drift through generations: are an organism's grandchildren more genetically different from it than its children? Note that the number of data points decrease through time: for 100 generations, we have 99 pairs of parents and children, but we only have one pair of an organism and its 100th descendant. The same method applied to the values of all cells of an adult organism (rather than just the egg) is used to calculate phenotypic drift. Figure 3: **Genetic drift.** We calculate the Mean Squared Error (MSE) between the DNA of an organism and each of its descendants. This value is represented by the color on the heatmap. One row represents the MSE of all generations relative to one reference ancestor: for example row 4 is the MSE of generations 5 to 100 relative to generation 4. Therefore, a diagonal on the heatmap represents the MSE of all pairs of [ancestor, Xth descendant]. Calculating the average value of each diagonal gives us the genetic drift through generations. Figure 2: **Batch substitution.** We replace half of the input batch at training step 1 with the output of the NCA at training step 0. 
This allows the NCA to learn to self-replicate its own output while simultaneously staying close to the target image. There are 8 batches but for simplicity only 2 are shown here. ## Results ### Self-replication These experiments demonstrate a method to obtain patterns that self-replicate in a NCA. While most uses of NCA in the literature have one initial state A and a fully distinct target state B, we can instead train the NCA to go from A to 2A. After training, the model should be able to go from 2A to 4A, and so forth. In practice, the NCA becomes a replication function for "exactly A", and any minute deviation A* from the target pattern stops the replication. Since the model is not pixel-perfect, its output is never exactly 2A, but rather 2A*, therefore replication always stops at this stage. The solution to this issue is to train the NCA to replicate anything "close enough to A", by using batch substitution at each training step (Fig. 2), replacing half of the batch of initial states A by the replicated states A* generated by the NCA itself. We use the bacteria emoji for this experiment, and the NCA learns to replicate its own output while simultaneously staying close to the target image, as shown in Fig. 4. While this training allows for deviations from the initial target, i.e. mutations, in theory there is no reason for these mutations to be inheritable or unbounded. (This is not in the scope of this paper, but it proved trivial to make the mutations non-inheritable.) In practice, in our small grid, the patterns rapidly crowd each other, so to investigate replication and mutations, we cut out individual patterns and transplant them to an empty grid. A more complex variation of self-replication is to have distinct growth and division phases, such as: A becomes B (growth), B becomes B+A (division). The two phases must be learned by the NCA using two different target states. We demonstrate through the following example, Fig. 5: an egg grows into a fish (Target 1), the fish moves to the left and lays an egg (Target 2 includes both fish and egg). Compared to the bacteria experiment, this introduces one intermediary target, so during training we alternate between training the transition from egg to Target 1 (growth) and from Target 1 to Target 2 (division). Once again we use batch substitution for all training steps, and we transplant each egg to an empty grid to grow undisturbed. Results in Fig. 5 and in this video link show that we do obtain self-replication, and that successive generations show signs of mutation: by generation 98 the fish has lost one of the target's 3 stripes, but generation 99 regains it and generation 100 adds a supernumerary 4th stripe. ### Spontaneous, inheritable mutations When comparing successive generations of fish, we can see that the offspring are always slightly different from the parents, suggesting that spontaneous mutations are occurring somewhere in the process (the training process does not explicitly enforce any DNA-like coding or inheritance). By calculating the distance between the parents and offspring patterns, we find that there is indeed a form of inheritance, as mutations are carried from fish to egg and from egg to fish, therefore influencing the whole lineage. Qualitatively, we see in Fig. 5(c) a lineage where the 3rd black stripe of the fish was lost at generation 98, then gradually regained and followed by a 4th stripe. Fig. 6(a) shows a lineage where a mutation for a forked stripe develops over generations 80 to 90.
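A minimal sketch of the lineage bookkeeping described in the Methods, pairwise MSE between the DNA of an ancestor and each of its descendants followed by averaging along the diagonals of the resulting matrix, is given below. The `dna` array is assumed to come from the NCA rollout; the random-walk stand-in is only there to make the snippet runnable.

```python
import numpy as np

# Lineage-distance bookkeeping: the "DNA" of generation g is the flattened
# 16-channel state of the first egg laid at that generation, and genetic drift
# is the mean of each diagonal of the pairwise MSE matrix. dna[g] is assumed
# to be produced by the (not shown) NCA rollout.

def pairwise_mse(dna):
    """dna: (G, M) array, one flattened DNA vector per generation."""
    G = dna.shape[0]
    mse = np.full((G, G), np.nan)
    for anc in range(G):
        diff = dna[anc + 1:] - dna[anc]
        mse[anc, anc + 1:] = (diff**2).mean(axis=1)   # one heatmap row per ancestor
    return mse

def genetic_drift(mse):
    """Average MSE between an organism and its k-th descendant (k-th diagonal)."""
    G = mse.shape[0]
    return np.array([np.nanmean(np.diag(mse, k)) for k in range(1, G)])

# Toy usage with a random-walk "lineage" standing in for real NCA output:
rng = np.random.default_rng(0)
dna = np.cumsum(0.01 * rng.standard_normal((100, 16 * 8 * 8)), axis=0)
drift = genetic_drift(pairwise_mse(dna))
print("drift after 1, 10, 50 generations:", drift[0], drift[9], drift[49])
```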
Most mutations are not this obvious, and the more a model converges during learning, the less striking the mutations are. Quantitatively, Fig. 6(c) shows that DNA and phenotype are strongly correlated: a form of genetic coding has emerged in the model. Along with Fig. 6(b), it also shows genetic and phenotypic drift along generations, a topic we explore in the next section. The main source of stochasticity in the NCA is the asynchronous update rule. The synchronous model's training is more brittle and often fails to converge, especially if the training has several targets. However, for successful training on the bacteria division task, we still find substantial inheritable mutations through generations (video link). These mutations despite the NCA rules being deterministic could be due to rounding errors that often occur with floating point number representation. It is also possible that stochasticity is introduced elsewhere in the model unbeknownst to the experimenter, or due to the equipment (stochasticity in GPU runs). Note that these causes would still satisfy the definition of closed model by Klyce (2006). Finally, there is the possibility that each division is inherently different from the Figure 4: **Simple self-replication. (a) Results shown for a NCA is trained to self-replicate from a bacteria emoji. (b) To analyze the successive generations without interference from the grid becoming crowded with bacteria, we isolate one bacteria after replication (top or bottom, chosen randomly) and transplant it to a blank grid where it can replicate again. (c) Note the visual differences (mutations) between successive generations: G5 seems to have 2 nuclei (yellow central patch). Mutations extends to the non-RGBA values of each cell’s state. Grid lines are not shown, but each pixel is a distinct cell of the automaton.** previous one, i.e. that the model that is genuinely deterministic but chaotic. This last hypothesis is reinforced by the fact that running the synchronous model from the same starting point always seems to lead to similar final results, even if those results are far from the initial state. This is not the case with asynchronous models, and we focus our analysis on those models in the remainder of the paper. ### Genetic encoding and exponential drift If each NCA is a world with its own laws of physics, what happens when we transplant a creature from one world to another? The transplanted pattern could disintegrate, maintain itself, or become a sort of hybrid. We found that a NCA trained to develop a fish emoj from an egg will convert all input information into fish, including noise or other images, a disappointing but understandable result. Because of the training process, NCA have one overwhelming drive: to develop towards the target pattern. Unlike real worlds (and similarly to teleological misunderstandings of evolution on Earth...) they have a goal that they are trained to always converge towards. This might also be the cause for the sudden stalling of the "near-exponential" curve of Fig. 6(b), although some instances of this model do not stall at 100 generations (because of time constraints, we did not perform a systematic analysis of stalling). It would make sense for the model to reach some limits given its limited expressivity: there are only so many yellow fish one can draw on 70 square pixels. 
A simple solution would be to train several NCA and execute several sets of rules within one space, but that would be a step back towards the concept of one NCA for one organism, and away from our goal of more than one organism in one self-contained NCA world. One way to have several patterns coexist within a single set of rules is to train several eggs to converge to several targets. If the eggs contain the same information, the NCA converges to an average of the several targets, as the same rules apply to all cells of the NCA. If the eggs contain different information, similar to genetic information guiding development, we do obtain different patterns and occasionally stable hybrids of the patterns (Fig. 7(b)). In addition, the exponential shape of the genetic drift curve is maintained. Fig. 7(c) to (d) show quantitative and qualitative analysis of 100 consecutive generations for a model trained for 4000 training steps. We note here that unlike models trained on single patterns, the lineages exhibit frequent extinction: some patterns are not viable and fail to produce eggs. The Figure 5: **Growth and self-replication.** We add a self-replication step to the growth phase first introduced by Mordvintsev et al. (2020). (a) The NCA is trained for 1000 training steps by alternating between an intermediary target (grow fish from egg) and a final target (move fish left and lay egg), as can be seen in the divided loss curve (b). (c) Successive generations show signs of mutation: by generation 98 the fish has lost one of the target’s 3 stripes, but generation 99 regains it and generation 100 adds a supernumerary 4th stripe. Figure 6: **Genetic coding and drift.** A different run of the model in Fig. 5, at training step 1500. (a) MSE between the fish at generation 0 and its descendants. The descendants appear to be all equally different from the 0th generation, except for a jump at generation 82 where the fish develop a forked stripe that is inherited by successive generations. (b) When calculating genetic drift, we find not a linear relationship as in (a), but an exponential increase in MSE until generation 82, where this model stalls (not all models stall in 100 generations). (c) The clear correlation indicates the emergence of a genetic code: DNA differences in the eggs are translated to phenotype differences in the developed organism, and big DNA mutations correspond (mostly) linearly to big phenotype differences. analysis was done on a run where 100 consecutive generations were viable. In some cases, especially in the early stages of training (e.g. training step = 3000, Fig. 7(b)) the phenotypes switch with variable smoothness from one pattern to the other. This is less frequent as training progresses and the model converges. There is exponential drift of the descendants away from the ancestor (Fig. 7(c)), and for the same magnitude of DNA variation (max. 0.10), the magnitude of phenotypic variation is higher: 0.15 for the fish-and-lizard model versus 0.04 for the fish only model. In other words, the same DNA-space codes a greater variety of phenotypes. Although we did not perform a quantitative analysis of when the exponential stalls, it is natural to expect that the qualitatively greater variety of phenotypes indicates a greater space of possibilities, and therefore longer or larger exponential growth than the 1-pattern model. The increase in lineage extinction events, while unexplained, is a caveat to this expectation. 
All in all, the goal of creating several attractors and paths between them is achieved. However, this solution is still unsatisfying, as it could well be closer to "paint by numbers" than to a modular genetic code. In the worst case, a fish's appearance could in theory be fully decided by one out of hundreds of bit being 0 in one egg, and all other values could lead to a lizard. The current training method does not guarantee that the NCA will not use DNA as a discrete identifier. Other coding schemes are possible, for example Fig. 8 shows the results of using a "seed cloud" to code for different pattern. The scheme has some similarities with Cisneros et al. (2022), except that all our seeds contain exactly the same genetic code. In consequence, the seeds initially develop identically, until they make contact with each other and the location of the contact serves to break the symmetry. The directionality and timing of the contact is a cue for differentiation. Only the different positions of the seeds encode the final result. Since each seed starts with the same development but ends up being a different part of the final pattern(s), this might be closer to the type of modularity that characterises a DNA as we know it, where most cells of an organism have the same DNA and are undifferentiated until they are in the right neighborhood at the right time, at which point they differentiate into their final form. ## Discussion Using modified training methods, we show that NCA can exhibit self-replication and spontaneous, inheritable mutations, with runaway dynamics that carry the descendants' genetic code and phenotype away from their ancestors', even in the absence of selection. The expression of the mutations Figure 8: **Spatial modular encoding: (a) Smooth convergence of the spatial modular model. (b) In the initial state, all seed pixels (in black) have the same value, therefore they all develop the same patterns until they come into contact with each other. The directionality and timing of the contact is a cue for differentiation. Some undifferentiated patches differentiate into parts of the flowers and some disappear. The differentiation from initially identical instruction is reminiscent of the modularity of DNA.** Figure 7: **Exponential drift in a 2-organism NCA. (a) The model is trained to develop different DNA into 2 different patterns, as well as replicating the patterns. (b) In some cases, especially before training convergence, the model goes back and forth between fish, lizard, and hybrids in the same lineage. (c) In this other lineage the DNA undergoes a relatively smooth transition, while the phenotype abruptly switches from fish to lizard. (d) The phenotype space is large, and the average difference of DNA (genetic drift) and phenotype (phenotypic drift) between an ancestor and its descendants increases exponentially.** in the organisms' phenotypes are varied, non-repeating, and unexpectedly interesting, with stable inter-species mutants, addition or deletion of stripes in the fish experiment, doubling of the nucleus in the bacteria experiment, and various changes of size. While our experiments satisfy the definition of unbounded innovation and unbounded evolution by Adams et al. (2017) and arguably manages to implement changing biological rules as a subset of fixed physical laws, we still find weaknesses in the model. 1. 
There is no true unlimited diversity of organisms: a model trained to make lizards and fishes never grows a flower, even under directed evolution (where the experimenter imposes a fitness criterion to select organisms). 2. Due to "crowding", the models presented here stop short of exhibiting actual human-out-of-the-loop evolution. While this paper does not discuss the notion of evolutionary complexity, and innovation is left undefined in the Open Ended Evolutionary Innovation prize (Klyce (2006)), we feel that it might not even be warranted to talk about innovation in the absence of function, and our organisms have no functions related to their own survival besides "lay an egg". Ideally, selection would occur by itself and we could observe something closer to Open Endedness, where the basic laws of the world are largely fixed and yet life exponentially grows in complexity through real innovations. There are two major theoretical obstacles to this. Firstly, the organisms in NCA tend to suffer from crowding: because they are closer to waves of information than to physical matter, they can intersect each other and create information from nothing until the grid is "full", rather than competing for space. A specific mechanism must be introduced for the patterns to have adversarial interactions. Secondly and most importantly, the trained models lack expressivity. Deep Neural Networks are made for convergence, and by default NCA converge to one attractor: we must coax them to divergence, to obtain expressive power sufficient for several patterns to coexist. This might be possible by explicitly training the model for inheritable extraordinary mutations, which we have not done here, or by using a brute force approach and training one model on hundreds of target patterns, creating a hundred or more attractors. This might be one of the big differences between AI, which strives for convergence, and ALife, which dreams of divergence. NCA being at the crossroads of both fields makes this conflict more salient. The limitations of NCA force us to imagine biologically implausible paths to evolution, another fundamental aspect of enjoying ALife research. While the mutations presented here are not adaptive, they do accumulate exponentially in the absence of evolutionary pressure, demonstrating perhaps the potential for true Open Ended Evolution in NCA.
2306.05371
New results on the associated Meixner, Charlier, Laguerre, and Krawtchouk polynomials
We give new explicit representations as well as new generating functions for the associated Meixner, Charlier, Laguerre, and Krawtchouk polynomials. The obtained results are then used to derive new generating functions and convolution-type formulas of the corresponding classical polynomials. Some consequences of our results are also mentioned.
Khalid Ahbli
2023-06-05T20:27:59Z
http://arxiv.org/abs/2306.05371v1
# New results on the associated Meixner, Charlier, Laguerre, and Krawtchouk polynomials ###### Abstract We give new explicit representations as well as new generating functions for the associated Meixner, Charlier, Laguerre, and Krawtchouk polynomials. The obtained results are then used to derive new generating functions and convolution-type formulas of the corresponding classical polynomials. Some consequences of our results are also mentioned. Faculty of Sciences of Agadir, Ibn Zohr University, Morocco E-mail: [email protected] _Keywords:_ Explicit representation, Associated orthogonal polynomials, Hypergeometric functions, Generating function. _Mathematics Subject Classification (2020):_ 33C45-05A15-33C20 ## 1 Introduction A set of polynomials \(\{P_{n}(x)\}\) is orthogonal if \(P_{n}(x)\) is a polynomial of exact degree \(n\) and there is a positive measure \(d\mu\) on the real line with an infinite number of points increase and for which all the moments are finite such that \(\int_{\mathbb{R}}P_{n}(x)P_{m}(x)d\mu(x)=h_{n}\delta_{nm},\ n,m=0,1,\cdots,\ (h_{n}>0).\) A necessary and sufficient condition for orthogonality [21] is that \(\{P_{n}(x)\}\) satisfies the three-term recurrence relation \[\begin{array}{l}A_{n}P_{n+1}(x)=\left(B_{n}\,x+C_{n}\right)P_{n}(x)-D_{n}P_ {n-1}(x),\quad n=0,1,2,\cdots,\\ P_{-1}(x)=0,\ P_{0}(x)=1,\end{array} \tag{1.1}\] where \(A_{n}\), \(B_{n}\), \(C_{n}\), \(D_{n}\) are real coefficients such that \[A_{n-1}B_{n-1}B_{n}D_{n}>0,\quad n\geq 1. \tag{1.2}\] The associated orthogonal polynomials \(\{P_{n}(x;\gamma)\}\) are defined by (1.1)-(1.2) in terms of coefficients \(A_{n+\gamma}\), \(B_{n+\gamma}\), \(C_{n+\gamma}\) and \(D_{n+\gamma}\) where \(\gamma\geq 0\) is a real association parameter. These polynomials have recently generated wide interest and some work has been devoted to the task of obtaining explicit representation (or closed-form expression) for these polynomials. This has been done successfully for the associated Jacobi polynomials [1] and their special cases, including the associated Gegenbauer [2], Laguerre and Hermite polynomials [3] and more recently for the associated Meixner-Pollaczek polynomials [4]. Associated orthogonal polynomials have many applications in diverse fields, such as queuing and inventory models, chemical kinetics, population dynamics, and quantum optics. For instance, it was shown in [10] that associated Meixner polynomials are birth and death process polynomials (a stationary Markov process) with rates \(\lambda_{n}=c(n+\gamma+\beta)\), \(\mu_{n}=n+\gamma\), \(0<c<1\). In [18, 19], associated Meixner-Pollaczek and Hermite polynomials were used to construct some new sets of nonlinear coherent states which play an important role in quantum optics. For other works on associated orthogonal polynomials and their applications see the articles [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], the book [20] and their references. In this paper, we will be concerned with the associated Meixner polynomials (AMPs), denoted \(\mathscr{M}_{n}(x;\beta,c,\gamma)\), that correspond to coefficients \(A_{n+\gamma}=c\), \(B_{n+\gamma}=c-1\), \(C_{n+\gamma}=(c+1)(n+\gamma)+\beta c\), \(D_{n+\gamma}=(n+\gamma)(n+\gamma+\beta-1)\). These polynomials were first studied by Ismail et al. in [10] where a generating function was derived. 
They proved that the generating function of these polynomials has the following integral representation: \[\sum_{n=0}^{+\infty}\frac{(c\,t)^{n}}{(\gamma+1)_{n}}\mathscr{M}_{n}(x;\beta,c,\gamma)=\gamma\,(1-c\,t)^{-\beta-x}\,(1-t)^{x}\int_{0}^{1}u^{\gamma-1}\,(1 -c\,t\,u)^{x+\beta-1}\,(1-t\,u)^{-x-1}\,d\,u, \tag{1.3}\] valid for \(\beta>0\), \(\gamma\geq 0\) and \(c\neq 1\) (see [10, p.345(4.4)]). We will show how these polynomials are connected to the associated Meixner-Pollaczek polynomials, \(\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)\), obtained in [22]. Next, we use this connection relation (see Eq.(2.3) below) and the explicit representation of the polynomials \(\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)\), recently obtained by Luo and Raina in [4], to derive a new explicit expression for the polynomials \(\mathscr{M}_{n}(x;\beta,c,\gamma)\). The resulting formula will then be used to obtain a new generating function of these polynomials. Furthermore, we exploit the obtained results to derive new explicit representations as well as generating functions for the associated Charlier, Laguerre, and Krawtchouk polynomials. Different interesting identities, generating functions and convolution-type formulas (in terms of classical polynomials) are also established. Some proofs of the stated results are postponed to Sect. 7. A conclusion is also given at the end of this paper. ## 2 A new explicit formulae and generating function for the AMPs In this section, we will give a new explicit representation for the AMPs in terms of terminating \({}_{4}F_{3}(1)\)-series which we deduced from the one of the associated Meixner-Pollaczek polynomials obtained in [4]. The derived representation is then used to compute a new generating function of these polynomials. ### Explicit representation of the AMPs The AMPs \(\mathscr{M}_{n}(x;\beta,c,\gamma)\) satisfy the three-term recurrence relation: \[\begin{split} c\mathscr{M}_{n+1}(x;\beta,c,\gamma)=[(c-1)x+(c+1) (n+\gamma)&+\beta c]\mathscr{M}_{n}(x;\beta,c,\gamma)\\ &-(n+\gamma)(n+\gamma+\beta-1)\mathscr{M}_{n-1}(x;\beta,c,\gamma),\end{split} \tag{2.1}\] with initial conditions \(\mathscr{M}_{-1}(x;\beta,c,\gamma)=0\) and \(\mathscr{M}_{0}(x;\beta,c,\gamma)=1\). The conditions for orthogonality (1.2) require that \(c>0\), \(c\neq 1\) and \(\gamma+\beta>0\). Now, we proceed to establish an explicit representation for the AMPs. 
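Before doing so, note that the recurrence (2.1) already gives a direct numerical handle on these polynomials, which is convenient for cross-checking closed-form results such as (1.3). A minimal sketch, at illustrative parameter values, is the following.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import poch

# Sketch: evaluate the AMPs from the recurrence (2.1) and compare a truncated
# version of the series in (1.3) with the quadrature of its right-hand side.
# Parameter values are illustrative (beta > 0, gamma > 0, 0 < c < 1, |t| < 1).

def amp(nmax, x, beta, c, gamma):
    """M_0, ..., M_nmax from the three-term recurrence (2.1)."""
    M = np.empty(nmax + 1)
    M[0], prev = 1.0, 0.0                      # M_0 and M_{-1}
    for n in range(nmax):
        M[n + 1] = (((c - 1) * x + (c + 1) * (n + gamma) + beta * c) * M[n]
                    - (n + gamma) * (n + gamma + beta - 1) * prev) / c
        prev = M[n]
    return M

x, beta, c, gamma, t, nmax = 1.5, 2.0, 0.4, 0.7, 0.3, 60
M = amp(nmax, x, beta, c, gamma)

lhs = sum((c * t) ** n / poch(gamma + 1, n) * M[n] for n in range(nmax + 1))

integrand = lambda u: (u ** (gamma - 1) * (1 - c * t * u) ** (x + beta - 1)
                       * (1 - t * u) ** (-x - 1))
rhs = gamma * (1 - c * t) ** (-beta - x) * (1 - t) ** x * quad(integrand, 0.0, 1.0)[0]

print(lhs, rhs)   # the two values should agree up to the truncation error in n
```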
Our main tools here are the explicit representation for associated Meixner-Pollaczek polynomials \(\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)\) obtained Luo and Raina in [4, p.3(1.8)], \[\begin{split}\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)=\frac{(2\nu +\gamma)_{n}}{n!}\sum_{k=0}^{n}e^{i(n-k)\varphi}\left(e^{i\varphi}-e^{-i\varphi }\right)^{k}\frac{(-n)_{k}(\gamma+\nu+ix)_{k}}{(\gamma+1)_{k}(\gamma+2\nu)_{k }}\\ \times_{4}F_{3}\left(\begin{array}{c}k-n,\,\gamma+\nu+ix+k,\, \gamma+2\nu-1,\,\gamma\\ \gamma+\nu+ix,\,\gamma+2\nu+k,\,\gamma+1+k\end{array}\left|1\right.\right), \end{split} \tag{2.2}\] and the following connection relation: \[\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)=\frac{e^{-in\varphi}}{(\gamma+1)_{n }}\mathscr{M}_{n}(ix-\nu;2\nu,e^{-2i\varphi},\gamma), \tag{2.3}\] which can be checked by comparing the recurrence relation (2.1) of \(\mathscr{M}_{n}(x;\beta,c,\gamma)\) with the recurrence relation of \(\mathscr{P}_{n}^{(\nu)}(x;\varphi,\gamma)\) given in [22, p.2256] by \[\begin{split}(n+\gamma+1)\mathscr{P}_{n+1}^{(\nu)}(x;\varphi, \gamma)=2[(n+\gamma+\nu)\cos\varphi+& x\sin\varphi]\mathscr{P}_{n}^{(\nu)}(x; \varphi,\gamma)\\ &-(n+\gamma+2\nu-1)\mathscr{P}_{n-1}^{(\nu)}(x;\varphi,\gamma). \end{split} \tag{2.4}\] Here, \((a)_{k}\) is the Pochhammer symbol defined by \((a)_{0}=1\), \((a)_{k}=a(a+1)\cdots(a+k-1)\), \(k\geq 1\). We are now in position to state the following result. **Proposition 2.1**.: _We have the following explicit representation for the AMPs_ \[\begin{split}\mathscr{M}_{n}(x;\beta,c,\gamma)=c^{-n}\frac{( \gamma+1)_{n}(\gamma+\beta)_{n}}{n!}\sum_{k=0}^{n}(1-c)^{k}\frac{(-n)_{k}( \gamma+\beta+x)_{k}}{(\gamma+1)_{k}(\gamma+\beta)_{k}}\\ \times_{4}F_{3}\left(\begin{array}{c}k-n,\,\gamma+\beta+x+k, \,\gamma+\beta-1,\,\gamma\\ \gamma+\beta+x,\,\gamma+\beta+k,\,\gamma+1+k\end{array}\left|1\right.\right), \end{split} \tag{2.5}\] _valid for \(c>0\), \(c\neq 1\) and \(\gamma+\beta>0\)._ In view of the relation \(\mathscr{M}_{n}(x;\beta,c,\gamma)=c^{-n}\mathscr{M}_{n}(-\beta-x;\beta,c^{-1},\gamma)\), which follows directly from the recurrence relation, we obtain from (2.5) another expression of the AMPs given by \[\mathscr{M}_{n}(x;\beta,c,\gamma)=\frac{(\gamma+1)_{n}(\gamma+\beta)_{n}}{n!}\sum_{k=0}^{n}\tilde{c}^{k}\frac{(-n)_{k}(\gamma-x)_{k}}{(\gamma+1)_{k}( \gamma+\beta)_{k}}{}_{4}F_{3}\left(\begin{array}{c}k-n,\,\gamma-x+k,\, \gamma+\beta-1,\,\gamma\\ \gamma-x,\,\gamma+\beta+k,\,\gamma+1+k\end{array}\left|1\right.\right), \tag{2.6}\] where \(\tilde{c}=\frac{c-1}{c}\). The case \(\gamma=0\) in (2.6) corresponds to the Meixner polynomials [21, p.176 (3.5)]: \[M_{n}(x;\beta,c)=(\beta)_{n}\,{}_{2}F_{1}\left(\begin{array}{c}-n,\,-x\\ \beta\end{array}\left|1-\frac{1}{c}\right.\right). \tag{2.7}\] In the special case \(\gamma+\beta=1\), the polynomials \(\mathscr{M}_{n}(x;\beta,c,\gamma)\) reduce to a particular case of Meixner polynomials. In fact, when \(\gamma+\beta=1\) Eq.(2.6) reduces to \[\mathscr{M}_{n}(x;\beta,c,1-\beta)=(2-\beta)_{n}\sum_{k=0}^{n}\frac{(-n)_{k}( 1-\beta-x)_{k}}{(2-\beta)_{k}}\frac{\tilde{c}^{k}}{k!}=(2-\beta)_{n}\,{}_{2}F_ {1}\left(\begin{array}{c}-n,\,1-\beta-x\\ 2-\beta\end{array}\left|\tilde{c}\right.\right), \tag{2.8}\] and hence \[\mathscr{M}_{n}(x;\beta,c,1-\beta)=M_{n}(x+\beta-1;2-\beta,c). \tag{2.9}\] The above connection formula is known; see [23, Eq.(11)]. 
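Since the \({}_{4}F_{3}\) series in (2.5) and (2.6) terminate, the explicit representation can be evaluated directly and compared with the recurrence (2.1). A small numerical sketch, at illustrative parameter values satisfying \(c>0\), \(c\neq 1\), \(\gamma+\beta>0\), is the following.

```python
import mpmath as mp

# Sketch: numerical comparison of the explicit representation (2.6) with the
# three-term recurrence (2.1), at illustrative parameter values.

def amp_recurrence(n, x, beta, c, gamma):
    prev, cur = mp.mpf(0), mp.mpf(1)            # M_{-1}, M_0
    for m in range(n):
        prev, cur = cur, (((c - 1) * x + (c + 1) * (m + gamma) + beta * c) * cur
                          - (m + gamma) * (m + gamma + beta - 1) * prev) / c
    return cur

def amp_explicit(n, x, beta, c, gamma):
    ct = (c - 1) / c                             # the parameter written c-tilde in (2.6)
    s = mp.mpf(0)
    for k in range(n + 1):
        f43 = mp.hyper([k - n, gamma - x + k, gamma + beta - 1, gamma],
                       [gamma - x, gamma + beta + k, gamma + 1 + k], 1)
        s += (ct**k * mp.rf(-n, k) * mp.rf(gamma - x, k)
              / (mp.rf(gamma + 1, k) * mp.rf(gamma + beta, k))) * f43
    return mp.rf(gamma + 1, n) * mp.rf(gamma + beta, n) / mp.factorial(n) * s

x, beta, c, gamma = mp.mpf("1.5"), mp.mpf("2.0"), mp.mpf("0.4"), mp.mpf("0.7")
for n in range(6):
    # the two columns should coincide
    print(n, amp_recurrence(n, x, beta, c, gamma), amp_explicit(n, x, beta, c, gamma))
```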
**Corollary 2.1**.: _The following finite sum_ \[\sum_{k=0}^{n}t^{k}\frac{(-n)_{k}(a+y)_{k}}{(a+1)_{k}(b+1)_{k}}{}_{4}F _{3}\left(\begin{array}{c}k-n,\,a+y+k,\,a,\,b\\ a+y,\,b+1+k,\,a+1+k\end{array}\left|1\right)\right.\] \[=\frac{n!}{b-a}\left\{\frac{b}{(a+1)_{n}}\,{}_{2}F_{1}\left( \begin{array}{c}1-y,a\\ a-b+1\end{array}\left|t\right){}_{2}F_{1}\left(\begin{array}{c}y,-n-a\\ b-a+1\end{array}\left|t\right)\right.\right. \tag{2.10}\] \[\left.\left.-(1-t)^{n+1}\frac{a}{(b+1)_{n}}\,{}_{2}F_{1}\left( \begin{array}{c}1-y,n+a+1\\ a-b+1\end{array}\left|t\right){}_{2}F_{1}\left(\begin{array}{c}y,1-a\\ b-a+1\end{array}\left|t\right)\right\}\right.\right.\] _is valid for \(a>0\), \(b>-1\), \(b\neq 0\) and \(b-a\neq 0,1,2,...\)._ For special values of parameters (\(y=1\) or \(t=0\)), we obtain: \[\sum_{k=0}^{n}t^{k}\frac{(-n)_{k}}{(b+1)_{k}}{}_{3}F_{2}\left( \begin{array}{c}k-n,\,a,\,b\\ a+1,\,b+1+k\end{array}\left|1\right)= \frac{n!}{b-a}\left\{\frac{b}{(a+1)_{n}}\,{}_{2}F_{1}\left( \begin{array}{c}1,-n-a\\ b-a+1\end{array}\left|t\right)\right.\right.\] \[\left.\left.-(1-t)^{n+1}\frac{a}{(b+1)_{n}}\,{}_{2}F_{1}\left( \begin{array}{c}1,1-a\\ b-a+1\end{array}\left|t\right)\right\},\right.\] and \[{}_{3}F_{2}\left(\begin{array}{c}-n,\,a,\,b\\ a+1,\,b+1\end{array}\left|1\right)=\frac{n!}{b-a}\left\{\frac{b}{(a+1)_{n}}- \frac{a}{(b+1)_{n}}\right\}. \tag{2.11}\] **Remark 2.1**.: _Note that the only known expression of these polynomials was obtained in a quadratic form (cross products) made out of Gauss hypergeometric functions \({}_{2}F_{1}\) as (see [23, Eq(21)]):_ \[\mathscr{M}_{n}(x;\beta,c,\gamma)=\frac{1}{\beta-1}\left\{(\gamma +\beta-1)_{n+1}\,{}_{2}F_{1}\left(\begin{array}{c}x+1,\gamma\\ 2-\beta\end{array}\left|\tilde{c}\right){}_{2}F_{1}\left(\begin{array}{c}-x,-n-\gamma\\ \beta\end{array}\left|\tilde{c}\right)\right.\right. \tag{2.12}\] \[\left.\left.-(\gamma)_{n+1}\,{}_{2}F_{1}\left(\begin{array}{c}x +\beta,\gamma+\beta-1\\ \beta\end{array}\left|\tilde{c}\right){}_{2}F_{1}\left(\begin{array}{c}1- \beta-x,-n-\gamma-\beta+1\\ 2-\beta\end{array}\left|\tilde{c}\right)\right\}\right.\] _where \(\tilde{c}=\frac{c-1}{c}\). This relation is valid under restrictions \(\gamma>0\), \(\gamma+\beta\in\mathbb{R}_{+}^{*}\backslash\{1\}\) and \(\beta\notin\mathbb{N}^{*}\)._ ### A generating function for the AMPs Let us first notice that the generating function (1.3) may be deduced from the generating function for the associated Meixner-Pollaczek polynomials derived in [4, p.3(1.7)] with the help of the connection relation (2.3). Precisely, we have \[\sum_{n=0}^{+\infty}\frac{(c\,t)^{n}}{(\gamma+1)_{n}}\mathscr{M}_{n}(x;\beta, c,\gamma)=(1-c\,t)^{-\beta-x}\,(1-t)^{x}\,F_{1}\left[\gamma,1-\beta-x,1+x; \gamma+1;c\,t,t\right], \tag{2.13}\] where \(F_{1}\) is the first Appell hypergeometric function defined by (7.9). Then, the integral in the R.H.S of (1.3) is just the integral representation of the Appell function \(F_{1}\) (see [27, p.77(4)]). In addition, if we put \(\gamma=0\) in (2.13), we recover the following generating function of Meixner polynomials \[\sum_{n=0}^{+\infty}\frac{(c\,t)^{n}}{n!}M_{n}(x;\beta,c)=(1-c\,t)^{-\beta-x} \,(1-t)^{x}\,. 
\tag{2.14}\] Substitute (2.14) into (1.3) with \((t,\beta,x)=(tu,2-\beta,-x-1)\), integrate the obtained expression, and then use the Rainville formula [25, p.101], \[\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}A(k,n)=\sum_{n=0}^{\infty}\sum_{k=0}^{n}A( k,n-k), \tag{2.15}\] to get an expression of \(\mathscr{M}_{n}(x;\beta,c,\gamma)\) in terms of \(M_{n}(x;\beta,c)\) as follows. **Proposition 2.2**.: _Let \(\gamma>0\). Then, we have the convolution-type formula_ \[\frac{n!\,\mathscr{M}_{n}(x;\beta,c,\gamma)}{(\gamma+1)_{n}}=\sum_{k=0}^{n} \binom{n}{k}\frac{\gamma}{(k+\gamma)}M_{n-k}(x;\beta,c)M_{k}(-x-1;2-\beta,c). \tag{2.16}\] _Moreover, we have the generating function_ \[\sum_{n=0}^{+\infty}\frac{\gamma\,(c\,t)^{n}}{(n+\gamma)n!}M_{n}(x;\beta,c)=F_ {1}\left[\gamma,x+\beta,-x;\gamma+1;c\,t,t\right],\quad|t|<|c|^{-1},\,(\gamma \text{ arbitrary}). \tag{2.17}\] The last formula is obtained by direct computation with the help of generating functions (2.13) and (2.14). We end this subsection by stating a new generating function for the AMPs (see Sect. 7 for a proof). **Theorem 2.1**.: _A generating function for the AMPs is given by_ \[\sum_{n=0}^{+\infty}\frac{(c\,t)^{n}}{(\gamma+\beta)_{n}}\mathscr{M}_{n}(x; \beta,c,\gamma)=\left(1-c\,t\right)^{-1}F_{1}\left[1,\gamma,-x;\gamma+\beta;t, \frac{t(1-c)}{1-ct}\right] \tag{2.18}\] _for \(|t|<\min\{1,\,c^{-1}\}\), in terms of Appell function \(F_{1}\) defined by (7.9). In particular, we have_ \[\sum_{n=0}^{+\infty}M_{n}(x;\beta,c)t^{n}=\left(1-t\right)^{-1}{}_{2}F_{1} \left(\begin{array}{cc}1,&-x\\ \beta&\end{array}\left|\frac{t(1-c)}{c(1-t)}\right.\right). \tag{2.19}\] ## 3 The associated Charlier polynomials The associated Charlier polynomials (ACPs) are defined by the three-term recurrence relation: \[a\,\mathscr{C}_{n+1}(x;a,\gamma)=(n+\gamma+a-x)\mathscr{C}_{n}(x;a,\gamma)-( n+\gamma)\mathscr{C}_{n-1}(x;a,\gamma) \tag{3.1}\] with initial conditions \(\mathscr{C}_{-1}(x;a,\gamma)=0\) and \(\mathscr{C}_{0}(x;a,\gamma)=1\). Applying the criterion (1.2) we find that \(\mathscr{C}_{n}(x;a,\gamma)\) are orthogonal if \(a>0\). By comparing the recurrence relations of the AMPs and ACPs, it is easy to check the limiting relation (see [23, (35)]): \[\mathscr{C}_{n}(x;a,\gamma)=\lim_{\beta\to\infty}\frac{1}{(\gamma+\beta)_{n}} \mathscr{M}_{n}\left(x;\beta,\frac{a}{a+\beta},\gamma\right). \tag{3.2}\] Representation (2.6) provides for ACPs two interesting explicit formulas. The first one is obtained thanks to the relation (3.2), upon using the limit \[\lim_{a\to\infty}\frac{(ax)^{k}}{(ay+b)_{k}}=\left(\frac{x}{y}\right)^{k},\ k \geq 1, \tag{3.3}\] where \(x,y,b\) are fixed, and is given by \[\mathscr{C}_{n}(x;a,\gamma)=\frac{(\gamma+1)_{n}}{n!}\sum_{k=0}^{n}(-a)^{-k}\frac {(-n)_{k}(\gamma-x)_{k}}{(\gamma+1)_{k}}{}_{3}F_{2}\left(\begin{array}{c}k-n, \,\gamma-x+k,\,\gamma\\ \gamma-x,\,\gamma+k+1\end{array}\left|1\right.\right),\ a>0. \tag{3.4}\] The second formula is obtained from (3.4) by applying the transformation ([26, p.142]): \[{}_{3}F_{2}\left(\begin{array}{c}-m,\,a,\,b\\ c,\,d\end{array}\left|1\right.\right)=\frac{(c-a)_{m}}{(c)_{m}}{}_{3}F_{2} \left(\begin{array}{c}-m,\,a,\,d-b\\ a-c+1-m,\,d\end{array}\left|1\right.\right). \tag{3.5}\] It is given by \[\mathscr{C}_{n}(x;a,\gamma)=\sum_{k=0}^{n}(-a)^{-k}\frac{(-n)_{k}(\gamma-x)_{k }}{k!}\ {}_{3}F_{2}\left(\begin{array}{c}-k,\,\gamma,\,k-n\\ -n,\,\gamma-x\end{array}\left|1\right.\right). \tag{3.6}\] Following the same lines as for the AMPs, we derive the generating function for the ACPs. 
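Before stating it, the explicit form (3.4) can be cross-checked against the recurrence (3.1) in exact rational arithmetic; a minimal sketch (helper names and sample parameters are ours):

```python
from fractions import Fraction as F

def poch(a, k):
    # Pochhammer symbol (a)_k
    p = F(1)
    for i in range(k):
        p *= a + i
    return p

def f32(upper, lower, terms):
    # terminating 3F2(upper; lower | 1)
    s = F(0)
    for j in range(terms + 1):
        t = F(1)
        for u in upper:
            t *= poch(u, j)
        for l in lower:
            t /= poch(l, j)
        s += t / poch(1, j)
    return s

def acp_explicit(n, x, a, gamma):
    # the explicit representation (3.4)
    s = F(0)
    for k in range(n + 1):
        s += (F(-1) / a) ** k * poch(-n, k) * poch(gamma - x, k) / poch(gamma + 1, k) \
            * f32([k - n, gamma - x + k, gamma], [gamma - x, gamma + k + 1], n - k)
    return poch(gamma + 1, n) / poch(1, n) * s

def acp_recurrence(nmax, x, a, gamma):
    # the polynomials generated by the recurrence (3.1)
    vals, prev, curr = [F(1)], F(0), F(1)
    for n in range(nmax):
        prev, curr = curr, ((n + gamma + a - x) * curr - (n + gamma) * prev) / a
        vals.append(curr)
    return vals

x, a, gamma = F(4, 3), F(7, 2), F(2, 5)
rec = acp_recurrence(6, x, a, gamma)
assert all(acp_explicit(n, x, a, gamma) == rec[n] for n in range(7))
print("the explicit form (3.4) matches the recurrence (3.1) for n = 0,...,6")
```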
Precisely, we have the following result (see Sect. 7 for a proof). **Proposition 3.1**.: _A generating function for the ACPs is given by_ \[\sum_{n=0}^{+\infty}\frac{t^{n}}{(\gamma+1)_{n}}\mathscr{C}_{n}(x;a,\gamma)=e^ {t}\left(1-\frac{t}{a}\right)^{x}\Phi_{1}\left[\gamma,x+1;\gamma+1;\frac{t}{a },-t\right],\quad|t|<|a|, \tag{3.7}\] _where \(\Phi_{1}\) is the Humbert's confluent hypergeometric function defined by (7.16). In particular, we have_ \[\sum_{n=0}^{+\infty}\frac{t^{n}}{n!}C_{n}(x;a)=e^{t}\left(1-\frac{t}{a}\right) ^{x}. \tag{3.8}\] Let us now apply an alternative method to compute the generating function of \(\mathscr{C}_{n}(x;a,\gamma)\). Multiplying both the sides of (3.1) by \(t^{n}/(\gamma+1)_{n}\) and then summing with respect to \(n\) from \(0\) to \(+\infty\), to get the following first-order differential equation \[t(a-t)\frac{\partial}{\partial t}\mathscr{G}_{\gamma}(x,t)+\left[t^{2}+(x-a- \gamma)t+a\gamma\right]\mathscr{G}_{\gamma}(x,t)=a\gamma \tag{3.9}\] where \(\mathscr{G}_{\gamma}(x,t)\) denotes the L.H.S of (3.7). Next, we make the substitution \(\mathscr{G}_{\gamma}(x,t)=e^{t}\left(1-t/a\right)^{x}\mathscr{H}_{\gamma}(x,t)\) and obtain for \(\mathscr{H}_{\gamma}(x,t)\) the following differential equation: \[t\frac{\partial}{\partial t}\mathscr{H}_{\gamma}(x,t)+\gamma\mathscr{H}_{ \gamma}(x,t)=\gamma e^{-t}\left(1-\frac{t}{a}\right)^{-x-1}. \tag{3.10}\] The solution of the above equation, after taking into account the condition \(\mathscr{G}_{\gamma}(x,0)=1\), is \[\mathscr{H}_{\gamma}(x,t)=\gamma\int_{0}^{1}u^{\gamma-1}e^{-u\,t}\left(1- \frac{u\,t}{a}\right)^{-x-1}du,\quad|t|<\min\{1,\,|a|\}. \tag{3.11}\] Thus, \(\mathscr{G}_{\gamma}(x,t)\) has the following integral representation \[\mathscr{G}_{\gamma}(x,t)=\gamma e^{t}\left(1-\frac{t}{a}\right)^{x}\int_{0}^ {1}u^{\gamma-1}e^{-t\,u}\left(1-\frac{t\,u}{a}\right)^{-x-1}du. \tag{3.12}\] Again, substitute (3.8) into (3.12) with \((t,\beta,x)=(-tu,-a,-x-1)\), integrate the obtained expression, and then use (2.15), to get the following corollary. **Corollary 3.1**.: _Let \(\gamma>0\). Then we have_ \[\frac{n!\,\mathscr{C}_{n}(x;a,\gamma)}{(\gamma+1)_{n}}=\sum_{k=0}^{n}\binom{n}{k }\frac{\gamma(-1)^{k}}{(\gamma+k)}C_{n-k}(x;a)C_{k}(-x-1;-a). \tag{3.13}\] _Consequently, we get the following generating function_ \[\sum_{n=0}^{+\infty}\frac{\gamma}{\gamma+n}C_{n}(x;a)\frac{t^{n}}{n!}=\Phi_{1} \left[\gamma,-x;\gamma+1;\frac{t}{a},t\right],\quad|t|<|a|,\,(\gamma\text{ arbitrary}). \tag{3.14}\] ## 4 The associated Laguerre polynomials The associated Laguerre polynomials (ALPs) can be defined by the three-term recurrence relation: \[(n+\gamma+1)\mathscr{L}_{n+1}^{(\alpha)}(x;\gamma)=[2(n+\gamma)+\alpha+1-x] \mathscr{L}_{n}^{(\alpha)}(x;\gamma)-(n+\gamma+\alpha)\mathscr{L}_{n-1}^{( \alpha)}(x;\gamma) \tag{4.1}\] with initial conditions \(\mathscr{L}_{-1}^{(\alpha)}(x;\gamma)=0\) and \(\mathscr{L}_{0}^{(\alpha)}(x;\gamma)=1\). Again, with the help of the criterion (1.2), we show that \(\mathscr{L}_{n}^{(\alpha)}(x;\gamma)\) are orthogonal if and only if \(\alpha+\gamma>-1\). By comparing (2.1) with (4.1) it is easy to check the limit relation (see [23, (39)]): \[\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=\lim_{c\to 1}\frac{1}{(\gamma+1)_{n}} \mathscr{M}_{n}\left(\frac{x}{1-c};\alpha+1,c,\gamma\right). 
\tag{4.2}\] This relation allows us to write an explicit formula for the ALPs as follows \[\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=\frac{(\gamma+\alpha+1)_{n}}{n!}\sum_{k= 0}^{n}\frac{(-n)_{k}\,x^{k}}{(\gamma+1)_{k}(\gamma+\alpha+1)_{k}}{}_{3}F_{2} \left(\begin{array}{c}k-n,\,\gamma+\alpha,\,\gamma\\ \gamma+\alpha+k+1,\,\gamma+k+1\end{array}\big{|}1\right) \tag{4.3}\] where \(\alpha>-1\). Using the transformation (3.5), these polynomials can be rewritten as follows (see also [13, Eq(1.34)]): \[\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=\frac{(\alpha+1)_{n}}{n!}\sum_{k=0}^{n} \frac{(-n)_{k}\,x^{k}}{(\gamma+1)_{k}(\alpha+1)_{k}}{}_{3}F_{2}\left(\begin{array} []{c}k-n,\,1-\alpha+k,\,\gamma\\ -\alpha-n,\,\gamma+k+1\end{array}\big{|}1\right). \tag{4.4}\] We now give a generating function for the ALPs. It was already found in [3, p.25] by using the method of differential equations. The alternative proof we present here (see Sect.7 below) is direct and simpler. **Proposition 4.1**.: _A generating function of the ALPs is_ \[\sum_{n=0}^{+\infty}t^{n}\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=(1-t)^{-\gamma- \alpha-1}\exp\left(\frac{x\,t}{t-1}\right)\Phi_{1}\left[\gamma,\,\gamma+ \alpha,\,\gamma+1;\,\frac{t}{t-1};\,\frac{-x\,t}{t-1}\right] \tag{4.5}\] _where \(|t|<\frac{1}{2},\,x\in\mathbb{R}\) and \(\Phi_{1}\) is the Humbert confluent hypergeometric function defined by (7.16). In particular, for \(\gamma=0\), we recover the generating function of classical Laguerre polynomials_ \[\sum_{n=0}^{+\infty}t^{n}L_{n}^{(\alpha)}(x)=(1-t)^{-\alpha-1}\exp\left(\frac {x\,t}{t-1}\right),\quad|t|<1,\,|x|<+\infty. \tag{4.6}\] Substituting the connection formula ([3, p.24(2.20)]) \(\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=\sum_{k=0}^{n}\frac{\gamma}{k+\gamma}L_{ n-k}^{(\alpha)}(x)L_{k}^{(-\alpha)}(-x)\) in (4.5), applying (2.15) to the obtained expression, and then identifying the generating function (4.6) in the final expression to get the following result. **Corollary 4.1**.: _A generating function for Laguerre polynomials_ \[\sum_{n=0}^{+\infty}\frac{\gamma}{n+\gamma}L_{n}^{(\alpha)}(x)t^{n}=(1-t)^{- \gamma}\Phi_{1}\left[\gamma,\,\gamma-\alpha,\,\gamma+1;\,\frac{t}{t-1};\,\frac{x \,t}{t-1}\right]\quad(\gamma\text{ arbitrary}). \tag{4.7}\] The above generating function generalizes the one given in [24, Eq(9.12.12)], which we obtain by setting \(\gamma=\alpha\) in (4.7). **Remark 4.1**.: _Notice that, by a different method, the explicit polynomial form (4.3) was found by Askey and Wimp in [3, p.22, Eq(2.8)] and it can also be obtained from the associated Meixner-Pollaczek polynomials by using the limit relation:_ \[\mathscr{L}_{n}^{(\alpha)}(x;\gamma)=\lim_{\varphi\to 0}\mathscr{P}_{n}^{(( \alpha+1)/2)}\left(\frac{-x}{2\sin\varphi};\,\varphi,\,\gamma\right),\] _as was explained by Rahman in [13, p.7]._ ## 5 The associated Krawtchouk polynomials The associated Krawtchouk polynomials (AKPs) can be defined by the three-term recurrence relation: \[\begin{split} p(N-n-\gamma)\mathscr{K}_{n+1}(x;p,N,\gamma)=[ pN+(n+\gamma)(1-2p)-x]\,\mathscr{K}_{n}(x;p,N,\gamma)\\ -(n+\gamma)(1-p)\mathscr{K}_{n-1}(x;p,N,\gamma),\end{split} \tag{5.1}\] with initial conditions \(\mathscr{K}_{-1}(x;p,N,\gamma)=0\) and \(\mathscr{K}_{0}(x;p,N,\gamma)=1\). Applying the criterion (1.2) to (5.1) we see that the orthogonality is obtained in the following cases: * \(p<0,\quad N-\gamma<0\) * \(0<p<1,\quad n=0,1,\cdots,\lfloor N-\gamma\rfloor\) * \(p>1,\quad N-\gamma<0\). The notation \(\lfloor x\rfloor\) stands for the largest integer less than or equal to \(x\). 
The AKPs are related to the AMPs in the following way: \[\mathscr{K}_{n}(x;p,N,\gamma)=\frac{\mathscr{M}_{n}(x;-N,p/(p-1),\gamma)}{(- N+\gamma)_{n}}. \tag{5.2}\] From (2.5) and (5.2) follows the explicit representation for the AKPs \[\begin{split}\mathscr{K}_{n}(x;p,N,\gamma)=\left(\frac{p-1}{p} \right)^{n}\frac{(\gamma+1)_{n}}{n!}\sum_{k=0}^{n}(1-p)^{-k}\frac{(-n)_{k}( \gamma-N+x)_{k}}{(\gamma+1)_{k}(\gamma-N)_{k}}\\ \times_{4}F_{3}\left(\begin{array}{c}k-n,\,\gamma-N+x+k,\, \gamma-N-1,\,\gamma\\ \gamma-N+x,\,\gamma-N+k,\,\gamma+1+k\end{array}\begin{array}{c}\,|1\\ \end{array}\right)\end{split} \tag{5.3}\] valid under restrictions \((i),\;(ii),(iii)\) given above. An alternative form of \(\mathscr{K}_{n}(x;p,N,\gamma)\) stems from (2.6) and is given by \[\mathscr{K}_{n}(x;p,N,\gamma)=\frac{(\gamma+1)_{n}}{n!}\sum_{k=0}^{n}p^{-k} \frac{(-n)_{k}(\gamma-x)_{k}}{(\gamma+1)_{k}(\gamma-N)_{k}}{}_{4}F_{3}\left( \begin{array}{c}k-n,\,\gamma-x+k,\,\gamma-N-1,\,\gamma\\ \gamma-x,\,\gamma-N+k,\,\gamma+1+k\end{array}\begin{array}{c}\,|1\\ \end{array}\right). \tag{5.4}\] Next, we establish the following generating function for the AKPs. **Theorem 5.1**.: _For \(x=x_{j}\), \(j=\{0,1,\cdots,M\}\) (where it is possible that \(M=\infty\)), we have_ \[\sum_{n=0}^{+\infty}t^{n}\mathscr{K}_{n}(x;p,N,\gamma)=(1-t)^{-1}\,F_{1}\left[1,\gamma,-x;\gamma-N;\frac{t(p-1)}{p},\frac{t}{p(t-1)}\right], \tag{5.5}\] _valid for \(\max\left\{\left|\frac{t(p-1)}{p}\right|,\left|\frac{t}{p(t-1)}\right|\right\}<1\)._ It should be noted here that the case when \(n=N-\gamma\in\mathbb{N}\) must be understood by continuity. In fact, in this case the hypergeometric \({}_{4}F_{3}(1)\) in the expression of \(\mathscr{K}_{n}(x;p,N,\gamma)\) reduces to \({}_{3}F_{2}(1)\) and we still have a polynomial of degree \(N-\gamma\). For instance, if we take \(n=N-\gamma\) in (5.4) we get \[\mathscr{K}_{N-\gamma}(x;p,N,\gamma)=\frac{(\gamma+1)_{N-\gamma}}{(N-\gamma)! }\sum_{k=0}^{N-\gamma}p^{-k}\frac{(\gamma-x)_{k}}{(\gamma+1)_{k}}{}_{3}F_{2} \left(\begin{array}{c}\gamma-x+k,\,\gamma-N-1,\,\gamma\\ \gamma-x,\,\gamma+1+k\end{array}\left|1\right). \tag{5.6}\] Therefore, in the case when \(N-\gamma\in\mathbb{N}\), we will need a special notation for the generating function of the AKPs. We define the \(N\)-th partial sum of a power series in \(t\) by \[\left[f(t)\right]_{N}:=\sum_{k=0}^{N}\frac{f^{(k)}(0)}{k!}t^{k}, \tag{5.7}\] for every function \(f\) for which \(f^{(k)}(0)\), \(k=0,1,\cdots,N\) exists. Then, for \(x=0,1,2,\cdots,N-\gamma\), we have the following generating function for the AKPs \[\sum_{n=0}^{N-\gamma}t^{n}\mathscr{K}_{n}(x;p,N,\gamma)=\left[(1-t)^{-1}\,F_{ 1}\left[1,\gamma,-x;\gamma-N;\frac{t(p-1)}{p},\frac{t}{p(t-1)}\right]\right]_{ N-\gamma}. \tag{5.8}\] Taking \(\gamma=0\) in the above relation gives the generating function of the classical Krawtchouk polynomials, \(K_{n}(x;p,N)=\mathscr{K}_{n}(x;p,N,0)\), obtained for \(x=0,1,\cdots,N\) in [24, (1.10.13)]: \[\sum_{n=0}^{N}t^{n}K_{n}(x;p,N)=\left[(1-t)^{-1}\,{}_{3}F_{2}\left(\begin{array} []{cc}1,\,-x\\ -N\end{array}\left|\frac{t}{p(t-1)}\right)\right]_{N}, \tag{5.9}\] where \(N\) is a nonnegative integer. ## 6 Concluding remark In this paper, we have established explicit representation as well as generating function of the AMPs. These results are then used to derive similar relations for a sequence of associated orthogonal polynomials which belong to the same family by limiting procedures or connection formula. 
We recall that orthogonality measures of AMPs, ACPs, and AKPs are unknown. An attempt to get the orthogonality measures for the AMPs and ACPs was made in [10] where the authors compute the Stieltjes transform of the orthogonality measures of these two polynomials. They showed that these measures are unique. As is well known, if elements of one set of orthogonal polynomials converge to those of another set, and the measures are uniquely determined, then there must be the same corresponding limit relation for the measures. This means that it suffices to find the orthogonality measure of the AMPs to deduce those of the other related polynomials. This question may be the subject of forthcoming works.
## 7 Proofs
This section is devoted to some technical proofs. Proof of Corollary 2.1.: By combining (2.12) and (2.6) and then making the modifications \((t,a,b,y)\leftarrow(\tilde{c},\gamma,\gamma+\beta-1,-x)\), we can readily obtain the following finite sum formula for \({}_{4}F_{3}(1)\): \[\frac{(a+1)_{n}(b+1)_{n}}{n!}\sum_{k=0}^{n}t^{k}\frac{(-n)_{k}(a+y)_{k}}{(a+1)_{k}(b+1)_{k}}{}_{4}F_{3}\left(\begin{array}{c}k-n,\,a+y+k,\,a,\,b\\ a+y,\,b+1+k,\,a+1+k\end{array}\left|1\right)\right.\] \[=\frac{1}{b-a}\left\{(b)_{n+1}\,{}_{2}F_{1}\left(\begin{array}{c}1-y,a\\ a-b+1\end{array}\left|t\right){}_{2}F_{1}\left(\begin{array}{c}y,-n-a\\ b-a+1\end{array}\left|t\right)\right.\right. \tag{7.1}\] \[\left.\left.-(a)_{n+1}\,{}_{2}F_{1}\left(\begin{array}{c}b-a+1-y,b\\ b-a+1\end{array}\left|t\right){}_{2}F_{1}\left(\begin{array}{c}a-b+y,-n-b\\ a-b+1\end{array}\left|t\right)\right\}.\right.\right.\] Next, to the two \({}_{2}F_{1}\)'s in the second line of the RHS of the above equation, we apply the Euler transformation [25, p.33(21)]: \[{}_{2}F_{1}\left(\begin{array}{c}A,B\\ C\end{array}\left|z\right)=(1-z)^{C-A-B}{}_{2}F_{1}\left(\begin{array}{c}C-A,C-B\\ C\end{array}\left|z\right), \tag{7.2}\] where \(C\neq 0,-1,-2,...\) and \(|\arg(1-z)|<\pi\), with \(A=b-a+1-y,\ B=b,\ C=b-a+1,\ z=t\) in the first \({}_{2}F_{1}\) and \(A=a-b+y,\ B=-n-b,\ C=a-b+1,\ z=t\) in the second, to get the formula in (2.10). 
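The terminating evaluation (2.11) can also be confirmed independently by a direct computation; a minimal sketch in exact rational arithmetic (helper names and sample parameters are ours):

```python
from fractions import Fraction as F

def poch(a, k):
    # Pochhammer symbol (a)_k
    p = F(1)
    for i in range(k):
        p *= a + i
    return p

def lhs(n, a, b):
    # terminating 3F2(-n, a, b; a+1, b+1 | 1), the left-hand side of (2.11)
    return sum(poch(-n, j) * poch(a, j) * poch(b, j)
               / (poch(a + 1, j) * poch(b + 1, j) * poch(1, j)) for j in range(n + 1))

for n in range(8):
    for a, b in [(F(1), F(5, 2)), (F(1, 3), F(5, 2)), (F(3, 4), F(7, 5))]:
        rhs = poch(1, n) / (b - a) * (b / poch(a + 1, n) - a / poch(b + 1, n))
        assert lhs(n, a, b) == rhs
print("(2.11) confirmed")
```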
Proof of Theorem 2.1.: Denoting the left-hand side of (2.18) by \(\Upsilon(x,t)\) and substituting the expression of \(\mathscr{M}_{n}(x;\beta,c,\gamma)\), we obtain \[\Upsilon(x,t) =\sum_{n=0}^{+\infty}\frac{t^{n}(\gamma+1)_{n}}{n!}\sum_{k=0}^{n} (1-c)^{k}\frac{(-n)_{k}(\gamma+\beta+x)_{k}}{(\gamma+1)_{k}(\gamma+\beta)_{k }}{}_{4}F_{3}\left(\begin{array}{c}k-n,\,\gamma+\beta+x+k,\,\gamma+\beta-1, \,\gamma\\ \gamma+\beta+x,\,\gamma+\beta+k,\,\gamma+1+k\end{array}\left|1\right)\] \[=\sum_{n=0}^{+\infty}\sum_{k=0}^{n}\sum_{j=0}^{n-k}\frac{t^{n}(1 -c)^{k}}{n!j!}\frac{(-n)_{k}(-n-k)_{j}(\gamma+1)_{n}(\gamma+\beta+x)_{k}( \gamma+\beta+x+k)_{j}(\gamma+\beta-1)_{j}(\gamma)_{j}}{(\gamma+1)_{k}(\gamma+ \beta)_{k}(\gamma+\beta+x)_{j}(\gamma+\beta+k)_{j}(\gamma+1+k)_{j}}\] \[=\sum_{n=0}^{+\infty}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty} \frac{t^{n}(t(c-1))^{k}(-t)^{j}}{n!j!}\frac{(\gamma+1+k+j)_{n}(\gamma+\beta+ x)_{k}(\gamma+\beta+x+k)_{j}(\gamma+\beta-1)_{j}(\gamma)_{j}}{(\gamma+\beta)_{k}(\gamma+ \beta+x)_{j}(\gamma+\beta+k)_{j}},\] where we have used, respectively, the series transformation [25, p.102(17)]: \[\sum_{n=0}^{\infty}\sum_{k=0}^{n}\sum_{j=0}^{n-k}A(j,k,n)=\sum_{n=0}^{\infty} \sum_{k=0}^{\infty}\sum_{j=0}^{\infty}A(j,k,n+k+j), \tag{7.3}\] and the identities \[\frac{(-n-k-j)_{k}(-n-j)_{j}}{(n+k+j)!}=\frac{(-1)^{k+j}}{n!}, \tag{7.4}\] \[\frac{(\gamma+1)_{n+k+j}}{(\gamma+1)_{k}(\gamma+1+k)_{j}}=(\gamma+1+k+j)_{n}, \tag{7.5}\] obtained by virtue of relations \[(-n)_{k}=(-1)^{k}\frac{n!}{(n-k)!},\quad k=0\leq k\leq n, \tag{7.6}\] \[(a)_{n+m}=(a)_{n}(a+n)_{m}=(a)_{m}(a+m)_{n},\ a\in\mathbb{C},\ n,m\in\mathbb{N}. \tag{7.7}\] After summation over \(n\) in the last expression of \(\Upsilon(x,t)\), we obtain \[\Upsilon(x,t)=(1-t)^{-\gamma-1}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\left( \frac{t(c-1)}{1-t}\right)^{k}\frac{\left(\frac{t}{t-1}\right)^{j}}{j!}\frac{( \gamma+\beta+x)_{k}(\gamma+\beta+x+k)_{j}(\gamma+\beta-1)_{j}(\gamma)_{j}}{( \gamma+\beta)_{k}(\gamma+\beta+x)_{j}(\gamma+\beta+k)_{j}}.\] We apply again the identity (7.7) to get, after simplifications, \[\Upsilon(x,t) = (1-t)^{-\gamma-1}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\left( \frac{t(c-1)}{1-t}\right)^{k}\frac{\left(\frac{t}{t-1}\right)^{j}}{j!}\frac{( \gamma+\beta+x+j)_{k}(\gamma+\beta-1)_{j}(\gamma)_{j}}{(\gamma+\beta)_{j}( \gamma+\beta+j)_{k}}\] \[= (1-t)^{-\gamma-1}\sum_{j=0}^{+\infty}\frac{(\gamma+\beta-1)_{j} (\gamma)_{j}}{(\gamma+\beta)_{j}}{}_{2}F_{1}\left(\begin{array}{c}\gamma+ \beta+x+j,1\\ \gamma+\beta+j\end{array}\left|\frac{t(c-1)}{1-t}\right.\right)\frac{\left( \frac{t}{t-1}\right)^{j}}{j!}.\] Next, to \({}_{2}F_{1}\) in the above equation, we apply the Euler transformation [25, p.33(21)]: \[{}_{2}F_{1}\left(\begin{array}{c}a,b\\ e\end{array}\left|z\right)=(1-z)^{e-a-b}{}_{2}F_{1}\left(\begin{array}{c}e-a,e-b\\ e\end{array}\left|z\right), \tag{7.8}\] where \(e\neq 0,-1,-2,...,|\arg(1-z)|<\pi\), to get \[\Upsilon(x,t)=(1-t)^{x-\gamma}(1-ct)^{-x-1}\sum_{j=0}^{+\infty}\frac{(\gamma+ \beta-1)_{j}(\gamma)_{j}}{(\gamma+\beta)_{j}}{}_{2}F_{1}\left(\begin{array}{ c}-x,\,\gamma+\beta-1+j\\ \gamma+\beta+j\end{array}\left|\frac{t(c-1)}{1-t}\right.\right)\frac{\left( \frac{t}{t-1}\right)^{j}}{j!}.\] We identify the infinite series as the Appell hypergeometric function \(F_{1}\) defined by ([25, p.53(4)]): \[\begin{split} F_{1}\left[\alpha,\,\beta_{1},\,\beta_{2};\,\sigma; \,x,\,y\right]&=\sum_{m,n=0}^{+\infty}\frac{(\alpha)_{m+n}(\beta_{1})_{m}( \beta_{2})_{n}}{(\sigma)_{m+n}}\frac{x^{m}}{m!}\frac{y^{n}}{n!}\\ 
&=\sum_{m=0}^{+\infty}\frac{(\alpha)_{m}(\beta_{1})_{m}}{(\sigma)_{m }}{}_{2}F_{1}\left(\begin{array}{c}\alpha+m,\beta_{2}\\ \sigma+m\end{array}\left|y\right)\frac{x^{m}}{m!},\quad\max\{|x|,|y|\}<1.\end{split} \tag{7.9}\] Thus, we have \[\Upsilon(x,t)=(1-t)^{x-\gamma}(1-ct)^{-x-1}F_{1}\left[\gamma+\beta-1;\gamma,- x;\gamma+\beta;\frac{t}{t-1},\frac{t(1-c)}{t-1}\right].\] To get (2.18), it suffices to apply the transformation (see [27, p.78]): \[F_{1}\left[\alpha,\,\beta_{1},\,\beta_{2};\,\sigma;\,x,\,y\right]=(1-x)^{- \beta_{1}}(1-y)^{-\beta_{2}}F_{1}\left[\sigma-\alpha,\,\beta_{1},\,\beta_{2};\, \sigma;\,\frac{x}{x-1},\,\frac{y}{y-1}\right] \tag{7.10}\] Finally, the formula (2.19) is a particular case of [24, Eq(9.10.13)] and it is obtained here directly by putting \(\gamma=0\) in (2.18). The proof is complete. Proof of Proposition 3.1.: In fact, recalling the expression of (3.4) and denoting the left-hand side of (3.7) by \(\mathscr{G}(x,t)\), we obtain \[\mathscr{G}(x,t) = \sum_{n=0}^{+\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}(-a)^{-k}\frac{ (-n)_{k}(\gamma-x)_{k}}{(\gamma+1)_{k}}{}_{3}F_{2}\left(\begin{array}{c}k-n,\,\gamma-x+k,\,\gamma\\ \gamma-x,\,\gamma+k+1\end{array}\left|1\right.\right) \tag{7.11}\] \[= \sum_{n=0}^{+\infty}\sum_{k=0}^{n}\sum_{j=0}^{n-k}\frac{t^{n}(-a )^{-k}}{n!j!}\frac{(-n)_{k}(k-n)_{j}(\gamma-x)_{k}(\gamma-x+k)_{j}(\gamma)_{j} }{(\gamma+1)_{k}(\gamma-x)_{j}(\gamma+k+1)_{j}}\] (7.12) \[= \sum_{n=0}^{+\infty}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\frac{ t^{n+k+j}(-a)^{-k}}{(n+k+j)!j!}\frac{(-n-k-j)_{k}(-n-j)_{j}(\gamma-x)_{k}( \gamma-x+k)_{j}(\gamma)_{j}}{(\gamma+1)_{k}(\gamma-x)_{j}(\gamma+k+1)_{j}}. \tag{7.13}\] These calculations can be done by applying the identity (7.3). Next, applying (7.4) and the relation \((\gamma+y)_{k}(\gamma+y+k)_{j}=(\gamma+y)_{j}(\gamma+y+j)_{k}\), for \(y=-x,\ 1\), to the last expression of \(\mathscr{G}(x,t)\) to get \[\mathscr{G}(x,t) = e^{t}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\frac{(\gamma-x+j) _{k}(\gamma)_{j}}{(\gamma+1+j)_{k}(\gamma+1)_{j}}\left(\frac{t}{a}\right)^{k} \frac{(-t)^{j}}{j!} \tag{7.14}\] \[= e^{t}\sum_{j=0}^{+\infty}\frac{(\gamma)_{j}}{(\gamma+1)_{j}}{}_ {2}F_{1}\left(\begin{array}{c}\gamma-x+j,\,1\\ \gamma+1+j\end{array}\left|\frac{t}{a}\right.\right)\frac{(-t)^{j}}{j!}.\] Next, to \({}_{2}F_{1}\) in (7.14) we apply the Euler transformation (7.8), to get \[\mathscr{G}(x,t)=e^{t}\left(1-\frac{t}{a}\right)^{x}\sum_{j=0}^{+\infty}\frac {(\gamma)_{j}}{(\gamma+1)_{j}}{}_{2}F_{1}\left(\begin{array}{c}\gamma+j,\,x +1\\ \gamma+1+j\end{array}\left|\frac{t}{a}\right.\right)\frac{(-t)^{j}}{j!}. \tag{7.15}\] By recognizing the Humbert confluent hypergeometric function \(\Phi_{1}\) defined by [25, p.58(36)]: \[\begin{split}\Phi_{1}\left[\alpha_{1},\,\lambda;\,\alpha_{2};\,x, \,y\right]&=\sum_{m,n=0}^{+\infty}\frac{(\alpha_{1})_{m+n}(\lambda)_{m}}{( \alpha_{2})_{m+n}}\frac{x^{m}}{m!}\frac{y^{n}}{n!}\\ &=\sum_{n=0}^{+\infty}\frac{(\alpha_{1})_{n}}{(\alpha_{2})_{n}}{}_ {2}F_{1}\left(\begin{array}{c}\alpha_{1}+n,\,\,\lambda\\ \alpha_{2}+n\end{array}\left|x\right.\right)\frac{y^{n}}{n!},\quad\left|x \right|<1,\,\left|y\right|<+\infty.\end{split} \tag{7.16}\] in the right-hand side of (7.15), we complete the proof of the Proposition 3.1. Proof of Proposition 4.1.: Denote the left-hand side of (4.5) by \(\Lambda(x,t)\). 
Similar calculations to the ones of the proof of Proposition 3.1 give \[\Lambda(x,t) = \sum_{n=0}^{+\infty}\frac{t^{n}}{n!}\sum_{k=0}^{n}\frac{(\gamma+ \alpha+1)_{n}(-n)_{k}\,x^{k}}{(\gamma+1)_{k}(\gamma+\alpha+1)_{k}}{}_{3}F_{2} \left(\begin{array}{c}k-n,\,\gamma+\alpha,\,\gamma\\ \gamma+\alpha+k+1,\,\gamma+k+1\end{array}\left|1\right.\right) \tag{7.17}\] \[= \sum_{n=0}^{+\infty}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\frac {(\gamma+\alpha+1)_{n+k+j}t^{n}}{n!}\frac{(-xt)^{k}(-t)^{j}(\gamma+\alpha)_{j} (\gamma)_{j}}{j!(\gamma+1)_{k+j}}\] (7.18) \[= \sum_{n=0}^{+\infty}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty}\frac {(\gamma+\alpha+1+k+j)_{n}t^{n}}{n!}\frac{(-xt)^{k}(-t)^{j}(\gamma+\alpha)_{j} (\gamma)_{j}}{j!(\gamma+1)_{k+j}}\] (7.19) \[= (1-t)^{-\gamma-\alpha-1}\sum_{k=0}^{+\infty}\sum_{j=0}^{+\infty} \frac{(\frac{xt}{t-1})^{k}(\frac{t}{t-1})^{j}(\gamma+\alpha)_{j}(\gamma)_{j}}{ j!(\gamma+1)_{k+j}}\] (7.20) \[= (1-t)^{-\gamma-\alpha-1}\sum_{j=0}^{+\infty}\frac{(\frac{t}{t-1} )^{j}(\gamma+\alpha)_{j}(\gamma)_{j}}{j!(\gamma+1)_{j}}{}_{1}F_{1}\left( \begin{array}{c}1\\ \gamma+1+j\end{array}\left|\frac{xt}{t-1}\right.\right). \tag{7.21}\] Next, we use the Kummer transformation [25, p.37(7)]: \[{}_{1}F_{1}\left(a;b;z\right)=e^{z}\,{}_{1}F_{1}\left(b-a;b;-z\right), \tag{7.22}\] to get \[\Lambda(x,t)=(1-t)^{-\gamma-\alpha-1}\exp\left(\frac{xt}{t-1}\right)\sum_{j=0} ^{+\infty}\frac{(\frac{t}{t-1})^{j}(\gamma+\alpha)_{j}(\gamma)_{j}}{j!(\gamma+ 1)_{j}}{}_{1}F_{1}\left(\begin{array}{c}\gamma+j\\ \gamma+1+j\end{array}\left|\frac{-xt}{t-1}\right.\right). \tag{7.23}\] Identification of the summation over \(j\) as the Humbert confluent hypergeometric function \(\Phi_{1}\) then completes the proof of Proposition 4.1. ### Acknowledgments I would like to thank Professor Zouhair Mouayn for his comments and invaluable suggestions that improved the presentation of this manuscript. I would like also to thank the Moroccan Association of Harmonic Analysis and Spectral Geometry. ### Data availability statement All data generated or analyzed during this study are included in this published article.
2304.05508
Unilinear residuated lattices: axiomatization, varieties and FEP
We characterize all residuated lattices that have height equal to $3$ and show that the variety they generate has continuum-many subvarieties. More generally, we study unilinear residuated lattices: their lattice is a union of disjoint incomparable chains, with bounds added. We we give two general constructions of unilinear residuated lattices, provide an axiomatization and a proof-theoretic calculus for the variety they generate, and prove the finite model property for various subvarieties.
Nick Galatos, Xiao Zhuang
2023-04-11T21:26:40Z
http://arxiv.org/abs/2304.05508v1
# Unilinear residuated lattices: ###### Abstract. We characterize all residuated lattices that have height equal to \(3\) and show that the variety they generate has continuum-many subvarieties. More generally, we study unilinear residuated lattices: their lattice is a union of disjoint incomparable chains, with bounds added. We we give two general constructions of unilinear residuated lattices, provide an axiomatization and a proof-theoretic calculus for the variety they generate, and prove the finite model property for various subvarieties. Key words and phrases:unilinear residuated lattices, axiomatization, subvarieties, finite embeddability property 2020 Mathematics Subject Classification: 06F05; 08B15, 03G10. 03B47 ## 1. Introduction Residuated lattices generalize various well-known algebraic structures such as lattice-ordered groups, the ideals of a unital ring, and relation algebras, among others. They also form algebraic semantics for various substructural logics, such as classical, intuitionistic, relevance, linear and many-valued logic; as a result further examples of residuated lattices include Boolean, Heyting, MV and BL-algebras. We refer the reader to [10] for an introduction to residuated lattices and substructural logics. A substantial amount of work has focused on the study of totally-ordered residuated lattices (residuated chains) and the variety they generate (semilinear residuated lattices). Here, we start our study by exploring the other extreme: residuated lattices whose elements form an antichain, with two bounds added to obtain a lattice. In Section 2, we show that all residuated lattices of height \(3\) are precisely the ones consisting of two parts: a zero-cancellative monoid and a semigroup of at most three elements, and we specify the process for putting these two parts together. In Section 3 we provide an axiomatization for the positive universal class of residuated lattices of height up to three and of the variety \(\mathsf{M}\) it generates. More generally, we consider the class \(\mathsf{URL}\) of _unilinear_ residuated lattices: they are based on disjoint unions of incomparable chains with two additional bounds. We axiomatize the positive universal class \(\mathsf{URL}\) and the variety \(\mathsf{SRL}\) of _seminilinear_ residuated lattices it generates. Moreover, we show that the finitely subdirectly irreducible members of \(\mathsf{SRL}\) are precisely the unilinear ones. In the particular case of \(\mathsf{M}\), the simplicity of height-\(3\) lattices directly gives the semisimplicity of \(\mathsf{M}\), but we further show that the variety \(\mathsf{bM}\), containing algebras on the expanded language that includes the bounds, is a discriminator variety. We conclude the section with a discussion of the proof-theory of \(\mathsf{SRL}\). In particular we present a hypersequent calculus for \(\mathsf{SRL}\) that enjoys the cut-elimination property, thus resulting in an analytic system for \(\mathsf{SRL}\). In Section 4 we show that there are continuum-many subvarieties of \(\mathsf{M}\). These are actually subvarieties of \(\mathsf{CM}_{\mathsf{G}}\), the variety generated by height-3 unilinear residuated lattices where the middle layer is an abelian group. In fact we show that subvarieties of \(\mathsf{CM}_{\mathsf{G}}\) correspond to \(\mathsf{ISP}_{\mathsf{U}}\)-classes of abelian groups and we further present a completely combinatorial characterization of the subvariety lattice of \(\mathsf{CM}_{\mathsf{G}}\) (without any reference to group theory). 
We extend this characterization a little further, by allowing the middle layer of the residuated lattice to also include some semigroup elements, coming from the characterization in Section 2. Section 5 contains a proof of the finite embeddability property (FEP) for the variety \(\mathsf{CM}_{\mathsf{G}}\), thus contrasting the complexity coming from the continuum-many subvarieties with the fact that the universal theory of \(\mathsf{CM}_{\mathsf{G}}\) is decidable. We also establish the FEP for more subvarieties of \(\mathsf{SRL}\), which do not have the height-3 restriction. To be more precise, the FEP holds for every subvariety of \(\mathsf{SRL}\) that is axiomatized by equations in the language of multiplication, join and 1, and satisfies any weak commutativity axiom and any knotted rule; we establish this result by using the method of residuated frames. Finally, in Section 6, we focus our attention on unilinear residuated lattices \(\mathbf{R}\) where \(M:=R\setminus\{\bot,\top\}\) is a submonoid and the bounds are absorbing with respect to the elements of \(M\); we call such unilinear residuated lattices _compact_. We provide two constructions of compact residuated lattices, with the first one coming from a finite cyclic monoid. In the second one \(M\) is the cartesian product of a residuated chain and a cancellative monoid, relative to a 2-cocycle; thus it is a generalization of the semidirect product of monoids. We continue with some preliminaries on residuated lattices. A _residuated lattice_ is an algebra \((R,\wedge,\vee,\cdot,\setminus,/,1)\) where * \((R,\wedge,\vee)\) is a lattice, * \((R,\cdot,1)\) is a monoid, and * \(xy\leq z\) iff \(y\leq x\backslash z\) iff \(x\leq z/y\) for all \(x,y,z\in R\). The last condition above is called _residuation_. Given posets \(\mathbf{P}\) and \(\mathbf{Q}\), a map \(f:\mathbf{P}\to\mathbf{Q}\) is said to be _residuated_ if there exists a map \(f^{*}:\mathbf{Q}\to\mathbf{P}\) such that \[f(x)\leq y\text{ iff }x\leq f^{*}(y)\] for all \(x\in P\), \(y\in Q\). The following result is folklore in the theory of residuated maps. **Lemma 1.1**.: _A function \(g\) from a poset \(\mathbf{P}\) to a poset \(\mathbf{Q}\) is residuated if and only if the set \(\{x\in P:g(x)\leq y\}\) has a maximum for all \(y\in Q\) and \(g\) is order-preserving._ Proof.: Let \(S_{y}=\{x\in P:g(x)\leq y\}\) and we assume that \(g\) is residuated with residual \(g^{*}\). Note that \(g^{*}(y)\leq g^{*}(y)\) yields \(g(g^{*}(y))\leq y\) so \(g^{*}(y)\in S_{y}\). Also, for all \(x\in S_{y}\), \(g(x)\leq y\) hence \(x\leq g^{*}(y)\). Therefore, \(g^{*}(y)=\max S_{y}\). If \(x_{1}\leq x_{2}\), then since \(g(x_{2})\leq g(x_{2})\) yields \(x_{2}\leq g^{*}(g(x_{2}))\), we get \(x_{1}\leq g^{*}(g(x_{2}))\); hence \(g(x_{1})\leq g(x_{2})\). Therefore, \(g\) is order-preserving. Now suppose \(S_{y}\) has a maximum for all \(y\in Q\) and \(g\) preserves the order. We define \(g^{*}:Q\to P\) by \(g^{*}(y)=\max S_{y}\); clearly \(g^{*}\) is order-preserving. If \(g(x)\leq y\) for some \(x\in P\), \(y\in Q\), then \(x\in S_{y}\) and \(x\leq g^{*}(y)\) by definition. Conversely, if \(x\leq g^{*}(y)\), then \(g(x)\leq g(g^{*}(y))\) since \(g\) is order-preserving. Moreover, \(g^{*}(y)\in S_{y}\) so \(g(g^{*}(y))\leq y\); thus \(g(x)\leq y\). We mention that if the assumption that \(\{x\in P:g(x)\leq y\}\) has a maximum is replaced by the demand that it has a join, then the order-preservation of \(g\) is not enough to give residuation. 
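As a toy illustration of Lemma 1.1, take the four-element chain and an order-preserving map \(g\) fixing the bottom; every set \(\{x\in P:g(x)\leq y\}\) is then a nonempty subset of a finite chain, hence has a maximum, and the residual \(g^{*}\) is read off from these maxima. A minimal sketch (the chain and the map are our own choices, made only for illustration):

```python
# A toy instance of Lemma 1.1 on the chain P = Q = {0, 1, 2, 3} with the usual order.
P = Q = range(4)

def g(x):
    # an order-preserving map with g(0) = 0, so every set {x : g(x) <= y} is nonempty
    return max(x - 1, 0)

def g_star(y):
    # the residual supplied by Lemma 1.1: the maximum of {x in P : g(x) <= y}
    return max(x for x in P if g(x) <= y)

# the defining equivalence of residuated maps: g(x) <= y  iff  x <= g_star(y)
assert all((g(x) <= y) == (x <= g_star(y)) for x in P for y in Q)
```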
Note that a lattice-ordered monoid supports a residuated lattice iff left and right multiplication are residuated. So Lemma 1.1 yields the following fact. **Corollary 1.2**.: _A lattice-ordered monoid \(\mathbf{R}\) is a reduct of a residuated lattice iff multiplication is order-preserving and for all \(x,z\in R\), the sets \(\{y\in R:\,xy\leq z\}\) and \(\{y\in R:\,yx\leq z\}\) have maximum elements. In such a case the expansion to a residuated lattice is unique by \(x\backslash z=\max\{y\in R:xy\leq z\}\) and \(z/x=\max\{y\in R:yx\leq z\}\)._ **Corollary 1.3** (Cor 3.12 of [10]).: _A complete lattice-ordered monoid \(\mathbf{R}\) is a reduct of a residuated lattice iff multiplication distributes over arbitrary joins._ In particular, multiplication distributes over the empty join, if it exists; so if there is a bottom element \(\bot\), then \(x\cdot\bot=\bot=\bot\cdot x\), for all \(x\). For convenience, we set \(x\backslash\!\backslash z:=\{y\in R:xy\leq z\}\) and \(z/\!\!/x:=\{y\in R:yx\leq z\}\) for \(x,z\in R\). **Remark 1.4**.: Let \(\mathbf{P}=(P,\wedge,\vee,\cdot,\bot,\top)\) be a bounded lattice-ordered semigroup. Since \(\bot x=\bot\) for all \(x\in P\), we have \(\bot\backslash\!\backslash x=P\), so \(\bot\backslash x=\max\bot\backslash x=\top\) for all \(x\in P\). Also, since \(x\backslash\!\backslash\top=P\), \(x\backslash\top=\top\) for all \(x\in P\). Similarly, \(x/\bot=\top\) and \(\top/x=\top\) for all \(x\in P\). A residuated lattice with bounds \(\bot\) and \(\top\) is called _rigorously compact_ if \(\top x=x\top=\top\) for all \(x\neq\bot\). In this case we also have that \(xy=\bot\Rightarrow x=\bot\) or \(y=\bot\), since otherwise we get \(x\neq\bot\neq y\), so \(\bot=\bot\top=xy\top=x\top=\top\), a conradiction. Note that in rigorously compact residuated lattices we have \(\bot\backslash x=x/\bot=\top=x\backslash\top=\top/x\), \(\top\backslash y=y/\top=\bot\), \(z\backslash\bot=\bot=\bot/z\) for all \(x\in R\), \(y\neq\top\), \(z\neq\bot\). ## 2. Residuated Lattices on \(\mathbf{M}_{X}\) Residuated lattices based on chains have been studied extensively. We start by looking into residuated lattices based on an antichain, with extra top and bottom elements. ### Properties Given a set \(X\), we denote by \(\mathbf{M}_{X}\) the lattice over the set \(X\cup\{\bot,\top\}\), where \(\top\) is the top element, \(\bot\) is the bottom element, and \(x\lor y=\top\) and \(x\wedge y=\bot\), for distinct \(x,y\in X\). Figure 1. A residuated lattice over \(M_{X}\) The characterization of all residuated lattices based on \(\mathbf{M}_{X}\) where \(X\) is non-empty and closed under multiplication is known ([10] p. 205): \(X\) is a cancellative monoid, \(\bot\) is absorbing in \(M_{X}\) and \(\top\) is absorbing in \(X\cup\{\top\}\). We will characterize all residuated lattices based on \(\mathbf{M}_{X}\), even when \(X\) is not closed under multiplication. Recall that in every bounded residuated lattice the bottom element is absorbing. Also, in a residuated lattice based on \(\mathbf{M}_{X}\) we have \(\top x,x\top\in\{x,\top\}\) for all \(x\), since \(1\leq\top\) implies \(x\leq\top x\) and \(x\leq x\top\). 
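To make Corollary 1.2 concrete in the present setting, the sketch below encodes the known example recalled above in which \(X\) is the two-element group (\(\bot\) absorbing, \(\top\) absorbing in \(X\cup\{\top\}\)), computes \(x\backslash z\) as the maximum from Corollary 1.2, and brute-forces the residuation condition; the right division is symmetric here since the example is commutative. The encoding and all helper names are ours.

```python
from itertools import product

# Elements of M_X for X = {1, a}, with a*a = 1 (the two-element group); bot and top are the bounds.
BOT, ONE, A, TOP = "bot", "1", "a", "top"
R = [BOT, ONE, A, TOP]

def leq(x, y):
    # the order of M_X: bot below everything, top above everything, 1 and a incomparable
    return x == BOT or y == TOP or x == y

def mult(x, y):
    # bot is absorbing, top absorbs X with top, and X carries the two-element group
    if BOT in (x, y):
        return BOT
    if TOP in (x, y):
        return TOP
    return ONE if x == y else A

def maximum(S):
    # the maximum of S, if it exists
    for m in S:
        if all(leq(s, m) for s in S):
            return m
    return None

def ldiv(x, z):
    # x \ z = max{y in R : x*y <= z}, as in Corollary 1.2
    return maximum([y for y in R if leq(mult(x, y), z)])

for x, y, z in product(R, repeat=3):
    assert ldiv(x, z) is not None                      # every such set has a maximum
    assert leq(mult(x, y), z) == leq(y, ldiv(x, z))    # residuation
print("residuation holds on this M_X")
```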
In a residuated lattice \(\mathbf{R}\) on \(\mathbf{M}_{X}\), we define \[U_{R}=\{x\in R\setminus\{\bot,\top\}:x\top=\top\}\,\,\,\text{and}\,\,\,Z_{R}=\{x\in R\setminus\{\bot,\top\}:x\top=x\},\] the set of elements that behave as units for \(\top\) and the set of elements that behave as zeros for \(\top\); when the residuated lattice is clear from the context we drop the subscript in \(U_{R}\) and \(Z_{R}\). Note that \(1\in U\) and \(U\cap Z=\emptyset\). A monoid \(\mathbf{S}\) with a zero (absorbing element) \(0\) is called _\(0\)-cancellative_ if for all \(x,y,z\in S\), \[xy=xz\neq 0 \Rightarrow\ y=z\] \[yx=zx\neq 0 \Rightarrow\ y=z.\] An element \(c\) in a residuated lattice \(\mathbf{R}\) is called _central_ if \(xc=cx\), for all \(x\in R\). Also, we denote by \(\sqcup\) the disjoint union operation. **Theorem 2.1**.: _If \(\mathbf{R}\) is a residuated lattice based on \(\mathbf{M}_{X}\), then_ 1. \(\top\) _is central in_ \(\mathbf{R}\) _and_ \(R=U\sqcup Z\sqcup\{\bot,\top\}\)_._ 2. \(U_{\top}=U\cup\{\top\}\) _is a_ \(\top\)_-cancellative submonoid of_ \(\mathbf{R}\)_._ 3. \(Z_{\bot}=Z\cup\{\bot\}\) _is a subsemigroup of_ \(\mathbf{R}\) _with zero_ \(\bot\)_,_ \(|Z_{\bot}|\leq 3\) _and_ \(xy=\bot\) _for all distinct_ \(x,y\in Z_{\bot}\)_._ _Also, either_ \(Z_{\bot}\) _is idempotent, or_ \(Z_{\bot}=\{b,\bot\}\) _with_ \(b^{2}=\bot\)_._ 4. \(ab=ba=b\) _for all_ \(a\in U\) _and_ \(b\in Z\)_._ Proof.: (1) We will show that \(\top x=x\top\), for all \(x\in R\). If \(x\) is \(\top\), \(\bot\) or \(1\), then \(\top x\) and \(x\top\) are both equal to \(\top\), \(\bot\), \(\top\), respectively. Also, if \(x\) is incomparable to \(1\), then \(x\lor 1=\top\), so \(\top x=(1\lor x)x=x\lor x^{2}=x(1\lor x)=x\top\). Since \(\top\) is central and \(x\top\in\{x,\top\}\) for all \(x\), we have that for every \(x\in R\setminus\{\bot,\top\}\) either \(x\in U\) or \(x\in Z\), but not both. (2) If \(a,b\in U_{\top}\), then \(ab\cdot\top=a\cdot b\top=a\top=\top\), so \(ab\in U_{\top}\). Similarly, \(ba\in U_{\top}\) and \(\top\) is a zero for \(U_{\top}\). If \(x,y,z\in U_{\top}\) and \(xy=xz\neq\top\), then \(x(y\lor z)=xy\lor xz=xy\neq\top\). So \(y\lor z\neq\top\), because \(x\top=\top\); in particular, \(y\neq\top\neq z\). Also, since \(y,z\in U_{\top}\) and \(\bot\not\in U_{\top}\), we get \(y\neq\bot\neq z\); hence \(y,z\in X\) and \(y\lor z\neq\top\). Since \(\mathbf{R}\) is based on \(\mathbf{M}_{X}\), we get that \(y=z\). Similarly, we obtain the other implication of \(\top\)-cancellativity. (3) If \(c,d\in Z_{\bot}\), then \(cd\cdot\top=c\cdot d\top=cd\). Also, \(cd\leq c\top=c<\top\); hence \(cd\in Z_{\bot}\). Clearly, \(\bot\) is a zero for \(Z_{\bot}\). Since \(Z_{\bot}\subseteq X\cup\{\bot\}\), for distinct \(x,y\in Z_{\bot}\), we have \(xy=xy\wedge xy\leq x\top\wedge\top y=x\wedge y=\bot\). So, if there were distinct \(x,y,z\in Z\), then \(y\lor z=\top\) and \(x=x\top=x(y\lor z)=xy\lor xz=\bot\vee\bot=\bot\), a contradiction. Therefore \(|Z_{\bot}|\leq 3\). If \(b\) is a non-idempotent element of \(Z_{\bot}\subseteq X\cup\{\bot\}\), then \(b\neq\bot\) and \(b^{2}\leq b\top=b\), so \(b^{2}=\bot\). If \(c\) is an element of \(Z_{\bot}\) distinct from \(b\) and \(\bot\), then \(b^{2}=b^{2}\lor\bot=b^{2}\lor bc=b(b\lor c)=b\top=b\), a contradiction. So, if \(Z_{\bot}\) is not idempotent, then \(Z_{\bot}=\{b,\bot\}\) and \(b^{2}=\bot\). 
(4) For \(a\in U\) and \(b\in Z\), using the centrality of \(\top\), we get \[b=\top b=\top a\cdot b=\top\cdot ab=ab\cdot\top=a\cdot b\top=ab.\] Similarly, we get \(ba=b\). It is straight-forward to see that that the possible options for the subsemigroup \(Z_{\bot}\), mentioned in Theorem 2.1(3) are precisely the ones in Figure 2. Note that if a residuated lattices based on \(\mathbf{M}_{X}\) is integral (i.e., it satisfies \(x\leq 1\)), then \(U=\emptyset\). By taking into account all of the possibilities for \(Z_{\bot}\), it follows that the only integral residuated lattices based on \(\mathbf{M}_{X}\) are the 2-element and 4-element Boolean algebras, the 3-element Heyting algebra and the 3-element MV-algebra. The latter two, together with the 3-element Sugihara monoid, are the only 3-element residuated chains. ### Construction and characterization We now prove the converse of Theorem 2.1. Let \(\mathbf{A}\) be a \(\top\)-cancellative monoid with zero \(\top\) and \(\mathbf{B}\) a semigroup with zero \(\bot\), whose multiplication table is one of those in Figure 2. We define the lattice structure \(\mathbf{M}_{X}\) on the set \(R=A\cup B\), where \(X=R\setminus\{\bot,\top\}\), \(\bot\) is the bottom and \(\top\) is the top. Also, we define a multiplication on \(R\) that extends the multiplications on \(\mathbf{A}\) and \(\mathbf{B}\) by: \(xy=yx=y\), for all \(x\in A\) and \(y\in B\). We denote by \(\mathbf{R_{A,B}}\) the resulting algebra. **Theorem 2.2**.: _If \(\mathbf{A}\) is a \(\top\)-cancellative monoid with zero \(\top\) and \(\mathbf{B}\) is a semigroup with zero \(\bot\), whose multiplication table is one of those in Figure 2, then \(\mathbf{R_{A,B}}\) is the reduct of a residuated lattice based on \(\mathbf{M}_{X}\), where \(X=(A\cup B)\setminus\{\bot,\top\}\)._ Proof.: Since associativity holds in \(\mathbf{A}\) and \(\mathbf{B}\) and every element of \(B\) is an absorbing element for \(A\), we get that multiplication on \(\mathbf{R}\) is associative. Corollary 1.3 ensures that an expansion of \(\mathbf{M}_{X}\) by a monoid structure is a residuated lattice iff multiplication distributes over arbitrary joins. Since \(\bot x=x\bot=\bot\) for all \(x\in R\), multiplication distributes over the empty join. Also, we observe every infinite join is equivalent to a finite join, so it suffices to show \(x(y\lor z)=xy\lor xz\) and \((y\lor z)x=yx\lor zx\) for all \(x,y,z\in R\) and \(y\neq z\). Here we prove \(x(y\lor z)=xy\lor xz\). If \(\bot\in\{x,y,z\}\), then it is easy to check that this equation always holds, so we will assume that \(\bot\notin\{x,y,z\}\). Since \(y\neq z\), we get \(y\lor z=\top\). Now we will verify that \(x\top=xy\lor xz\). If \(x\in B\), then the left-hand side is \(x\). If, further, \(y\in A\) or \(z\in A\), then the right-hand side is \(x\lor xz=x\) or \(xy\lor x=x\), since \(xu\leq x\) for all \(u\in R\). If \(y,z\in B\), then since \(|B|\leq 3\) and \(y,z,\bot\) are distinct, we get \(B=\{y,z,\bot\}\) and \(x=y\) or \(x=z\). In this case, \(xy\lor xz=x\vee\bot=x\), so the equation holds. If \(x\in A\), then the left-hand side is equal to \(\top\). If \(y\in B\) and \(z\in B\), then the right-hand side is \(y\lor z=\top\), since \(y\neq z\). If \(y\in B\) and \(z\in A\), then the right-hand Figure 2. Four multiplication tables side is \(y\lor xz=\top\), since \(y\in B\), \(xz\in A\) and \(\bot\notin\{x,y,z\}\). Likewise, if \(y\in A\) and \(z\in B\), then the right-hand side is \(\top\). 
If \(y\in A\) and \(z\in A\), then the right-hand side is \(xy\lor xz=\top\) since \(\mathbf{A}\) is \(\top\)-cancellative. Similarly, we can show \((y\lor z)x=yx\lor xz\) for all \(x,y,z\in R\). By Corollary 1.2 the divisions are uniquely determined by the equations \(x\backslash z=\max\{y\in R:xy\leq z\}\) and \(z/x=\max\{y\in R:yx\leq z\}\), and we give the precise values below. It turns out that \(A\cup\{\bot\}\) and \(B\cup\{\top\}\) are subalgebras of \(\mathbf{R_{A,B}}\). In particular, \(B\cup\{\top\}\) is the 2-element Boolean algebra, the 3-element Heyting algebra, 3-element MV-algebra, or the 4-element Boolean algebra, corresponding to the tables in Figure 2. The divisions are given by Remark 1.4 and \[a_{1}\backslash a_{2}=\begin{cases}a_{3}&\text{ if }a_{1}a_{3}=a_{2}\\ \bot&\text{ otherwise}\end{cases}\quad a_{2}/a_{1}=\begin{cases}a_{3}&\text{ if }a_{3}a_{1}=a_{2}\\ \bot&\text{ otherwise}\end{cases}\] for \(a_{1},a_{2},a_{3}\in A\), where the \(a_{3}\) is guaranteed to be unique, when it exists. Finally, for \(a\in A\setminus\{\top\}\) and \(b\in B\), any operation between \(a\) and \(b\) works the same as the operation between \(1\) and \(b\). For example, \(b\backslash a=b\backslash 1\), \(a\wedge b=1\wedge b\), \(ab=1b\), etc. By combining Theorem 2.1 and Theorem 2.2, we obtain the following characterization. **Corollary 2.3**.: _The residuated lattices based on \(\mathbf{M}_{X}\) are precisely the ones of the form \(\mathbf{R_{A,B}}\), where \(\mathbf{A}\) is a \(\top\)-cancellative monoid with zero \(\top\) and \(\mathbf{B}\) is a semigroup with zero \(\bot\), whose multiplication table is one of those in Figure 2._ ## 3. Axiomatizations In this section we will provide axiomatizations for the various classes we will be considering and also discuss their proof theory. ### Axiomatization of residuated lattices based on \(\mathbf{M}_{X}\)'s We start by giving an axiomatization for the variety \(\mathsf{M}\) generated by all residuated lattices based on \(\mathbf{M}_{X}\), where \(X\) is a set; see Corollary 3.4. Since the lattice \(\mathbf{M}_{X}\) is simple, when \(|X|\geq 3\), residuated lattices based on \(\mathbf{M}_{X}\) are also simple; if \(|X|\leq 3\) the residuated lattice is simple, as well. It turns out (Corollary 3.7) that these are precisely the subdirectly irreducible algebras in \(\mathsf{M}\) and we will provide an axiomatization for them. Actually, we can also expand the language of residuated lattices to include constants which then evaluate as bounds. A _bounded residuated lattice_ is an expansion of a residuated lattice that happens to be based on a bounded lattice, by the addition of constants \(\bot\) and \(\top\), evaluating at these bounds (so \(\bot\leq x\leq\top\), for all \(x\)). We will consider both cases where the language includes the bounds or not, but opt for the axioms to be expressible without the need for bounds. We can arrange for the axioms we will be considering to be positive universal sentences, which is convenient for applying the correspondence provided in [8]. A (bounded) residuated lattice is called _unilinear_ if it satisfies: (URL) \[\forall u_{1},u_{2},z,w\;(u_{1}\leq u_{2}\text{ or }u_{2}\leq u_{1}\text{ or }(u_{1}\wedge u_{2}\leq w\text{ and }z\leq u_{1}\lor u_{2}))\] Note that a residuated lattice is unilinear iff it is linear or else the lattice is actually bounded and every pair of incomparable elements join to the top of the lattice and meet to the bottom of the lattice. 
In other words the non-linear residuated lattices consist of two bounds and the rest of the lattice is a disjoint union of totally incomparable chains; see Figure 3. For these non-linear unilinear residuated lattices, we will be denoting these bounds by \(\bot\) and \(\top\), even when the language does not include constants for the bounds. We denote by \(\mathsf{URL}\) and \(\mathsf{bURL}\) the (positive universal) classes of unilinear and bounded unilinear residuated lattices, respectively. Clearly, (bounded) residuated lattices on an \(\mathbf{M}_{X}\) are unilinear. What distinguishes \(\mathbf{M}_{X}\) from other lattices is its height, so we axiomatize unilinear residuated lattices whose height is no greater than a given number. We are careful to formulate the first-order sentence so it has no implication in it and it remains a positive sentence. **Proposition 3.1**.: _Given a natural number \(n\), a (bounded) unilinear residuated lattice has height at most \(n\) if and only if it satisfies_ \[(h_{n})\qquad\qquad\forall x_{1},\dots,x_{n+1}\,(\underset{1\leq m\leq n}{ \operatorname{OR}}x_{1}\vee\dots\lor x_{m}=x_{1}\vee\dots\lor x_{m+1}).\] _Also, it has width at most \(n\) if and only if it satisfies_ \[(w_{n})\qquad\qquad\qquad\forall x_{1},\dots,x_{n+1}\,(\underset{1\leq i\neq j \leq n+1}{\operatorname{OR}}x_{i}\leq x_{j}).\] Proof.: Having height at most \(n\) is equivalent to saying that every subchain has at most \(n\) elements. Now, every subchain always has the form \(a_{1}\leq a_{1}\lor a_{2}\leq a_{1}\lor a_{2}\lor a_{3}\leq\dots\leq a_{1}\lor \dots\lor a_{k}\), where \(a_{1},\dots,a_{k}\) are elements of the lattice and where the number of the inequalities that are equalities determines the number of elements in the chain. So, having height at most \(n\) is equivalent to stipulating that in every chain \(a_{1},a_{1}\lor a_{2},\dots,a_{1}\vee\dots\lor a_{n+1}\), at least two adjacent elements are equal. Having width at most \(n\) is equivalent to having at most \(n\) pairwise incomparable elements. We denote by \(\mathsf{URL}_{n}\) the subclass of \(\mathsf{URL}\) axiomatized by \((h_{n})\). In particular, \((h_{3})\) is the universal closure (which we often suppress) of \[x_{1}=x_{1}\lor x_{2}\text{ or }x_{1}\lor x_{2}=x_{1}\lor x_{2}\lor x_{3} \text{ or }x_{1}\lor x_{2}\lor x_{3}=x_{1}\lor x_{2}\lor x_{3}\lor x_{4}.\] **Corollary 3.2**.: _The (bounded) residuated lattices that are based on \(\mathbf{M}_{X}\), for some \(X\), together with the trivial algebra, are precisely the ones in the class \(\mathsf{URL}_{3}\) (\(\mathsf{bURL}_{3}\))._ Figure 3. A non-linear unilinear residuated lattice ### Equational basis for \(\mathsf{M}\) The class \(\mathsf{URL}_{3}\) is axiomatized by positive universal sentences. We note that [8] provides a general method for axiomatizing the variety of residuated lattices generated by a positive universal class. In detail, if \[1\leq p_{1}\text{ or }\cdots\text{ or }1\leq p_{n}\] is a positive universal formula, then the variety generated by the residuated lattices satisfying the universal closure of the formula is axiomatized by the infinitely many equations \[1=\gamma_{1}(p_{1})\vee\cdots\vee\gamma_{n}(p_{n})\] where \(\gamma_{1},\ldots,\gamma_{n}\in\Gamma(Var)\), the set of all iterated conjugates. 
The _left conjugate_ of \(a\) by \(x\) is the term \(x\backslash ax\wedge 1\) and the _right conjugate_ is \(xa/x\wedge 1\); _iterated conjugates_ are obtained by repeated applications of left and right conjugates by various conjugating elements from the set \(Var\) of variables. If \(\phi\) is a set of positive universal formulas, we denote by \(\mathsf{V}_{\phi}\) the variety axiomatized by the set \(\Gamma_{\phi}\) of all the equations corresponding to the positive universal formulas in \(\phi\). We consider the variety \(\mathsf{SRL}\) generated by the class \(\mathsf{URL}\) and we call its elements _semiunilinear_. Since \(\mathsf{URL}\) is axiomatized by \[x\leq y\text{ or }y\leq x\text{ or }(x\wedge y\leq z\text{ and }w\leq x\lor y),\] which can be written as the conjunction of the two sentences \[x\leq y\text{ or }y\leq x\text{ or }x\wedge y\leq z,\qquad x\leq y\text{ or }y\leq x\text{ or }w\leq x\lor y,\] and, in turn, as \[1\leq x\backslash y\text{ or }1\leq y\backslash x\text{ or }1\leq(x\wedge y) \backslash z,\ \ 1\leq x\backslash y\text{ or }1\leq y\backslash x\text{ or }1\leq w \backslash(x\lor y),\] we get the following result. **Corollary 3.3**.: _The variety \(\mathsf{SRL}\) of semiunilinear residuated lattices is axiomatized by the infinitely many equations_ \[1=\gamma_{1}(x\backslash y)\vee\gamma_{2}(y\backslash x)\vee\gamma_{3}((x \wedge y)\backslash z)\qquad 1=\gamma_{4}(x\backslash y)\vee\gamma_{5}(y \backslash x)\vee\gamma_{6}(w\backslash(x\lor y)),\] _where \(\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4},\gamma_{5},\gamma_{6}\in\Gamma( Var)\)._ **Corollary 3.4**.: _The variety \(\mathsf{M}\) generated by the class \(\mathsf{URL}_{3}\), of residuated lattices on an \(\mathbf{M}_{X}\), is axiomatized relative to \(\mathsf{SRL}\) by : \(1=\)_ \[\gamma_{1}((x_{1}\lor x_{2})\backslash x_{1})\vee\gamma_{2}((x_{1}\lor x_{2} \lor x_{3})\backslash(x_{1}\lor x_{2}))\vee\gamma_{3}((x_{1}\lor x_{2}\lor x_{ 3}\lor x_{4})\backslash(x_{1}\lor x_{2}\lor x_{3}))\] _where \(\gamma_{1},\gamma_{2},\gamma_{3}\in\Gamma(Var)\)._ We denote by \(\mathsf{bM}\) the corresponding variety of bounded residuated lattices. Also, we can characterize the finitely subdirectly irreducible algebras in these varieties. **Theorem 3.5**.: _The finitely subdirectly irreducible (FSI) semiunilinear residuated lattices are precisely the unilinear residuated lattices: \(\mathsf{SRL}_{FSI}=\mathsf{URL}\). More generally, if \(\phi\) is a set of positive universal sentences, then the FSIs in \(\mathsf{SRL}\cap\mathsf{V}_{\phi}\) are precisely the unilinear residuated lattices that satisfy \(\phi\)._ Proof.: It follows from the proof of Theorem 9.73(2) of [8] that an FSI algebra satisfies the unilinearity condition iff it satiefies the equations of Corollary 3.3, i.e., iff it is semiunilinear. So, the semiunilinear FSIs are actually unilinear. Conversely, if an algebra is unilinear, then its negative cone \(\downarrow 1\) is a chain. Therefore, the convex normal submonoids of the negative cone are nested and \(\{1\}\) cannot be the intersection of two non-trivial convex normal submonoids; see [10] for the correspondence between congruences and convex normal submonoids of the negative cone of residuated lattices. Therefore, the trivial congruence is meet-irreducible and the algebra is FSI (and semiunilinear, as it is unilinear). 
**Corollary 3.6**.: _Every semiunilinear residuated lattice is a subdirect product of unilinear ones._ **Corollary 3.7**.: _The subdirectly irreducibles in \(\mathsf{M}\) are the same as the finitely subdirectly irreducible in \(\mathsf{M}\) and as the simple ones in \(\mathsf{M}\) and they are precisely the non-trivial residuated lattices based on \(\mathbf{M}_{X}\), for some \(X\). The same holds for \(\mathsf{b}\mathsf{M}\)._ That every subdirectly irreducible in each of the varieties \(\mathsf{M}\) and \(\mathsf{b}\mathsf{M}\) is actually simple follows from the fact that its negative cone has two elements. Consequently, these varieties are semisimple. For \(\mathsf{b}\mathsf{M}\) we can say a bit more. We define the following terms \[r(x)=(1\lor x)(1\wedge x)\wedge(1\lor 1/x)(1\wedge 1/x)\qquad x\leftrightarrow y =x\backslash y\wedge y\backslash x\wedge 1\] \[t(x,y,z)=r(x\leftrightarrow y)\cdot z\vee(r(x\leftrightarrow y)\backslash\bot \wedge 1)\cdot x\] **Lemma 3.8**.: \(\mathsf{b}\mathsf{M}\) _is a discriminator variety with discriminator term \(t\)._ Proof.: If \(\mathbf{R}\in\mathsf{b}\mathsf{M}_{SI}\) then, by Corollary 3.7, \(\mathbf{R}\) is a non-trivial bounded residuated lattice based on \(\mathbf{M}_{X}\) for some \(X\). Note that if \(x\) is incomparable to \(1\), then also \(1/x\) is incomparable to \(1\) or is equal to \(\bot\), so \(1\wedge x=1\wedge 1/x=\bot\), hence \(r(x)=\bot\). Also, if \(x\in\{\bot,\top\}\), then \(\{x,1/x\}=\{\bot,\top\}\), so \(1\wedge x=\bot\) or \(1\wedge 1/x=\bot\), hence \(r(x)=\bot\). Finally, since \(1/1=1\), we have \(r(x)=1\), if \(x=1\) and \(r(x)=\bot\) otherwise. Note that for all \(x,y\in R\), we have \(x\leftrightarrow y\leq 1\), i.e., \(x\leftrightarrow y\in\{\bot,1\}\). Moreover, \(x\leftrightarrow y=1\) iff \(1=x\backslash y\wedge y\backslash x\wedge 1\) iff \(1\leq x\backslash y\wedge y\backslash x\) iff \((1\leq x\backslash y\) and \(1\leq\wedge y\backslash x)\) iff \((x\leq y\) and \(y\leq x)\) iff \(x=y\). Thus we have \(x\leftrightarrow y=1\) if \(x=y\) and \(x\leftrightarrow y=\bot\) if \(x\neq y\). Therefore, \(t(x,y,z)=r(1)\cdot z\vee(r(1)\backslash\bot\wedge 1)\cdot x=1\cdot z\vee(1 \backslash\bot\wedge 1)\cdot x=z\vee\bot\cdot x=z\), if \(x=y\); and \(t(x,y,z)=r(\bot)\cdot z\vee(r(\bot)\backslash\bot\wedge 1)\cdot x=\bot\cdot z\vee( \bot\backslash\bot\wedge 1)\cdot x=(\top\wedge 1)\cdot x=1\cdot x\), if \(x\neq y\). ### Including (or not) the bounds in the signature Note that when axiomatizing classes of unilinear residuated lattices for which the non-linear members are asked to satisfy a certain positive universal sentence, oftentimes the axiomatization looks nicer in the case where the language includes constants for the bounds. For example, the class of URLs whose non-linear members satisfy \(\top x=x\top\) is axiomatized by the positive universal formula \[u\leq v\text{ or }v\leq v\text{ or }x(u\lor v)=(u\lor v)x.\] For non-linear bURL's this formula is equivalent to \[x\top=\top x.\] For the sake of readability, we will allow ourselves to denote the first of these sentences as the more pleasing to the eye: \[x\overline{\top}=\overline{\top}x.\] We call a (bounded) unilinear residuated lattice \(\top\)_-central_, if it satisfies this formula. 
More generally, if \(\Phi\) is the sentence \(\forall\vec{x}\,(\varphi(\vec{x},\top,\bot))\), where \(\varphi\) is in the language of URL's, we denote by \(\overline{\Phi}\) the sentence \[\forall\vec{x}\,(\varphi(\vec{x},\overline{\top},\bot)):=\forall u,\forall v,\forall\vec{x}\,(u\leq v\text{ or }v\leq u\text{ or }\varphi(\vec{x},u\lor v,u\wedge v))\] where \(u,v\) are fresh variables. Likewise, we call a (bounded) unilinear residuated lattice \(\top\)_-unital_, if it satisfies the formula \[x=\overline{\bot}\text{ or }x\overline{\top}=\overline{\top}=\overline{\top}x,\] since in the non-linear models every non-bottom element acts as a unit for the top. Note that for non-linear bURLs being \(\top\)-unital is the same as being rigorously compact. **Lemma 3.9**.: _Let \(\varphi\) be a positive universal formula in the language of URLs, let \(\Phi\) be \(\forall\vec{x}\left(\varphi(\vec{x},\top,\bot)\right)\) and let \(\overline{\Phi}\) be \(\forall\vec{x}\left(\varphi(\vec{x},\overline{\top},\overline{\bot})\right)\)._ 1. _The non-linear bURLs that satisfy_ \(\Phi\) _are precisely the non-linear bURLs that satisfy_ \(\overline{\Phi}\)_._ 2. _The non-linear URLs that satisfy_ \(\overline{\Phi}\) _are precisely the bound-free reducts of the non-linear bURLs that satisfy_ \(\overline{\Phi}\)_._ 3. _The linear (bounded) URLs that satisfy_ \(\overline{\Phi}\) _are precisely the (bounded) residuated chains._ Proof.: (1) If \(\mathbf{R}\) is a non-linear bURL, then it satisfies \(\overline{\Phi}\) iff it satisfies it for all incomparable elements \(u,v\) (as \(\overline{\Phi}\) automatically holds for comparable elements \(u,v\)) iff it satisfies \(\Phi\) (since when \(u,v\) are incomparable, we have \(u\lor v=\top\) and \(u\wedge v=\bot\)). (2) follows from the fact that all non-linear URLs are bounded, say \(b\) and \(t\) are the bounds, and that for bounded non-linear URL's \(\overline{\Phi}\) is equivalent to \(\forall\vec{x}\left(\varphi(\vec{x},t,b)\right)\). (3) follows from the fact that \(\overline{\Phi}\) holds in all totally ordered algebras. We note that there might be linear bURLs that satisfy \(\overline{\Phi}\), but fail to satisfy \(\Phi\). This happens for example when \(\Phi\) is \(\top x=x\top\). ### Proof theory for SRL Certain varieties of residuated lattices admit a proof-theoretic analysis, which is often complementary to their algebraic study and which often yields interesting results. Not all varieties of residuated lattices admit a proof-theoretic calculus, but we show that SRL does admit a hypersequent calculus. We present the hypersequent system, but we do not pursue any further applications in this paper. As a motivating example, we mention the equational theory of lattices, which is axiomatized by the standard basis of the semilattice and the absorption laws. New valid equations can be derived from these axioms using the derivational system of equational logic, which includes the rules of reflexivity, symmetry, transitivity, and replacement/congruence. This system is not amenable to an inverse proof search analysis as, given an equation \(s=t\), to determine if it is derivable in the system one cannot simply go through all applications of these derivational rules that could have the equation as a conclusion and proceed recursively: the transitivity rule \(\frac{s=t}{s=r}\) introduces (read upward) a new term that does not appear in the equation. 
Also, using inequational reasoning, where for example the transitivity rule \(\dfrac{s\leq r\qquad r\leq t}{s\leq t}\) is used instead and the axioms are replaced by inequational axioms such as \(s\leq s\lor t\), does not make the problem go away: simply omitting this transitivity rule from the system changes the set of derivable inequalities. However, a way to bypass this problem is to replace the lattice axioms by inference rules; for example we replace \(s\leq s\lor t\) by the inference rule \(\dfrac{r\leq s}{r\leq s\lor t}\). The axiom and the rule are equivalent in the presence of transitivity, but the rule has elements of transitivity _injected_ in it when compared to the axiom: the rule implies the axiom by instantiation, but the axiom implies the rule only with the help of transitivity. Moreover, the new rule does not suffer from the problem of transitivity as all terms in the numerator are already contained in the denominator; so it is safe to replace the axiom by the rule. There is a way to inject transitivity into all the axioms, converting them to innocent inference rules, such that in the new system the transitivity rule itself becomes completely redundant. The resulting system can be used to show the decidability of lattice equations. A similar approach works for certain subvarieties of residuated lattices; the axioms in the subvariety may or may not be amenable to injecting transitivity into them. Also, since there are more operations than in lattices, the above inequalities have to be replaced by _sequents_. These are expressions of the form \(s_{1},s_{2},\ldots,s_{n}\Rightarrow s_{0}\), where the \(s_{i}\)'s are residuated-lattice terms, and their interpretation is given by \(s_{1}\cdot s_{2}\cdots s_{n}\leq s_{0}\). The transitivity rule itself at the level of sequents takes the form of a rule called (cut), and the goal is cut elimination, in the same spirit as above for lattices; we often write \(\Gamma\Rightarrow\Pi\) for sequents, where \(\Gamma\) is a sequence of formulas and \(\Pi\) is a single formula. The corresponding derivational systems/calculi define different types of _substructural logics_ and varieties of residuated lattices serve as algebraic semantics for them; see [10]. The variety of all residuated lattices admits a sequent derivation system, which leads to the decidability of the equational theory of residuated lattices, among other things. The variety of semilinear residuated lattices (generated by residuated chains), however, provably does not admit a sequent calculus, due to the shape of its axioms. It does, however, admit a hypersequent calculus. Hypersequents are more complex syntactic objects of the form \(\Gamma_{1}\Rightarrow\Pi_{1}\mid\Gamma_{2}\Rightarrow\Pi_{2}\mid\cdots\mid\Gamma_{m}\Rightarrow\Pi_{m}\), i.e., they are multisets of sequents. We denote by **HFL** the base hypersequent system for the variety of residuated lattices; additional inference rules can be added in order to obtain systems for subvarieties. We follow [3], which describes the process of injecting transitivity into hypersequents, and we obtain a hypersequent system for the variety \(\mathsf{SRL}\) that admits cut elimination. We start with the axioms of \(\mathsf{URL}\), the positive universal class that generates \(\mathsf{SRL}\).
First we convert the first axiom \(\forall x,y,z(x\leq y\text{ or }y\leq x\text{ or }z\leq(x\lor y))\) to the equivalent form \(\forall x,y,z,t_{1},t_{2},t_{3},s_{1},s_{2},s_{3}\) \[t_{1}\leq x\text{ and }y\leq s_{1}\text{ and }t_{2}\leq y\text{ and }x\leq s_{2}\text{ and }t_{3}\leq z\text{ and }(x\lor y)\leq s_{3}\] \[\Rightarrow t_{1}\leq s_{1}\text{ or }t_{2}\leq s_{2}\text{ or }t_{3}\leq s_{3}\] by injecting some transitivity. This then allows us to remove the \(\vee\) from the axiom, by rewriting it as \(\forall x,y,z,t_{1},t_{2},t_{3},s_{1},s_{2},s_{3}\) \[t_{1}\leq x\text{ and }y\leq s_{1}\text{ and }t_{2}\leq y\text{ and }x\leq s_{2}\text{ and }t_{3}\leq z\text{ and }x\leq s_{3}\text{ and }y\leq s_{3}\] \[\Rightarrow t_{1}\leq s_{1}\text{ or }t_{2}\leq s_{2}\text{ or }t_{3}\leq s_{3}\] In the terminology of [3], the clause is _linear_ and _exclusive_, so we eliminate the redundant variables in the premise (noting that \(z\) appears only on the right side of inequations, while \(x\) and \(y\) appear on both sides): we apply transitivity closure and removal of variables in the premise of the clause. The procedure yields the equivalent clause \(\forall t_{1},t_{2},t_{3},s_{1},s_{2},s_{3}\) \[t_{1}\leq s_{2}\text{ and }t_{1}\leq s_{3}\text{ and }t_{2}\leq s_{1}\text{ and }t_{2}\leq s_{3}\] \[\Rightarrow t_{1}\leq s_{1}\text{ or }t_{2}\leq s_{2}\text{ or }t_{3}\leq s_{3}\] We now instantiate \(s_{j}\) by \(c\backslash p_{j}/d\) and use residuation to rewrite \(t_{i}\leq s_{j}\) as \(t_{i}\leq c\backslash p_{j}/d\) and as \(ct_{i}d\leq p_{j}\). This results in the equivalent clause \(\forall t_{1},t_{2},t_{3},c,p_{1},p_{2},p_{3},d\) \[ct_{1}d\leq p_{2}\text{ and }ct_{1}d\leq p_{3}\text{ and }ct_{2}d\leq p_{1}\text{ and }ct_{2}d\leq p_{3}\] \[\Rightarrow ct_{1}d\leq p_{1}\text{ or }ct_{2}d\leq p_{2}\text{ or }ct_{3}d\leq p_{3}\] Converting the clause to the corresponding hypersequent rule we get \[\frac{\Xi\mid\Gamma,\Sigma_{1},\Delta\Rightarrow\Pi_{2}\qquad\Xi\mid\Gamma,\Sigma_{1},\Delta\Rightarrow\Pi_{3}\qquad\Xi\mid\Gamma,\Sigma_{2},\Delta\Rightarrow\Pi_{1}\qquad\Xi\mid\Gamma,\Sigma_{2},\Delta\Rightarrow\Pi_{3}}{\Xi\mid\Gamma,\Sigma_{1},\Delta\Rightarrow\Pi_{1}\mid\Gamma,\Sigma_{2},\Delta\Rightarrow\Pi_{2}\mid\Gamma,\Sigma_{3},\Delta\Rightarrow\Pi_{3}}\] Likewise the second axiom of unilinearity gives the hypersequent rule \[\frac{\Xi\mid\Gamma,\Sigma_{2},\Delta\Rightarrow\Pi_{1}\qquad\Xi\mid\Gamma,\Sigma_{3},\Delta\Rightarrow\Pi_{1}\qquad\Xi\mid\Gamma,\Sigma_{1},\Delta\Rightarrow\Pi_{2}\qquad\Xi\mid\Gamma,\Sigma_{3},\Delta\Rightarrow\Pi_{2}}{\Xi\mid\Gamma,\Sigma_{1},\Delta\Rightarrow\Pi_{1}\mid\Gamma,\Sigma_{2},\Delta\Rightarrow\Pi_{2}\mid\Gamma,\Sigma_{3},\Delta\Rightarrow\Pi_{3}}\] We refer to these hypersequent rules as (URL1) and (URL2), respectively. **Corollary 3.10**.: _The extension of_ **HFL** _with the rules_ (URL1) _and_ (URL2) _provides a cut-free hypersequent calculus for the variety_ \(\mathsf{SRL}\) _by [3]._ It is notable that even though \(\mathsf{SRL}\) has an infinite equational axiomatization involving iterated conjugates, there are only two inference rules needed for the hypersequent calculus. This is because hypersequent calculi have the ability to go directly to the level of (finitely) subdirectly irreducibles (\(\mathsf{SRL}_{FSI}=\mathsf{URL}\) in this case) and read off the axiomatization from there.

## 4. Continuum-many subvarieties of \(\mathsf{M}\)
Even though we have a fairly good understanding of the residuated lattices based on \(\mathbf{M}_{X}\), where \(X\) is a set, we now show that there are continuum-many subvarieties of \(\mathsf{M}\). More precisely, we will prove that the variety \(\mathsf{M}_{\mathsf{G}}\) generated by all the residuated lattices of the form \(\mathbf{M}_{\mathbf{G}}\), where \(\mathbf{G}\) is an (abelian) group, has continuum-many subvarieties. We start with an equational basis for \(\mathsf{M}_{\mathsf{G}}\). **Proposition 4.1**.: _The variety \(\mathsf{M}_{\mathsf{G}}\) is axiomatized by the equations \(1=\gamma_{1}(u\backslash v)\vee\gamma_{2}(v\backslash u)\vee\gamma_{3}(x\backslash(u\wedge v))\vee\gamma_{4}((u\lor v)\backslash x)\vee\gamma_{5}(x(x\backslash 1))\), where \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\), \(\gamma_{4}\), \(\gamma_{5}\in\Gamma(Var)\)._ Proof.: The formula \(x=\overline{\bot}\) or \(x=\overline{\top}\) or \(x(x\backslash 1)=1\) axiomatizes the FSIs in the variety, so the result follows by Theorem 3.5. It is known that there are continuum-many varieties of groups (for example, see [13]) and we can use this fact to show that there is a continuum of subvarieties of \(\mathsf{M}_{\mathsf{G}}\), as follows. Starting with two varieties \(\mathcal{V}_{1}\neq\mathcal{V}_{2}\) of groups, we can consider the free groups \(\mathbf{F}_{1}\) and \(\mathbf{F}_{2}\) on countably many generators in these varieties; hence we have \(\mathsf{V}(\mathbf{F}_{1})=\mathcal{V}_{1}\neq\mathcal{V}_{2}=\mathsf{V}(\mathbf{F}_{2})\). Then, it is possible to show that \(\mathsf{V}(\mathbf{M}_{\mathbf{F}_{1}})\neq\mathsf{V}(\mathbf{M}_{\mathbf{F}_{2}})\). It is also well known that there are only countably many varieties of abelian groups. However, we are still able to show that the variety \(\mathsf{CM}_{\mathsf{G}}\) of the commutative algebras in \(\mathsf{M}_{\mathsf{G}}\) also has continuum-many subvarieties. Actually, we give a full description of the subvariety lattice of \(\mathsf{CM}_{\mathsf{G}}\). We consider the direct power \(\mathbb{N}^{\omega}\) of countably many copies of the chain \((\mathbb{N},\leq)\) and its subset \(I\) of (not necessarily strictly) decreasing sequences that are eventually zero, such as \((4,2,1,1,0,0,\dots)\), \((3,2,1,1,1,0,0,\dots)\) etc. We will also denote these sequences by \((4,2,1,1)\) and \((3,2,1,1,1)\), respectively. It is easy to see that \(I\) defines a sublattice \(\mathbf{I}\) of the direct product. We also consider the subset \(I^{\oplus\omega}\) of the direct product \(\mathbf{I}^{\omega}\) of all sequences of elements of \(I\) that are eventually the zero sequence. It is easy to see that this defines a sublattice \(\mathbf{I}^{\oplus\omega}\) of the direct product \(\mathbf{I}^{\omega}\); it makes sense to call \(\mathbf{I}^{\oplus\omega}\) the _direct sum_ of \(\omega\) copies of \(\mathbf{I}\). We use commas to separate the numbers in each sequence in \(I\), but we use semicolons to separate the sequences in each element of \(I^{\oplus\omega}\); this allows for dropping parentheses, if desired. Therefore, \((2,1;3,1,1;0;2,1,1;0;\dots)\) is an example of an element of \(I^{\oplus\omega}\). Now let \(\mathbf{P}=\mathbf{2}\times\mathbf{I}^{\oplus\omega}\), where \(\mathbf{2}\) is the two-element lattice on \(\{0,1\}\).
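Since \(\mathbf{I}\) is a sublattice of \(\mathbb{N}^{\omega}\), joins and meets in \(\mathbf{I}\) (and hence in \(\mathbf{I}^{\oplus\omega}\) and in \(\mathbf{P}\)) are computed coordinatewise. As a small sample computation, included only to fix the notation, \[(4,2,1,1)\vee(3,2,2)=(4,2,2,1)\qquad\text{and}\qquad(4,2,1,1)\wedge(3,2,2)=(3,2,1),\] and both results are again decreasing and eventually zero, as required.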
For \(a\in P\), we define \(\exp(a)\) to be the maximum number appearing in \(a\); e.g., \[\exp(1;2,1;3,1,1;0;2,1,1;0;\dots)=3\text{ and }\exp(0;1,1,1;4,1;3,2;0;\dots)=4.\] Also, for \(a\in P\) we write \(a=(a_{0};a_{1};a_{2};\dots)\), where \(a_{0}\in\{0,1\}\) and \(a_{n}\in I\), for \(n>0\); we define \(\text{primes}(a)=\{n\in\mathbb{N}:a_{n}\neq 0\}\). For \(T\subseteq P\), we define \(\exp(T)=\{\exp(a):a\in T\}\) and \(\text{primes}(T)=\bigcup\{\text{primes}(a):a\in T\}\). A downset \(D\) of \(\mathbf{P}\) is said to be \(\mathbb{Z}\)_-closed_ if for all \(a\in P\), whenever \(\exp(D\cap\uparrow a)\) or \(\text{primes}(D\cap\uparrow a)\) is unbounded, we have \(a\vee(1;0;0;\dots)\in D\). For example, for \(a=(0;1;0;0;0;...)\), this condition has the following consequences: \[\begin{array}{c}(0;1;1;0;0;\dots),(0;1;2;0;0;\dots),(0;1;3;0;0;\dots),\dots\in D\\ \text{or}\\ (0;1,1;0;0;\dots),(0;2,1;0;0;\dots),(0;3,1;0;0;\dots),\dots\in D\end{array}\] implies \((1;1;0;0;0;\dots)\in D\), because \(\exp(D\cap\uparrow a)\) is unbounded. Also, \[(0;1;1;0;0;\dots),(0;1;0;1;0;\dots),(0;1;0;0;1;\dots),\dots\in D\] implies \((1;1;0;0;0;\dots)\in D\), because \(\text{primes}(D\cap\uparrow a)\) is unbounded. However, \[(0;1;1;0;0;\dots),(0;1;1,1;0;0;\dots),(0;1;1,1,1;0;0;\dots),\dots\in D\] does not imply \((1;1;0;0;0;\dots)\in D\). We denote the lattice of all \(\mathbb{Z}\)-closed downsets of \(\mathbf{P}\) by \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P})\). **Theorem 4.2**.: _The subvariety lattice of \(\mathsf{CM}_{\mathsf{G}}\) is isomorphic to \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P})\)._ Proof.: Recall that a class of algebras is closed under \(\mathsf{HSP}_{\mathsf{U}}\) iff it is axiomatizable by positive universal sentences. In other words, \(\mathsf{HSP}_{\mathsf{U}}\)-classes coincide with positive universal classes. Let \(\mathcal{F}\) be a congruence-distributive variety such that \(\mathcal{F}_{FSI}\) is a positive universal class. We claim that the subvarieties of \(\mathcal{F}\) are in bijective correspondence with \(\mathsf{HSP}_{\mathsf{U}}\)-subclasses of \(\mathcal{F}_{FSI}\), where the correspondence is given by \(\mathcal{V}\mapsto\mathcal{V}_{FSI}\) and \(\mathcal{K}\mapsto\mathsf{HSP}(\mathcal{K})\); furthermore, it is clear that this correspondence preserves and reflects the inclusion order. Indeed, \(\mathcal{V}_{FSI}=\mathcal{V}\cap\mathcal{F}_{FSI}\), so \(\mathcal{V}_{FSI}\) is axiomatized by positive universal sentences and the forward map of the correspondence is well defined. To show that the two maps are inverses of each other note that \(\mathsf{HSP}(\mathcal{V}_{FSI})\subseteq\mathcal{V}\subseteq\mathsf{SP}(\mathcal{V}_{SI})\subseteq\mathsf{HSP}(\mathcal{V}_{FSI})\) and by Jónsson's Lemma \(\mathcal{K}=\mathcal{K}_{FSI}\subseteq\mathsf{HSP}(\mathcal{K})_{FSI}\subseteq\mathsf{HSP}_{\mathsf{U}}(\mathcal{K})=\mathcal{K}\). Note that residuated lattices form a congruence distributive variety by [10] and, by Theorem 3.5 and Proposition 4.1, \((\mathsf{CM}_{\mathsf{G}})_{FSI}=\mathsf{CM}_{\mathsf{G}}\cap\mathsf{SRL}_{FSI}\) is axiomatized by positive universal sentences. So, by the preceding paragraph, the lattice of subvarieties of \(\mathsf{CM}_{\mathsf{G}}\) is isomorphic to the lattice of \(\mathsf{HSP}_{\mathsf{U}}\)-classes of FSIs in \(\mathsf{CM}_{\mathsf{G}}\), which by Theorem 3.5 and Proposition 4.1 are \(\mathsf{HSP}_{\mathsf{U}}\)-classes of algebras of the form \(\mathbf{M}_{\mathbf{G}}\), where \(\mathbf{G}\) is an abelian group.
Further note that \(\mathsf{H}\) can be replaced by \(\mathsf{I}\). Indeed, every ultraproduct of algebras of the form \(\mathbf{M}_{\mathbf{G}}\), where \(\mathbf{G}\) is an abelian group, is also an algebra of the same form (this class of algebras is first-order axiomatizable, so it is closed under ultraproducts). Also, subalgebras are also of the same form (where we also include the trivial algebra). Finally, since every algebra of this form is simple (since their lattice reduct is simple), \(\mathsf{H}\) does not contribute any new algebras. So we are interested in \(\mathsf{ISP}_{\mathsf{U}}\)-classes of algebras of the form \(\mathbf{M}_{\mathbf{G}}\), where \(\mathbf{G}\) is an abelian group. We now prove that such classes are in bijective correspondence with \(\mathsf{ISP}_{\mathsf{U}}\)-classes of abelian groups, by showing that for every class \(\mathcal{K}\) of abelian groups, we have \(\mathsf{ISP}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}:\mathbf{H}\in\mathcal{K}\})=\mathsf{I}\{\mathbf{M}_{\mathbf{G}}:\mathbf{G}\in\mathsf{SP}_{\mathsf{U}}(\mathcal{K})\}\) and thus this class can be associated with \(\mathsf{ISP}_{\mathsf{U}}(\mathcal{K})\); clearly this correspondence preserves and reflects the order. First we show \(\mathsf{IP}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}:\mathbf{H}\in\mathcal{K}\})=\mathsf{I}\{\mathbf{M}_{\mathbf{G}}:\mathbf{G}\in\mathsf{IP}_{\mathsf{U}}(\mathcal{K})\}\). For a residuated lattice \(\mathbf{R}\), if \(\mathbf{R}\in\mathsf{IP}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}:\mathbf{H}\in\mathcal{K}\})\), then \(\mathbf{R}\) satisfies all first-order sentences that hold in the \(\mathbf{M}_{\mathbf{H}}\)'s, where \(\mathbf{H}\in\mathcal{K}\). In particular, \(\mathbf{R}\) is commutative, unilinear, has height at most \(3\), and all of its non-bound elements are invertible, closed under multiplication and serve as units for the top. Therefore, \(\mathbf{R}\) is isomorphic to \(\mathbf{M}_{\mathbf{G}}\) for some abelian group \(\mathbf{G}\). Also, clearly, all algebras in \(\mathsf{P}_{\mathsf{U}}(\mathcal{K})\) are abelian groups. Therefore the classes on both sides of the equation contain only algebras isomorphic to \(\mathbf{M}_{\mathbf{G}}\) for some abelian group \(\mathbf{G}\), and it is enough to focus on such algebras: we show that for every abelian group \(\mathbf{G}\), \(\mathbf{M}_{\mathbf{G}}\in\mathsf{IP}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}:\mathbf{H}\in\mathcal{K}\})\) iff \(\mathbf{G}\in\mathsf{IP}_{\mathsf{U}}(\mathcal{K})\); we will identify the bounds in all algebras so as to omit \(\mathsf{I}\). If \(\mathbf{M}_{\mathbf{G}}\in\mathsf{P}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}:\mathbf{H}\in\mathcal{K}\})\), there exists an index set \(I\), an ultrafilter \(U\) on \(I\) and \(\mathbf{H}_{i}\in\mathcal{K}\), \(i\in I\), such that \(\mathbf{M}_{\mathbf{G}}=\prod\mathbf{M}_{\mathbf{H}_{i}}/U\). So, for every \(g\in G\) there exists \(x_{g}\in\prod\mathbf{M}_{\mathbf{H}_{i}}\) such that \(g=[x_{g}]\), the equivalence class of \(x_{g}\). We will use \(\overline{\top}\) and \(\overline{\bot}\) to denote the tuples \((\top)_{i\in I}\) and \((\bot)_{i\in I}\) in \(\prod\mathbf{M}_{\mathbf{H}_{i}}\) respectively. Then for all \(g\in G\), we have \(g\neq\overline{[\top]}\) and \(g\neq\overline{[\bot]}\), since \(g\) is invertible while \(\overline{[\top]}\) and \(\overline{[\bot]}\) are idempotents different from the identity.
So we know \(\{i\in I:x_{g}(i)\neq\top\}\in U\) and \(\{i\in I:x_{g}(i)\neq\bot\}\in U\), hence \(\{i\in I:x_{g}(i)\in H\}=\{i\in I:x_{g}(i)\neq\top\}\cap\{i\in I:x_{g}(i)\neq \bot\}\in U\). Now define a tuple \(x\) in \(\prod\mathbf{H}_{\mathbf{i}}\) by \(x(i)=x_{g}(i)\), if \(x_{g}(i)\in H_{i}\), and \(x(i)=1\) otherwise. Then we have \(g=[x_{g}]=[x]\in\prod H_{i}/U\), so \(\mathbf{G}\in\mathsf{P}_{\mathsf{U}}(\mathcal{K})\). If \(\mathbf{G}\in\mathsf{P}_{\mathsf{U}}(\mathcal{K})\), then there exists an index set \(I\), an ultrafilter \(U\) on \(I\) and \(\mathbf{H}_{i}\in\mathcal{K}\), \(i\in I\), such that \(\mathbf{G}=\prod\mathbf{H}_{\mathbf{i}}/U\). Using the same index set \(I\) and ultrafilter \(U\) on \(I\), we know \(\prod\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}/U\) is also of the form \(\mathbf{M}_{\mathbf{K}}\), where \(\mathbf{K}\) is an abelian group. Since \(\overline{[\top]}\vee[x]=\overline{[\top\lor x]}=\overline{[\top]}\) and \(\overline{[\bot]}\wedge[x]=\overline{[\bot\wedge x]}=\overline{[\bot]}\), we get \(\overline{[\top\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}]}=\overline{\top}_{\prod \mathbf{M}_{\mathbf{H}_{\mathbf{i}}}/U}\) and \([\overline{\bot_{\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}}}]=\overline{\bot}_{\prod \mathbf{M}_{\mathbf{H}_{\mathbf{i}}}/U}\). For \([x]\in K\), we have \([x]\neq\overline{[\top\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}]}\) and \([x]\neq\overline{[\bot_{\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}}]}\). So \(\{i\in I:x(i)\neq\top_{\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}}\}\in U\) and \(\{i\in I:x(i)\neq\bot_{\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}}\}\in U\), hence \(\{i\in I:x(i)\in H_{i}\}=\{i\in I:x(i)\neq\top_{\mathbf{M}_{\mathbf{H}_{\mathbf{i} }}}\}\cap\{i\in I:x(i)\neq\bot_{\mathbf{M}_{\mathbf{H}_{\mathbf{i}}}}\}\in U\); so \([x]\in\prod H_{i}/U=\mathbf{G}\) and \(K\subseteq G\). Conversely, if \([x]\in\prod H_{i}/U=\mathbf{G}\) then \([x]\in K\), so \(G\subseteq K\). Therefore \(\mathbf{M}_{\mathbf{G}}\in\mathsf{P}_{\mathsf{U}}(\{\mathbf{M}_{\mathbf{H}}: \mathbf{H}\in\mathcal{K}\})\). Again note that to show \(\mathsf{S}(\mathbf{M}_{\mathbf{H}})=\{\mathbf{M}_{\mathbf{G}}:\mathbf{G}\in \mathsf{S}(\mathbf{H})\}\) it is enough to focus on algebras of the form \(\mathbf{M}_{\mathbf{G}}\), where \(\mathbf{G}\) is an abelian group. If \(\mathbf{M}_{\mathbf{G}}\in\mathsf{S}(\mathbf{M}_{\mathbf{H}})\), then for all \(x,y\in G\), we have \(x\cdot_{\mathbf{G}}y=x\cdot_{\mathbf{M}_{\mathbf{G}}}y=x\cdot_{\mathbf{M}_{ \mathbf{H}}}y=x\cdot_{\mathbf{H}}y\) and \(x^{-1\mathbf{G}}=x\backslash_{\mathbf{M}_{\mathbf{G}}}1=x\backslash_{\mathbf{M}_ {\mathbf{H}}}1=x^{-1\mathbf{H}}\); so \(\mathbf{G}\in\mathsf{S}(\mathbf{H})\). Conversely, if \(\mathbf{G}\in\mathsf{S}(\mathbf{H})\), then for all \(x,y\in M_{G}\setminus\{\bot,\top\}\) we have \(x\cdot_{\mathbf{M}_{G}}y=x\cdot_{\mathbf{G}}y=x\cdot_{\mathbf{H}}y=x\cdot_{ \mathbf{M}_{\mathbf{H}}}y\), \(x\backslash_{\mathbf{M}_{G}}y=x^{-1\mathbf{G}}\cdot_{\mathbf{G}}y=x^{-1\mathbf{H }}\cdot_{\mathbf{H}}y=x\backslash_{\mathbf{M}_{H}}y\) and \(y/\mathbf{M}_{\mathbf{G}}x=y\cdot_{m\mathbf{G}}x^{-1\mathbf{G}}=y\cdot_{m\mathbf{H}} x^{-1\mathbf{H}}=y/\mathbf{M}_{H}x\). Also, since \(\mathbf{M}_{G}\) is rigorously compact, the operations on \(\mathbf{G}\) and \(\mathbf{H}\) also agree if one of \(x\), \(y\) is in \(\{\bot,\top\}\). So \(\mathbf{M}_{\mathbf{G}}\in\mathsf{S}(\mathbf{M}_{\mathbf{H}})\). 
Actually, given that every algebra is an ultraproduct of its finitely generated subalgebras, \(\mathsf{ISP}_{\mathsf{U}}\)-classes of abelian groups are fully determined by their intersection with the class of finitely generated abelian groups. Therefore, we are interested only in such intersections; clearly this correspondence preserves and reflects the order. By the fundamental theorem of finitely generated abelian groups we know that every finitely generated abelian group is isomorphic to exactly one group of the form \[\mathbb{Z}^{m}\times(\mathbb{Z}_{p_{1}^{n_{1,1}}}\times\cdots\times\mathbb{Z}_ {p_{1}^{n_{1,m_{1}}}})\times\cdots\times(\mathbb{Z}_{p_{k}^{n_{k,1}}}\times \cdots\times\mathbb{Z}_{p_{k}^{n_{k,m_{k}}}})\] for some \(m,k,m_{1},\ldots,m_{k},n_{i,j}\in\mathbb{N}\), where \(n_{i,j}\geq n_{i,j+1}\) for all suitable \(i,j\), and \(p_{1}<p_{2}<\cdots<p_{k}<\ldots\) is the listing of all primes. We denote by \(\mathcal{FA}\) the set of all groups of this form; also by \(f\mathcal{A}\) we denote all the finite algebras in \(\mathcal{FA}\) (i.e., where \(m=0\)). Since \(\mathcal{FA}\) is a full set of representatives of the isomorphism classes of finitely generated abelian groups, instead of considering intersections of \(\mathsf{ISP}_{\mathsf{U}}\)-classes of abelian groups with the class of finitely generated abelian groups, we can instead focus on intersections of \(\mathsf{ISP}_{\mathsf{U}}\)-classes of abelian groups with \(\mathcal{FA}\). In other words, we have established that the subvariety lattice of \(\mathsf{CM}_{\mathbf{G}}\) is isomorphic to \(\{\mathcal{K}\cap\mathcal{FA}:\mathcal{K}\text{ is an }\mathsf{ISP}_{\mathsf{U}}\text{-class of abelian groups}\}\), where the order is given by: \(\mathcal{K}\cap\mathcal{FA}\leq\mathcal{L}\cap\mathcal{FA}\) iff \(\mathsf{ISP}_{\mathsf{U}}(\mathcal{K}\cap\mathcal{FA})\subseteq\mathsf{ISP}_{ \mathsf{U}}(\mathcal{L}\cap\mathcal{FA})\). In the following, we will write \(\mathcal{K}_{\mathcal{FA}}\) for \(\mathcal{K}\cap\mathcal{FA}\). To the abelian group displayed above, we associate the sequence \[(m;(n_{1,1},\ldots,n_{1,m_{1}},0,\ldots);\ldots;(n_{k,1},\ldots,n_{k,m_{k}},0, \ldots);(0,\ldots);\ldots)\] which is an element of the lattice \(\mathbb{N}\times\mathbf{I}^{\oplus\omega}\). Also, note that the bijective correspondence from \(\mathcal{FA}\) to \(\mathbb{N}\times I^{\oplus\omega}\) is actually a lattice isomorphism between \(\mathbb{N}\times\mathbf{I}^{\oplus\omega}\) and \(\mathcal{FA}\) under the order given by: \(\mathbf{G}\leq_{\mathcal{FA}}\mathbf{H}\) iff \(\mathbf{G}\in\mathsf{IS}(\mathbf{H})\). Now, sets of the form \(\mathcal{K}_{\mathcal{FA}}\), where \(\mathcal{K}\) is an \(\mathsf{ISP}_{\mathsf{U}}\)-class of abelian groups, are of course downsets of \(\mathcal{FA}\), but unfortunately not all downsets of \(\mathcal{FA}\) are of this form. For example, note that for \(r,s\in\mathbb{Z}^{+}\), \(\mathbf{G}\in f\mathcal{A}\) and \(\mathcal{K}\) an \(\mathsf{ISP}_{\mathsf{U}}\)-class of abelian groups, we have: \(\mathbf{G}\times\mathbb{Z}^{r}\in\mathcal{K}\) iff \(\mathbf{G}\times\mathbb{Z}^{s}\in\mathcal{K}\). (So, for example \(\downarrow\{\mathbb{Z}^{2}\}=\{\{1\},\mathbb{Z}^{2},\mathbb{Z}\}\) is a downset of \(\mathcal{FA}\) that is not of the form \(\mathcal{K}_{\mathcal{FA}}\).) To prove this, it suffices to prove: if \(\mathbf{G}\times\mathbb{Z}\in\mathcal{K}\) then \(\mathbf{G}\times\mathbb{Z}^{t}\in\mathcal{K}\) for all \(t\in\mathbb{Z}^{+}\). 
Let \(U\) be a non-principal ultrafilter on \(\mathbb{N}\) and consider the elements \(a=[\overline{1}]_{U}\) and \(b=[(2,2^{2},2^{3},\ldots)]_{U}\) of \(\mathbb{Z}^{\mathbb{N}}/U\); each has infinite order. Note that for all \(m,n\in\mathbb{N}\), the set \(\{i\in\mathbb{N}:m\cdot 1=n\cdot 2^{i}\}\) contains at most one element. Since \(U\) is not principal, we get \(\{i\in\mathbb{N}:m\cdot 1=n\cdot 2^{i}\}\not\in U\), so \(ma\neq nb\). Thus \(\langle a,b\rangle\cong\mathbb{Z}\times\mathbb{Z}\) and \(\mathbb{Z}\times\mathbb{Z}\in\mathsf{P}_{\mathsf{U}}(\mathbb{Z})\). Similarly, to show \(\mathbb{Z}^{t}\in\mathsf{P}_{\mathsf{U}}(\mathbb{Z})\), it suffices to take \(a_{p_{1}}=[(p_{1},p_{1}^{2},p_{1}^{3},\ldots)]_{U}\), \(a_{p_{2}}=[(p_{2},p_{2}^{2},p_{2}^{3},\ldots)]_{U}\), \(\ldots\), \(a_{p_{t}}=[(p_{t},p_{t}^{2},p_{t}^{3},\ldots)]_{U}\), where \(p_{1},p_{2},\ldots,p_{t}\) are distinct primes, and we have \(\langle a_{p_{1}},\ldots,a_{p_{t}}\rangle\cong\mathbb{Z}^{t}\). More generally, we can show \(\{\mathbf{G}\times\mathbb{Z}^{t}:t\in\mathbb{Z}^{+}\}\subseteq\mathsf{P}_{ \mathsf{U}}(\mathbf{G}\times\mathbb{Z})\) for any \(\mathbf{G}\in f\mathcal{A}\). For this reason, it makes sense to identify \(\mathbf{G}\times\mathbb{Z}^{r}\) and \(\mathbf{G}\times\mathbb{Z}^{s}\) whenever \(r\) and \(s\) are both non-zero. This can be done by considering the subset \(\mathcal{FA}^{\prime}=f\mathcal{A}\cup\{\mathbb{Z}\times\mathbf{G}:\mathbf{G} \in f\mathcal{A}\}\) of \(\mathcal{FA}\). The set \(\mathcal{FA}^{\prime}\) also forms a lattice (actually a sublattice of \(\mathcal{FA}\)) isomorphic to \(\mathbf{P}=\mathbf{2}\times\mathbf{I}^{\oplus\omega}\). Therefore, moving through the isomorphism, we can apply the definitions of \(\exp\) and primes also to downsets of \(\mathcal{FA}^{\prime}\). To be more specific, a downset \(D\) of \(\mathcal{FA}^{\prime}\) is \(\mathbb{Z}\)-closed if for all \(\mathbf{G}\in f\mathcal{A}\), \(\exp(D\cap\uparrow\mathbf{G})\) or \(\operatorname{primes}(D\cap\uparrow\mathbf{G})\) being unbounded implies that \(\mathbb{Z}\times\mathbf{G}\in D\). Also, by the fact established in the last paragraph we have a lattice isomorphism between \(\{\mathcal{K}_{\mathcal{FA}}:\mathcal{K}\text{ is a }\mathsf{ISP}_{\mathsf{U}}\text{-class}\}\) and \(\{\mathcal{K}_{\mathcal{FA}^{\prime}}:\mathcal{K}\text{ is a }\mathsf{ISP}_{\mathsf{U}}\text{-class}\}\), where \(\mathcal{K}_{\mathcal{FA}^{\prime}}=\mathcal{K}\cap\mathcal{FA}^{\prime}\). Clearly, if \(\mathcal{K}\) is an \(\mathsf{ISP}_{\mathsf{U}}\text{-class}\) of abelian groups, then \(\mathcal{K}_{\mathcal{FA}^{\prime}}\) is a downset of \(\mathcal{FA}^{\prime}\). Unfortunately, still not every downset of \(\mathcal{FA}^{\prime}\) is of this form. For example, \(\{\mathbb{Z}_{p}:p\text{ is prime}\}\) is a downset of \(\mathcal{FA}^{\prime}\), but since \(\mathbb{Z}\in\mathsf{P}_{\mathsf{U}}(\{\mathbb{Z}_{p}:p\text{ is prime}\})\), \(\{\mathbb{Z}_{p}:p\text{ is prime}\}\) is not of the form \(\mathcal{K}_{\mathcal{FA}^{\prime}}\). In the following we show that \(\{\mathcal{K}_{\mathcal{FA}^{\prime}}:\mathcal{K}\text{ is an }\mathsf{ISP}_{\mathsf{U}}\text{-class}\}\) is equal to the lattice of \(\mathbb{Z}\)-closed downsets of \(\mathcal{FA}^{\prime}\). First we note that for \(X\subseteq P\), we have that \(\exp(X)\) and \(\text{primes}(X)\) are bounded iff there exist \(K,N\in\mathbb{N}\) such that for all \(a\in X\), \(k>K\), \(n,m\in\mathbb{N}\), we have \(a_{k}=\overline{0}\) and \(a_{n,m}\leq N\). 
Therefore, for \(X\subseteq\mathcal{FA}^{\prime}\), we have that \(\exp(X)\) and \(\text{primes}(X)\) are bounded iff there exist \(K,N\in\mathbb{N}\) such that the cyclic groups in the decomposition of groups in \(X\) are among the \(\mathbb{Z}_{p_{k}^{n}}\), where \(k\leq K\) and \(n\leq N\). This is in turn equivalent to asking that there is \(M\in\mathbb{N}\) such that all elements in all the groups in \(X\) have order at most \(M\) (by taking \(M=(p_{1}\cdots p_{K})^{N}\)). Now, for an \(\mathsf{ISP}_{\mathsf{U}}\)-class \(\mathcal{K}\) of abelian groups, \(\mathcal{K}_{\mathcal{FA}^{\prime}}\) is a downset of \(\mathcal{FA}^{\prime}\). To show that it is \(\mathbb{Z}\)-closed, let \(\mathbf{G}\in f\mathcal{A}\). If one of \(\exp(\mathcal{K}_{\mathcal{FA}^{\prime}}\cap\uparrow\mathbf{G})\), \(\text{primes}(\mathcal{K}_{\mathcal{FA}^{\prime}}\cap\uparrow\mathbf{G})\) is unbounded, there is no uniform bound on the order of the elements in the groups from \(\mathcal{K}_{\mathcal{FA}^{\prime}}\cap\uparrow\mathbf{G}\); so, there is an infinite subset \(\{\mathbf{H}_{n}:n\in\mathbb{N}\}\) of \(\mathcal{K}_{\mathcal{FA}^{\prime}}\cap\uparrow\mathbf{G}\) such that \(\mathbf{H}_{n}\) contains an element of order greater than \(n\), say \(h_{n}\). Therefore, the element \([(h_{n})]\) in any fixed ultraproduct \(\mathbf{H}\) of \(\{\mathbf{H}_{n}:n\in\mathbb{N}\}\) over a non-principal ultrafilter has infinite order, and consequently \(\mathbf{H}\) contains a copy of \(\mathbb{Z}\). On the other hand, note that if \(\mathbf{G}=\{g_{1},\ldots,g_{k}\}\), then for every group \(\mathbf{A}\) we have \(\mathbf{G}\in\mathsf{IS}(\mathbf{A})\) iff \(\mathbf{A}\vDash\phi_{\mathbf{G}}\), where \(\phi_{\mathbf{G}}\) encodes the multiplication of \(\mathbf{G}\): \(\exists x_{g_{1}},\ldots,x_{g_{k}}\) (\(\bigwedge\{x_{g_{i}}\neq x_{g_{j}}:i\neq j\}\wedge\bigwedge\{x_{g_{i}}x_{g_{j}}=x_{g_{i}g_{j}}:1\leq i,j\leq k\}\)). Since, for all \(n\), \(\mathbf{H}_{n}\) contains a copy of \(\mathbf{G}\), \(\mathbf{H}_{n}\) satisfies \(\phi_{\mathbf{G}}\); hence \(\mathbf{H}\) also satisfies \(\phi_{\mathbf{G}}\) and \(\mathbf{H}\) contains a subgroup isomorphic to \(\mathbf{G}\). Therefore, \(\mathbb{Z}\times\mathbf{G}\in\mathsf{IS}(\mathbf{H})\subseteq\mathsf{ISP}_{\mathsf{U}}(\mathcal{K})=\mathcal{K}\) and so \(\mathbb{Z}\times\mathbf{G}\in\mathcal{K}_{\mathcal{FA}^{\prime}}\). Conversely, for a \(\mathbb{Z}\)-closed downset \(D\) of \(\mathcal{FA}^{\prime}\), we define \(\mathcal{K}_{D}=\mathsf{ISP}_{\mathsf{U}}(D)\) and prove that \(\mathcal{K}_{D}\cap\mathcal{FA}^{\prime}=D\). Since \(D\subseteq\mathcal{K}_{D}\) and \(D\subseteq\mathcal{FA}^{\prime}\), it suffices to prove \(\mathcal{K}_{D}\cap\mathcal{FA}^{\prime}\subseteq D\). If \(\mathbb{Z}^{m}\times\mathbf{G}\in\mathcal{K}_{D}\cap\mathcal{FA}^{\prime}\), where \(m\in\{0,1\}\) and \(\mathbf{G}\in f\mathcal{A}\), then a copy of \(\mathbb{Z}^{m}\times\mathbf{G}\) is contained in the ultraproduct \(\prod\mathbf{A}_{i}/U\) of some \(\{\mathbf{A}_{i}:i\in I\}\subseteq D\). Since \(\prod\mathbf{A}_{i}/U\) contains a copy of \(\mathbf{G}\), it satisfies the sentence \(\phi_{\mathbf{G}}\), so \(I_{\mathbf{G}}:=\{i\in I:\mathbf{G}\in\mathsf{IS}(\mathbf{A}_{i})\}=\{i\in I:\mathbf{A}_{i}\vDash\phi_{\mathbf{G}}\}\in U\). If \(m=1\), then \(\prod\mathbf{A}_{i}/U\) contains a copy of \(\mathbb{Z}\), so it has an element of infinite order.
Therefore, there is no \(M\) such that \(\prod\mathbf{A}_{i}/U\) satisfies the sentence \((\forall x)(Mx=0)\), so there is no \(M\) such that \(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\}\) satisfy the sentence, so there is no uniform bound on the orders of the elements of \(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\}\); thus \(\exp(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\})\) or \(\text{primes}(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\})\) is unbounded. Since, \(\exp(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\})\subseteq\exp(D\cap\uparrow\mathbf{G})\), \(\text{primes}(\{\mathbf{A}_{i}:i\in I_{\mathbf{G}}\})\subseteq\text{primes}(D\cap \uparrow\mathbf{G})\) and \(D\) is a \(\mathbb{Z}\)-closed downset, we get \(\mathbb{Z}^{m}\times\mathbf{G}=\mathbb{Z}\times\mathbf{G}\in D\). If \(m=0\), then we also have \(\mathbb{Z}^{m}\times\mathbf{G}=\mathbf{G}\in D\). Thus the lattice \(\{\mathcal{K}_{\mathcal{FA}^{\prime}}:\mathcal{K}\text{ is a }\mathsf{ISP}_{\mathsf{U}}\text{-class}\}\) is isomorphic to \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P})\), and hence the lattice \(\Lambda(\mathsf{CM}_{\mathsf{G}})\) of subvarieties of \(\mathsf{CM}_{\mathsf{G}}\) is isomorphic to the lattice \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P})\). **Corollary 4.3**.: _The variety generated by \(\{\mathbf{M}_{\mathbb{Z}_{p}}:p\text{ is prime}\}\) has continuum-many subvarieties. Therefore the subvariety lattices of \(\mathsf{M}_{\mathsf{G}}\) and of \(\mathsf{M}\) have size continuum._ Proof.: For every prime \(p\), the variety \(\mathsf{V}(\mathbf{M}_{\mathbb{Z}_{p}})\) corresponds to the principal downset of the sequence \((0;0;\ldots;0;1;0;\ldots)\) in \(\mathbf{P}\), where the \(1\) is at the position of the prime \(p\). The variety generated by all \(\mathbf{M}_{\mathbb{Z}_{p}}\text{'s is the join of all of the }\mathsf{V}(\mathbf{M}_{\mathbb{Z}_{p}})\), where \(p\) is prime, and corresponds to the \(\mathbb{Z}\)-closed downset \[\overline{P\mathbb{N}}:=\{(1;0;0;\ldots),(0;1;0;\ldots),\ldots,(0;\ldots;1;0; \ldots),\ldots\}\] in \(\mathbf{P}\). The \(\mathbb{Z}\)-closed subdownsets of \(\overline{P\mathbb{N}}\) in the lattice \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P})\) is clearly isomorphic, as a lattice, to \(\mathcal{P}(\mathbb{N})\). We denote by \(\mathsf{CM}_{\mathbf{G}Z}\) the variety generated by the algebras in \(\mathsf{M}\) that satisfy the formula (ZGroup) \[x\overline{\top}=x\text{ or }x(x\backslash 1)=1.\] Let \(\mathbf{F}\) be the poset on \(\{0,1,2,3\}\), where \(0<1,2,3\) and \(1,2,3\) are incomparable. For a downset \(D\) of \(\mathbf{P}\times\mathbf{F}\) and \(i\in F\), we set \(D_{i}=\{a:(a,i)\in D\}\). A downset \(D\) of \(\mathbf{P}\times\mathbf{F}\) is called \(\mathbb{Z}\)-closed if \(D_{0}\), \(D_{1}\), \(D_{2}\) and \(D_{3}\) are \(\mathbb{Z}\)-closed downsets of \(\mathbf{P}\); we denote by \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P}\times\mathbf{F})\) the lattice of all \(\mathbb{Z}\)-closed downsets of \(\mathbf{P}\times\mathbf{F}\). 
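To illustrate the definition with a quick example (the specific downset is chosen only for illustration and plays no role later): the principal downset \(D=\downarrow((0;1;0;0;\dots),1)\) of \(\mathbf{P}\times\mathbf{F}\) has \(D_{0}=D_{1}=\downarrow(0;1;0;0;\dots)\) and \(D_{2}=D_{3}=\emptyset\); since each \(D_{i}\) is finite, the sets \(\exp(D_{i}\cap\uparrow a)\) and \(\text{primes}(D_{i}\cap\uparrow a)\) are bounded for every \(a\in P\), so each \(D_{i}\) is vacuously \(\mathbb{Z}\)-closed and \(D\in\mathcal{O}_{\mathbb{Z}}(\mathbf{P}\times\mathbf{F})\). By contrast, if some \(D_{i}\) contains elements with nonzero entries at unboundedly many positions (as in the downset \(\overline{P\mathbb{N}}\) of Corollary 4.3), then \(\mathbb{Z}\)-closedness forces \((1;0;0;\dots)\in D_{i}\).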
**Theorem 4.4**.: _The subvariety lattice of \(\mathsf{CM}_{\mathbb{G}Z}\) is isomorphic to \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P}\times\mathbf{F})\)._ Proof.: By Theorem 3.5 and Corollary 2.3 the FSI members of \(\mathsf{CM}_{\mathbb{G}Z}\) are unilinear residuated lattices of the form \(\mathbf{R}\), \(\mathbf{R}+1\), \(\mathbf{R}+2\) or \(\mathbf{R}+3\), where \(\mathbf{R}=\mathbf{M}_{\mathbf{G}}\) and \(\mathbf{G}\) is an abelian group, \(\mathbf{A}\) is the \(\top\)-cancellative monoid on \(G\cup\{\top\}\); \(\mathbf{R}+1=\mathbf{R}_{\mathbf{A},\mathbf{B}_{1}}\), where \(\mathbf{B}_{1}\) is the \(\bot\)-semigroup based on \(\{\bot,b\}\) given in Figure 2 with \(b^{2}=\bot\); \(\mathbf{R}+2=\mathbf{R}_{\mathbf{A},\mathbf{B}_{2}}\), where \(\mathbf{B}_{2}\) is the \(\bot\)-semigroup based on \(\{\bot,b_{1},b_{2}\}\) given in Figure 2; and \(\mathbf{R}+3=\mathbf{R}_{\mathbf{A},\mathbf{B}_{3}}\), where \(\mathbf{B}_{3}\) is the \(\bot\)-semigroup based on \(\{\bot,b\}\) given in Figure 2 with \(b^{2}=b\); we define \(\mathbf{R}+0=\mathbf{R}\). Note that \(\mathbf{R}\) is a subalgebra of \(\mathbf{R}+i\), for all \(i\in\{0,1,2,3\}\). In the proof of Theorem 4.2, we saw that subvarieties of \(\mathsf{CM}_{\mathbb{G}}\) are determined by the \(\mathbb{Z}\)-closed downsets of \(\mathcal{F}\mathcal{A}^{\prime}\). We now sketch how subvarieties of \(\mathsf{CM}_{\mathbb{G}Z}\) are determined by the \(\mathbb{Z}\)-closed downsets of the poset \(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}:=\{\mathbf{M}_{ \mathbf{G}}+i:\mathbf{G}\in\mathcal{F}\mathcal{A}^{\prime},i\in F\}\), where the order is given by \(\mathbf{M}_{\mathbf{G}}+i\leq\mathbf{M}_{\mathbf{H}}+j\) iff \(\mathbf{G}\leq_{\mathcal{F}\mathcal{A}^{\prime}}\mathbf{H}\) and \(i\leq_{\mathbf{F}}j\); this poset is clearly isomorphic to \(\mathbf{P}\times\mathbf{F}\), so the definition of \(\mathbb{Z}\)-closed downsets of \(\mathbf{P}\times\mathbf{F}\) can be transferred here. More specifically, a downset \(D\) of \(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}\) is \(\mathbb{Z}\)-closed iff for all \(0\leq i\leq 3\), \(D\cap(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\{i\})\) is isomorphic to a \(\mathbb{Z}\)-closed downset of \(\mathcal{F}\mathcal{A}^{\prime}\). Every subvariety \(\mathcal{V}\) of \(\mathsf{CM}_{\mathbb{G}Z}\) is determined by its finitely generated FSI algebras. These are finitely generated algebras of the form \(\mathbf{R}\), \(\mathbf{R}+1\), \(\mathbf{R}+2\) or \(\mathbf{R}+3\), where \(\mathbf{R}\in(\mathsf{CM}_{\mathbb{G}})_{FSI}\), i.e., \(\mathbf{R}=\mathbf{M}_{\mathbf{G}}\), and \(\mathbf{G}\) is a finitely generated abelian group. So, \(\mathcal{V}_{FSI}\) is a downset of \(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}\). For \(0\leq i\leq 3\), if \(\mathbf{G}\in f\mathcal{A}\) and \(\exp(D_{i}\cap\uparrow G)\) or primes\((D_{i}\cap\uparrow G)\) is unbounded, where \(D_{i}=\{\mathbf{K}\in\mathcal{F}\mathcal{A}^{\prime}:\mathbf{M}_{\mathbf{K}}+i \in\mathcal{V}_{FSI}\}\), then by the proof of Theorem 4.2, we have \(\mathbb{Z}\times\mathbf{G}\in D_{i}\). So \(D_{i}\) is a \(\mathbb{Z}\)-closed downset of \(\mathcal{F}\mathcal{A}^{\prime}\) for \(0\leq i\leq 3\) and hence \(\mathcal{V}_{FSI}\) is a \(\mathbb{Z}\)-closed downset of \(M_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}\). 
By Corollary 2.3, for every downset \(D\) of \(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}\), the ultraproducts of algebras from \(D\) are isomorphic to \(\mathbf{M}_{\mathbf{G}}+i\), for some \(0\leq i\leq 3\). It can be easily shown that for such an ultraproduct \(\mathbf{M}_{\mathbf{G}}+i\), \(\mathbf{G}\) is an ultraproduct of \(\{\mathbf{H}:i\leq_{\mathbf{F}}j,\mathbf{M}_{\mathbf{H}}+j\in D\}\); since \(D\) is a downset, actually \(\mathbf{G}\) is an ultraproduct of \(\{\mathbf{H}:\mathbf{M}_{\mathbf{H}}+i\in D\}\). (Also, conversely, if \(\mathbf{G}\) is an ultraproduct of \(\{\mathbf{H}_{j}:j\in J\}\) and \(i\in F\), then \(\mathbf{M}_{\mathbf{G}}+i\) is isomorphic to an ultraproduct of algebras in the downset \(\{\mathbf{M}_{\mathbf{K}_{j}}+k:j\in J,\mathbf{K}_{j}\leq_{\mathcal{F}\mathcal{A}^{\prime}}\mathbf{H}_{j},k\leq_{\mathbf{F}}i\}\) of \(\mathbf{M}_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F}\).) So if \(D\) is \(\mathbb{Z}\)-closed, then \(\mathbf{G}\in D_{i}\); hence \(\mathbf{M}_{\mathbf{G}}+i\in D\). Consequently, we have \(\mathsf{ISP}_{\mathsf{U}}(D)\cap(M_{\mathcal{F}\mathcal{A}^{\prime}}+\mathbf{F})=D\), hence the subvariety lattice of \(\mathsf{CM}_{\mathbb{G}Z}\) is isomorphic to \(\mathcal{O}_{\mathbb{Z}}(\mathbf{P}\times\mathbf{F})\).

## 5. The finite embeddability property

In this section we establish the finite embeddability property for certain subvarieties of \(\mathsf{SRL}\). Recall that a class \(\mathcal{K}\) is said to have the _finite embeddability property_ (FEP) if for every algebra \(\mathbf{A}\in\mathcal{K}\) and a finite subset \(B\) of \(A\), there exists a finite algebra \(\mathbf{C}\in\mathcal{K}\) such that the partial subalgebra \(\mathbf{B}\) of \(\mathbf{A}\) induced by \(B\) embeds in \(\mathbf{C}\). For varieties axiomatized by a recursive set of equations, the valid universal sentences form a recursively enumerable set. Also, if the variety has the FEP, then any universal sentence that is not valid will fail in a finite algebra of the variety. By enumerating these finite algebras (using the finite axiomatizability of the variety) we can thus enumerate the universal sentences that fail in the variety. Therefore, recursively axiomatizable varieties with the FEP have a decidable universal theory; moreover, they are generated as universal classes (thus also as quasivarieties and as varieties) by their finite algebras. **Theorem 5.1**.: _The variety \(\mathsf{CM_{G}}\) has the FEP._ Proof.: To prove this, first we claim that the variety of abelian groups has the FEP. By Theorem 5.1 of [12], an abelian group is subdirectly irreducible if and only if it is a subgroup of a \(p\)-cyclic group, i.e., either it is a \(p^{\infty}\)-group or a cyclic group of order \(p^{n}\), where \(p\) is a prime. So every finitely generated subdirectly irreducible abelian group is finite. By Corollary 2 in [2] every finitely generated abelian group is residually finite. By Theorem 1 in [4] this is equivalent to having the FEP, so the variety of abelian groups has the FEP. Note that the above characterization of the finitely generated subdirectly irreducibles does not extend to algebras in \(\mathsf{CM_{G}}\), since the notion of subdirectly irreducible is different. Nevertheless, we can make use of the FEP for abelian groups. It suffices to prove the FEP for the subdirectly irreducible algebras in \(\mathsf{CM_{G}}\). Let \(\mathbf{G}\) be an abelian group and \(B\) a finite subset of \(\mathbf{M_{G}}\).
Without loss of generality, we can assume \(\bot,\top,1\in B\), where \(\top\) and \(\bot\) denote the bounds of \(\mathbf{M_{G}}\), so \((B,\wedge,\vee)\) is a sublattice of \(\mathbf{M}_{G}\). Then \((B^{\prime},\cdot,1)\) is a finite partial subgroup of \(\mathbf{G}\), where \(B^{\prime}=B\setminus\{\top,\bot\}\). By the FEP for abelian groups, there exists a finite abelian group \(\mathbf{C}^{\prime}\) such that \((B^{\prime},\cdot,1)\) can be embedded into \(\mathbf{C}^{\prime}\); without loss of generality we assume that \(B^{\prime}\subseteq C^{\prime}\). We consider the set \(C=C^{\prime}\cup\{\top,\bot\}\) and define an order keeping the elements of \(C^{\prime}\) incomparable and setting \(\bot<x<\top\), for all \(x\in C^{\prime}\). Also, we extend the multiplication of \(\mathbf{C}^{\prime}\) by stipulating that \(\bot\) is absorbing for \(C\) and \(\top\) is absorbing for \(C\setminus\{\bot\}\). Finally, we define \(x\to y=x^{-1}\cdot y\) for \(x,y\in C^{\prime}\), \(\top\to u=\bot=v\to\bot\) for \(u\neq\top\) and \(v\neq\bot\), and \(w\to\top=\top=\bot\to w\), for all \(w\). Since \((B,\wedge,\vee)\) is a sublattice of \(\mathbf{M}_{G}\) and \(B^{\prime}\subseteq C^{\prime}\), \((B,\wedge,\vee)\) is a sublattice of \((C,\wedge,\vee)\). For all \(x,y\in B^{\prime}\), if \(x\cdot_{\mathbf{B}}y\in B\), then \(x\cdot_{\mathbf{B}}y=x\cdot_{\mathbf{B^{\prime}}}y=x\cdot_{\mathbf{C^{\prime}}}y=x\cdot_{\mathbf{C}}y\), since \(\mathbf{G}\) is closed under multiplication; if \(x\to_{\mathbf{B}}y\in B\), then \(x^{-1\mathbf{B}}\in B\) and \(x\to_{\mathbf{B}}y=x^{-1\mathbf{B}}\cdot_{\mathbf{B}}y=x^{-1\mathbf{B}^{\prime}}\cdot_{\mathbf{B}^{\prime}}y=x^{-1\mathbf{C}^{\prime}}\cdot_{\mathbf{C}^{\prime}}y=x\to_{\mathbf{C}}y\), since \(\mathbf{G}\) is also closed under inverses. Finally, if \(x,y\in B\) and \(x\in\{\bot,\top\}\) or \(y\in\{\bot,\top\}\), then the embedding works since \(\bot\to_{\mathbf{M_{G}}}a=\top=a\to_{\mathbf{M_{G}}}\top\), \(a\bot=\bot=\bot a\) for all \(a\in M_{G}\) and \(b\to_{\mathbf{M_{G}}}\bot=\bot=\top\to_{\mathbf{M_{G}}}c\), \(b\top=\top=\top b\) for all \(b\neq\bot\) and \(c\neq\top\). **Corollary 5.2**.: _The universal theory of the variety \(\mathsf{CM_{G}}\) is decidable._ We can actually prove the FEP for many more subvarieties of \(\mathsf{SRL}\), unrelated to \(\mathsf{CM}_{\mathsf{G}}\), using a construction based on residuated frames. An equation is called _knotted_ if it is of the form \(x^{m}\leq x^{n}\), where \(n\neq m\). Also, we consider the following weak versions of commutativity. For every \(n\in\mathbb{Z}^{+}\) and nonconstant _partition_ \(a\) of \(n+1\) (i.e., \(a=(a_{0},a_{1},\ldots,a_{n})\), where \(a_{0}+a_{1}+\cdots+a_{n}=n+1\) and not all \(a_{i}\)'s are \(1\)), we consider the \((n+1)\)-variable identity \((a)\): \[xy_{1}xy_{2}\cdots y_{n}x=x^{a_{0}}y_{1}x^{a_{1}}y_{2}\cdots y_{n}x^{a_{n}}.\] For example, \((2,0)\) is the identity \(xyx=xxy\) and \((2,0,1)\) is the identity \(xyxzx=xxyzx\). We call all of these identities _weak commutativity_ identities. **Theorem 5.3**.: _If a subvariety of \(\mathsf{SRL}\) is axiomatized by a knotted identity, a weak commutativity identity and any additional (possibly empty) set of equations over \(\{\vee,\cdot,1\}\), then it has the FEP._ Proof.: If \(\mathcal{V}\) is such a variety, it suffices to prove the FEP for the subdirectly irreducible algebras in \(\mathcal{V}\); so it suffices to prove it for unilinear residuated lattices.
Let \(\mathbf{A}\) be a unilinear residuated lattice in \(\mathcal{V}\) and \(\mathbf{B}\) be a finite partial subalgebra of \(\mathbf{A}\). Let \(\mathbf{W}\) be the submonoid of \(\mathbf{A}\) generated by \(B\), \(W^{\prime}=W\times B\times W\) and let \(N\subseteq W\times W^{\prime}\) be defined by: \(x\ N\ (y,b,z)\) if \(yxz\leq b\). Then \(\mathbf{W}_{\mathbf{A},\mathbf{B}}=(W,W^{\prime},N,\cdot,1)\) is a residuated frame in the sense of [9] and the Galois algebra \(\mathbf{W}_{\mathbf{A},\mathbf{B}}{}^{+}=(\gamma_{N}[\mathcal{P}(W)],\cap,\cup_{\gamma_{N}},\cdot_{\gamma_{N}},\gamma_{N}(\{1\}),\setminus,/)\) is a residuated lattice, where \(X\cup_{\gamma}Y=\gamma(X\cup Y)\), \(X\cdot_{\gamma}Y=\gamma(X\cdot Y)\), \(X\backslash Y=\{z\in W:zX\subseteq Y\}\) and \(Y/X=\{z\in W:Xz\subseteq Y\}\). Moreover, [9] shows that \(\mathbf{W}_{\mathbf{A},\mathbf{B}}{}^{+}\) satisfies all \(\{\vee,\cdot,1\}\)-equations that \(\mathbf{A}\) satisfies and that \(\mathbf{B}\) embeds in \(\mathbf{W}_{\mathbf{A},\mathbf{B}}{}^{+}\). Also, [1] shows that such \(\mathbf{W}_{\mathbf{A},\mathbf{B}}{}^{+}\) is finite, due to the knotted rule and the weak commutativity. So it suffices to show that it is in \(\mathsf{SRL}\); we will show that \(\mathbf{W}_{\mathbf{A},\mathbf{B}}{}^{+}\) is actually unilinear. Note that for all \((y,b,z)\in W^{\prime}\), we have \(a\in\{(y,b,z)\}^{\triangleleft}\) iff \(a\ N\ (y,b,z)\) iff \(yaz\leq b\) iff \(a\leq y\backslash b/z\). Therefore, \(\{(y,b,z)\}^{\triangleleft}=\downarrow(y\backslash b/z)\). By basic properties of Galois connections, every element \(X\) of \(\gamma_{N}[\mathcal{P}(W)]\) is an intersection of sets of the form \(\{(y,b,z)\}^{\triangleleft}\); actually \(X=\bigcap\{\{w\}^{\triangleleft}:w\in X^{\triangleright}\}\). Therefore, \(X\) is an intersection of principal downsets of \(\mathbf{A}\). Since \(\mathbf{A}\) is unilinear, \(X\) is either equal to \(A\) itself or a linear downset of \(A\). Now, let \(X,Y\in\gamma_{N}[\mathcal{P}(W)]\); hence each of them is either equal to \(A\) or a linear subset of \(A\). If \(X\nsubseteq Y\) and \(Y\nsubseteq X\), then none of them equals \(A\), hence they are both linear downsets. Since \(X\nsubseteq Y\), there is an \(x\in X\) such that \(x\not\in Y\). Since \(Y\nsubseteq X\), not every element of \(Y\) is below \(x\) (otherwise \(Y\subseteq\downarrow x\subseteq X\)), so there exists \(y\in Y\) with \(y\not\leq x\). Since \(x\not\in Y\) and \(Y\) is a downset, we get \(x\not\leq y\); therefore in this case \(\mathbf{A}\) is not linear. By unilinearity of \(\mathbf{A}\), it has a top \(\top\) and \(\top=x\lor y\in X\cup_{\gamma}Y\), which is also a downset; hence \(X\cup_{\gamma}Y=A\). Also, if \(z\in X\cap Y\), then \(z\leq x,y\) and by the unilinearity of \(\mathbf{A}\), we get \(z=\bot\); so \(X\cap Y=\{\bot\}\). Consequently, \(\gamma_{N}[\mathcal{P}(W)]\) is unilinear. Note that all knotted identities and all weak commutativity identities are equations over \(\{\vee,\cdot,1\}\). So, the theorem includes cases where multiple knotted and/or multiple weak commutativity equations are included in the axiomatization. **Corollary 5.4**.: _If a subvariety of \(\mathsf{SRL}\) is axiomatized by a knotted identity, a weak commutativity identity and any (possibly empty) set of equations over \(\{\vee,\cdot,1\}\), then its universal theory is decidable._

## 6. Constructing Compact URLs
A unilinear residuated lattice \(\mathbf{R}\) is called _compact_ if it is \(\top\)-unital (i.e., it satisfies: \(x=\overline{\bot}\) or \(x\overline{\top}=\overline{\top}=\overline{\top}x\)) and \(R\setminus\{\top,\bot\}\) is closed under multiplication. In other words, non-linear compact URLs are obtained from a partially ordered monoid \(\mathbf{M}\) that is a union of chains by adding bounds that absorb all elements of \(M\). We will provide some constructions of compact URLs, but first we start by giving an axiomatization. **Lemma 6.1**.: _The class of compact URLs is axiomatized by the sentences \(\forall x(x=\overline{\bot}\) or \(x\overline{\top}=\overline{\top}=\overline{\top}x)\) and \(\forall x,y,z\,(x=\overline{\top}\) or \(x(y\wedge z)=xy\wedge xz)\)._ Proof.: By the definition of compactness, it suffices to show that, for every \(\top\)-unital non-linear unilinear residuated lattice \(\mathbf{R}\), the second formula captures the fact that \(R\setminus\{\top,\bot\}\) is closed under multiplication. Note that if \(a,b\not\in\{\top,\bot\}\), then \(ab\top=a\top=\top\), so \(ab\neq\bot\). Assume first that \(\mathbf{R}\) satisfies the second formula, but there exist \(a_{1},a_{2}\in R\setminus\{\bot,\top\}\) such that \(a_{1}a_{2}=\top\). Since \(\mathbf{R}\) is not linear, there exists an element \(a_{3}\) that is incomparable to \(a_{1}\) or to \(a_{2}\); without loss of generality, \(a_{3}\) is incomparable to \(a_{2}\), so \(a_{3}\in R\setminus\{\bot,\top\}\). Hence \[\bot=a_{1}\bot=a_{1}(a_{2}\wedge a_{3})=a_{1}a_{2}\wedge a_{1}a_{3}=\top\wedge a_{1}a_{3}=a_{1}a_{3},\] a contradiction. Thus \(R\setminus\{\top,\bot\}\) is closed under multiplication. Now assume \(R\setminus\{\top,\bot\}\) is closed under multiplication and that \(x,y,z\in R\) with \(x\neq\top\). If \(x=\bot\), then the formula holds, so we assume that \(x\neq\bot\). Also, if \(y\) and \(z\) are comparable, then \(x(y\wedge z)=xy\wedge xz\) holds since multiplication preserves the order; so we assume that \(y\) and \(z\) are incomparable. In this case, \(xy\lor xz=x(y\lor z)=x\top=\top\). Since \(R\setminus\{\top,\bot\}\) is closed under multiplication, \(xy\) and \(xz\) are incomparable, hence \(x(y\wedge z)=x\cdot\bot=\bot=xy\wedge xz\). It follows that an alternative second formula is \(\forall x,y,z\,(x=\overline{\top}\;\text{or}\;(y\wedge z)x=yx\wedge zx)\). **Corollary 6.2**.: _The variety generated by the class of compact URLs is axiomatized by_ \[1=\gamma_{1}(u\backslash v)\vee\gamma_{2}(v\backslash u)\vee\gamma_{3}(x\backslash(u\wedge v))\vee\gamma_{4}((u\lor v)\backslash(x(u\lor v)\wedge(u\lor v)x))\] \[1=\gamma_{5}(u\backslash v)\vee\gamma_{6}(v\backslash u)\vee\gamma_{7}((u\lor v)\backslash x)\vee\gamma_{8}((xu\wedge xv)\backslash x(u\wedge v))\] _where \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\), \(\gamma_{4}\), \(\gamma_{5}\), \(\gamma_{6}\), \(\gamma_{7}\), \(\gamma_{8}\in\Gamma(Var)\)._ **Lemma 6.3**.: _If \(\mathbf{R}\) is a compact URL, then the comparability relation \(\equiv\) on \(\mathbf{M}\), where \(M=R\setminus\{\bot,\top\}\), is a congruence relation and the quotient monoid \(\mathbf{M}/\equiv\) is cancellative. Also, \([1]_{\equiv}\) defines a totally-ordered submonoid of \(\mathbf{M}\)._ Proof.: That the comparability relation \(\equiv\) is a congruence on \(\mathbf{M}\) follows from the order-preservation of multiplication and the unilinear order.
For the cancellativity of \(\mathbf{M}/\equiv\), note that for \(x,y,z\in M\) with \(y\parallel z\), we have \(\top=x(y\lor z)=xy\lor xz\), and since \(M\) is closed under multiplication, we get \(xy\parallel xz\). Finally, \([1]_{\equiv}\) is a totally-ordered submonoid of \(\mathbf{M}\) since \(x\equiv 1\) and \(y\equiv 1\) implies \(xy\equiv 1\cdot 1=1\).

### From a finite cyclic monoid

We show how to construct a compact URL starting from a finite cyclic monoid. Given a finite cyclic monoid \(\mathbf{M}\) generated by an element \(a\) of \(M\), there is a smallest natural number \(r\), called the _index_, such that \(a^{r}=a^{r+s}\) for some positive integer \(s\); the smallest such \(s\) then is called the _period_. So \(M=\{1,a,\ldots,a^{r},\ldots,a^{r+s-1}\}\) and \(|M|=r+s\). Note that every natural number \(n>r\) can be written as \(n=r+ms+k\) for unique \(m\in\mathbb{N}\) and \(0\leq k<s\); we define \([n]_{r}^{s}:=r+k\) for \(n\geq r+s\) and \([n]_{r}^{s}:=n\) for \(0\leq n<r+s\). (We will write \([n]\), when \(r,s\) are clear from the context.) Then the multiplication on \(\mathbf{M}\) is given by \(a^{i}\cdot a^{j}=a^{[i+j]_{r}^{s}}\). In particular, \(\{a^{r},\ldots,a^{r+s-1}\}\) is a subsemigroup of \(\mathbf{M}\) and it is a group in its own right with identity element \(a^{t}\), where \(r\leq t\leq r+s-1\) and \(t\equiv 0\,(\operatorname{mod}\,s)\); so it is isomorphic to \(\mathbb{Z}_{s}\). We extend the multiplication of \(\mathbf{M}\) to the set \(R=M\cup\{\bot,\top\}\) by \(\bot x=x\bot=\bot\) for all \(x\in R\), and \(\top x=x\top=\top\) for all \(x\neq\bot\). Also we define an order on \(R\) by \(\bot\leq x\leq\top\) for all \(x\in R\) and \(a^{i}\leq a^{j}\) if and only if \(j=i+ns\) for some \(n\in\mathbb{N}\), where \(0\leq i,j\leq r+s-1\); see Figure 4(left). It is easy to see that this yields a unilinear lattice order; we denote by \(\mathbf{R}_{\mathbf{M}}\) the resulting lattice-ordered monoid. **Theorem 6.4**.: _If \(\mathbf{M}\) is a finite cyclic monoid, then \(\mathbf{R}_{\mathbf{M}}\) is the reduct of a residuated lattice._ Proof.: Since both \(\top\) and \(\bot\) are zero elements for \(\mathbf{M}\) and \(\bot\top=\top\bot=\bot\), the associativity of \(\mathbf{M}\) easily extends to the associativity of \(\mathbf{R}_{\mathbf{M}}\). Since \(R\) is finite, by Corollary 1.3, it suffices to show that multiplication distributes over binary joins; we will show distribution from the left: \(x(y\lor z)=xy\lor xz\), for all \(x,y,z\in R\). If any of \(x,y,z\) is \(\top\) or \(\bot\), it is easy to see that the equation holds, so we assume that \(x,y,z\in M\): \(x=a^{i},y=a^{j}\) and \(z=a^{k}\) for some \(0\leq i,j,k\leq r+s-1\). If \(y=a^{j}\) and \(z=a^{k}\) are incomparable, then \(j\not\equiv k\,(\operatorname{mod}\,s)\) by definition, so we have \(i+j\not\equiv i+k\,(\operatorname{mod}\,s)\) and hence \(xy\parallel xz\). Thus we have \(x(y\lor z)=x\top=\top=xy\lor xz\). If \(a^{j}=y\leq z=a^{k}\), we have \(k=j+ns\) for some \(0\leq n\leq\lfloor(r+s-1-j)/s\rfloor\); we will show that \(xy=a^{[i+j]}\leq a^{[i+j+ns]}=xz\). This is true since for \(\ell=i+j\), we have \([\ell+ns]=[\ell]+ms\), where \(m=n\) if \(\ell+ns<r+s\) and \(m=([\ell+ns]-[\ell])/s\) if \(\ell+ns\geq r+s\).

Figure 4. The two URLs based on a finite cyclic monoid

The (commutative) residuated lattice based on \(\mathbf{R_{M}}\) is compact, so we have \(\bot\to x=\top=x\to\top\), \(\top\to y=\bot\), \(z\to\bot=\bot\) for all \(x\in R_{M}\), \(y\neq\top\), \(z\neq\bot\).
Also, the remaining implications can be easily calculated to be as follows: \[a^{i}\to a^{j}=\begin{cases}\bot&\text{if $j<i\leq r$ or $j<r\leq i\leq r+s-1$}\\ a^{j-i}&\text{if $i\leq j<r$}\\ a^{j-i+\lfloor\frac{r+s-1+i-j}{s}\rfloor s}&\text{if $i<r\leq j\leq r+s-1$}\\ a^{k}&\text{if $r\leq i,j,k\leq r+s-1$ and $a^{i}a^{k}=a^{j}$.}\end{cases}\] In particular, the subsemigroup \(\{a^{r},\dots,a^{r+s-1}\}\) is closed under implication, but \(M\) is not. It is easy to see that if we impose the dual order on the elements of \(\mathbf{M}\) instead, then we can obtain a different unilinear residuated lattice; see Figure 4(right). Residuation in this second example works differently: \[a^{i}\to a^{j}=\begin{cases}a^{j-i}&\text{if $i\leq j\leq r+s-1$}\\ a^{j-i+\lceil\frac{i-j}{s}\rceil s}&\text{if $j<i\leq r+s-1$}\end{cases}\] In this case, \(M\) is closed under implication, but \(\{a^{r},\dots,a^{r+s-1}\}\) is not. **Remark 6.5**.: Actually, we can prove that given a finite cyclic monoid \(M\), these are the only two ways in which \(M\cup\{\bot,\top\}\) can be the monoid reduct of a compact unilinear residuated lattice. Suppose \(M\cup\{\bot,\top\}\) is the monoid reduct of a compact URL \(\mathbf{R}\). Let \(a^{i}\) and \(a^{j}\) be distinct group elements in \(M\). If \(a^{i}<a^{j}\), then \(e=a^{i}a^{k}<a^{j}a^{k}\), where \(e\) is the identity for the group elements in \(M\) and \(a^{k}\) is the inverse of \(a^{i}\) in the group (the inequality is strict because multiplication by the invertible element \(a^{k}\) is injective on the group elements). Then \(e<a^{j}a^{k}<(a^{j}a^{k})^{2}<\cdots\), so \(M\) contains an infinite ascending chain, contradicting the fact that \(M\) is finite. Thus the group elements in \(M\) are pairwise incomparable. We also observe that given \(0\leq i<j<r+s\), (*) \[\begin{split}& a^{i}<a^{j}\text{ iff \ for all }0\leq k\leq i,\,a^{i-k}<a^{j-k}\\ & a^{j}<a^{i}\text{ iff \ for all }0\leq k\leq i,\,a^{j-k}<a^{i-k}\end{split}\] The backward direction is trivial, so we just show the forward direction. Given \(0\leq i<j<r+s\) such that \(a^{i}<a^{j}\) and \(0\leq k\leq i\), if \(a^{i-k}\parallel a^{j-k}\), then \(\top=a^{k}\top=a^{k}(a^{i-k}\lor a^{j-k})=a^{i}\lor a^{j}\), so \(a^{i}\parallel a^{j}\), a contradiction; if \(a^{i-k}>a^{j-k}\), then \(a^{i}>a^{j}\) since multiplication is order-preserving and \(a^{i}\) is distinct from \(a^{j}\). Finally, we know \(1\equiv e\), since otherwise we would have \(\top=e(1\lor e)=e\lor e^{2}=e\), a contradiction. Now let \(t\) be the smallest positive integer such that \(a^{t}\equiv 1\), if such an integer exists, and set \(t=0\) otherwise. If \(t=0\), then by (*), \(a^{i}\parallel a^{j}\) for all \(0\leq i<j<r+s\); otherwise \(1\equiv a^{j-i}\) with \(j-i>0\), a contradiction. In particular we have \(e=1\) in this case, so \(M\) is a group and \(\mathbf{R}\) is based on \(\mathbf{M}_{X}\). Now we assume \(t>0\). If \(1<a^{t}\), then we have \(a^{r}\leq a^{r+t}\) and both of them are group elements in \(M\). Since all group elements are pairwise incomparable, we know \(a^{r}=a^{r+t}\), so \(t=s\). Since \(s=t\) is the smallest integer such that \(1<a^{s}\), we know \(1\parallel a^{k}\) for all \(0<k<s\), thus by (*) \(a^{k}\parallel a^{l}\) for all \(0\leq k\neq l\leq s-1\). Since \(1<a^{s}\), we have \(1<a^{s}<a^{2s}<\cdots<a^{ms}\), where \(ms<r+s\leq(m+1)s\). Hence \(a^{i}<a^{j}\) iff \(1<a^{j-i}\) iff \(j=i+ns\) for some \(n\in\mathbb{Z}^{+}\), so \(a^{i}\leq a^{j}\) iff \(j=i+ns\) for some \(n\in\mathbb{N}\) and \(\mathbf{R}\) is of the form shown in Figure 4(left). Similarly we can prove that \(\mathbf{R}\) is of the form shown in Figure 4(right) if \(a^{t}<1\).
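For concreteness, here is a small instance of the construction of this subsection; the values of the index and period are chosen only for illustration and are not used elsewhere. Take \(r=2\) and \(s=3\), so \(M=\{1,a,a^{2},a^{3},a^{4}\}\) with \(a^{5}=a^{2}\); the group part is \(\{a^{2},a^{3},a^{4}\}\cong\mathbb{Z}_{3}\) with identity \(a^{3}\). In \(\mathbf{R}_{\mathbf{M}}\) with the order of Figure 4(left), the chains between the bounds are \(1<a^{3}\), \(a<a^{4}\) and \(\{a^{2}\}\). A sample multiplication and a sample check of distributivity over an incomparable join are \[a\cdot a^{4}=a^{[5]}=a^{2},\qquad a(a^{3}\vee a^{4})=a\top=\top=a^{4}\vee a^{2}=aa^{3}\vee aa^{4},\] and for the residual we get \(a\to a^{3}=a^{2}\): indeed \(a\cdot a^{2}=a^{3}\leq a^{3}\), while \(ac\not\leq a^{3}\) for every other \(c\in M\cup\{\top\}\); this agrees with the third case of the displayed formula, which gives the exponent \(j-i+\lfloor\frac{r+s-1+i-j}{s}\rfloor s=2+0\cdot 3=2\).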
From a semidirect product of a residuated chain and a cancellative monoid; monoid extensions with 2-cocycles We first provide a general construction of compact residuated lattices and then show that under certain assumptions a compact residuated lattice is exactly of this form. Let \(\mathbf{A}\) be a residuated chain, \(\mathbf{K}\) a cancellative monoid and \(\varphi:\mathbf{K}\to\mathbf{ResEnd}(\mathbf{A})\) a monoid homomorphism, where \(\mathbf{ResEnd}(\mathbf{A})\) is the monoid of residuated maps on the chain \((A,\leq)\) which are also endomorphisms of the monoid \((A,\cdot,1)\). If \(\varphi,\psi\in\mathbf{ResEnd}(\mathbf{A})\) with residuals \(\varphi^{*}\) and \(\psi^{*}\), then \((\psi\circ\varphi)(a)\leq b\) iff \(\varphi(a)\leq\psi^{*}(b)\) iff \(a\leq(\varphi^{*}\circ\psi^{*})(b)\) for all \(a,b\in A\); so \(\psi\circ\varphi\) is also residuated. Thus, \(\mathbf{ResEnd}(\mathbf{A})\) is a submonoid of \(\mathbf{End}(\mathbf{A})\). Consequently, the semidirect product \(\mathbf{A}\rtimes_{\varphi}\mathbf{K}\) of the monoid reduct of \(\mathbf{A}\) and \(\mathbf{K}\) with respect to \(\varphi\) is also a monoid with multiplication given by \[(a_{1},k_{1})\cdot(a_{2},k_{2})=(a_{1}\varphi_{k_{1}}(a_{2}),k_{1}k_{2}),\] for all \((a_{1},k_{1}),(a_{2},k_{2})\in A\times K\), and identity \((1_{\mathbf{A}},1_{\mathbf{K}})\). We define an order on \(\mathbf{A}\rtimes_{\varphi}\mathbf{K}\) by: for all \((a_{1},k_{1}),(a_{2},k_{2})\in A\times K\), \[(a_{1},k_{1})\leq(a_{2},k_{2})\text{ if and only if }k_{1}=k_{2}\text{ and }a_{1}\leq a_{2}.\] Also, we extend the multiplication and order of \(\mathbf{A}\rtimes_{\varphi}\mathbf{K}\) to \(R=(A\times K)\cup\{\top,\bot\}\) by: \(\bot\leq x\leq\top\), \(\bot x=x\bot=\bot\) and \(\top y=y\top=\top\) for all \(x\in R\), \(y\neq\bot\). It is clear that this defines a lattice order; see Figure 5. We denote by \(\mathbf{A}\rtimes_{\varphi}^{b}\mathbf{K}\) the resulting bounded lattice-ordered monoid. **Theorem 6.6**.: _If \(\mathbf{A}\) is a residuated chain, \(\mathbf{K}\) is a cancellative monoid and \(\varphi:\mathbf{K}\to\mathbf{ResEnd}(\mathbf{A})\) is a monoid homomorphism, then \(\mathbf{A}\rtimes_{\varphi}^{b}\mathbf{K}\) is a residuated lattice._ The proof of the above theorem follows from a more general construction. Given a monoid \(\mathbf{K}\), a totally-ordered monoid \(\mathbf{A}\) and a map \(\varphi:\mathbf{K}\to\mathbf{ResEnd}(\mathbf{A})\), then a function \(f:K\times K\to A\) is called a \(2\)_-cocycle_ with respect to \(\mathbf{K},\mathbf{A},\varphi\), if it satisfies the following conditions: 1. \(f(k_{1},k_{2})\) is invertible, for all \(k_{1},k_{2}\in K\). Figure 5. A URL based on a semidirect product 2. \(f(k,1)=f(1,k)=1\), for all \(k\in K\). 3. \(\varphi_{1_{\mathbf{K}}}=\operatorname{id}_{\mathbf{A}}\) and \(\varphi_{k_{1}k_{2}}(a)=f(k_{1},k_{2})\cdot\varphi_{k_{1}}\varphi_{k_{2}}(a) \cdot f(k_{1},k_{2})^{-1}\) for all \(k_{1},k_{2}\in K\) and \(a\in A\). 4. \(f(k_{1},k_{2}k_{3})\varphi_{k_{1}}(f(k_{2},k_{3}))=f(k_{1}k_{2},k_{3})f(k_{1}, k_{2})\), for \(k_{1},k_{2},k_{3}\in K\). 
Now, given a cancellative monoid \(\mathbf{K}\), a residuated chain \(\mathbf{A}\), a map \(\varphi:K\to\mathbf{ResEnd}(\mathbf{A})\) and a \(2\)-cocycle \(f:K\times K\to A\), we define multiplication on \(A\times K\) by \[(a_{1},k_{1})\cdot(a_{2},k_{2})=(a_{1}\varphi_{k_{1}}(a_{2})f(k_{1},k_{2})^{-1 },k_{1}k_{2})\] Also, we extend the multiplication to \(R=A\times K\cup\{\bot,\top\}\) by making \(\bot\) absorbing for \(R\) and \(\top\) absorbing for \(R\setminus\{\bot\}\), and we define a lattice ordering \(\leq\) by: for all \(a,a_{1},a_{2}\in A\) and \(k,k_{1},k_{2}\in K\), \(\bot=\bot<(a,k)<\top=\top\) and \[(a_{1},k_{1})\leq(a_{2},k_{2})\text{ iff }a_{1}\leq_{\mathbf{A}}a_{2}\text{ and }k_{1}=k_{2}.\] We denote the resulting algebra by \(\mathbf{R}_{\varphi,f}\). **Theorem 6.7**.: _If \(\mathbf{K}\) is a cancellative monoid, \(\mathbf{A}\) is a residuated chain, \(\varphi:\mathbf{K}\to\mathbf{ResEnd}(\mathbf{A})\) is a map, and \(f:K\times K\to A\) is a \(2\)-cocycle with respect to \(\mathbf{K}\), \(\mathbf{A}\) and \(\varphi\), then \(\mathbf{R}_{\varphi,f}\) is the reduct of a residuated lattice._ Proof.: In the following we use \(\mathbf{R}\) for \(\mathbf{R}_{\varphi,f}\) and \(M\) for \(A\times K\). Clearly, \(M\) is closed under multiplication and \((1,1)\) is the identity. Also, \[(a_{1},k_{1})(a_{2},k_{2})\cdot(a_{3},k_{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2})f(k_{1},k_{2})^{-1},k_{1}k_{2})\cdot (a_{3},k_{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2})f(k_{1},k_{2})^{-1}\varphi_{k_{1}k_{ 2}}(a_{3})f(k_{1}k_{2},k_{3})^{-1},k_{1}k_{2}\cdot k_{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2})f(k_{1},k_{2})^{-1}\cdot f(k_{1},k_{ 2})\varphi_{k_{1}}\varphi_{k_{2}}(a_{3})f(k_{1},k_{2})^{-1}\cdot f(k_{1}k_{2},k_{3})^{-1},\] \[k_{1}k_{2}\cdot k_{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2})\varphi_{k_{1}}\varphi_{k_{2}}(a_{3} )f(k_{1},k_{2})^{-1}f(k_{1}k_{2},k_{3})^{-1},k_{1}k_{2}\cdot k_{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2}\varphi_{k_{1}}\varphi_{k_{2}}(a_{3}) \varphi_{k_{1}}(f(k_{2},k_{3})^{-1})f(k_{1},k_{2}k_{3})^{-1},k_{1}k_{2}\cdot k _{3})\] \[= (a_{1}\varphi_{k_{1}}(a_{2}\varphi_{k_{2}}(a_{3})f(k_{2},k_{3})^{- 1})f(k_{1},k_{2}k_{3})^{-1},k_{1}\cdot k_{2}k_{3})\] \[= (a_{1},k_{1})\cdot(a_{2}\varphi_{k_{2}}(a_{3})f(k_{2},k_{3})^{-1},k_{2}k_{3})\] \[= (a_{1},k_{1})\cdot(a_{2},k_{2})(a_{3},k_{3})\] where we used the identities \[\varphi_{k_{1}k_{2}}(a)=f(k_{1},k_{2})\cdot\varphi_{k_{1}}\varphi_ {k_{2}}(a)\cdot f(k_{1},k_{2})^{-1}\] \[f(k_{1},k_{2}k_{3})\varphi_{k_{1}}(f(k_{2},k_{3}))=f(k_{1}k_{2},k _{3})f(k_{1},k_{2})\] and the assumption that \(\varphi_{k}\) is an endomorphism. Therefore \(\mathbf{M}=(M,\cdot,(1,1))\) is a monoid. Since both \(\top\) and \(\bot\) are absorbing elements for \(\mathbf{M}\) and \(\top\bot=\bot\top=\bot\), associativity holds on \(\mathbf{R}\). We now prove that multiplication is order-preserving: \(y\leq z\implies(xy\leq xz\) and \(yx\leq zx)\) for all \(x,y,z\in R\). If \(y=z\) or \(x,y,z\) is \(\bot\) or \(\top\), then it it easy to see that the implication holds; so we assume that \(\bot<x<\top\) and \(\bot<y<z<\top\). Also, we assume that \(x=(a_{1},k_{1})\), \(y=(a_{2},k_{2})\) and \(z=(a_{3},k_{2})\) with \(a_{2}<a_{3}\). 
Using the order preservation of \(\varphi_{k_{1}}\) (it is a residuated map) and of multiplication in \(\mathbf{A}\), we get \[(a_{1},k_{1})(a_{2},k_{2}) =(a_{1}\varphi_{k_{1}}(a_{2})f(k_{1},k_{2})^{-1},k_{1}k_{2})\] \[\leq(a_{1}\varphi_{k_{1}}(a_{3})f(k_{1},k_{2})^{-1},k_{1}k_{2})\] \[=(a_{1},k_{1})(a_{3},k_{2})\] \[(a_{2},k_{2})(a_{1},k_{1}) =(a_{2}\varphi_{k_{2}}(a_{1})f(k_{2},k_{1})^{-1},k_{2}k_{1})\] \[\leq(a_{3}\varphi_{k_{2}}(a_{1})f(k_{2},k_{1})^{-1},k_{2}k_{1})\] \[=(a_{3},k_{2})(a_{1},k_{1})\] Next we show that the sets \(x\backslash\!\!\backslash z\) and \(z/\!\!/x\) have maximum elements for all \(x,z\in R\). By Remark 1.4, we know \(\bot\backslash\!\!\backslash z=z/\!\!/\bot=x\backslash\top=\top/\!\!/x=R\) for all \(x,z\in R\), so the maximum element of all of these sets is \(\top\). Also, by construction, \(x\backslash\!\!\backslash\bot=\bot/\!\!/x=\top\backslash\!\!\backslash z=z/\! \!/\top=\{\bot\}\) for all \(x\in R\setminus\{\bot\}\) and \(z\in R\setminus\{\top\}\), so the maximum for all these sets is \(\bot\). We now assume that \(\bot<x,z<\top\) and that \(x=(a,k)\) and \(z=(a^{\prime},k^{\prime})\) for some \((a,k),(a^{\prime},k^{\prime})\in A\times K\). For all \((a_{1},k_{1}),(a_{2},k_{2})\in x\backslash\!\!\backslash z\), we have \((a\varphi_{k}(a_{1})f(k,k_{1})^{-1},kk_{1})=(a,k)(a_{1},k_{1})\leq(a^{\prime},k ^{\prime})\) and \((a\varphi_{k}(a_{2})f(k,k_{2})^{-1},kk_{2})=(a,k)(a_{2},k_{2})\leq(a^{\prime}, k^{\prime})\), so \(kk_{1}=k^{\prime}=kk_{2}\) and \(k_{1}=k_{2}\), by the cancellativity of \(\mathbf{K}\). Since, \(\mathbf{A}\) is a chain, we get that \((a_{1},k_{1})\) and \((a_{2},k_{2})\) are comparable; hence \(x\backslash\!\!\backslash z\) is a chain. For all \((a^{\prime\prime},k^{\prime\prime})\), we have that \((a^{\prime\prime},k^{\prime\prime})\in x\backslash\!\!\backslash z\) iff \((a,k)(a^{\prime\prime},k^{\prime\prime})\leq(a^{\prime},k^{\prime})\) iff \((a\varphi_{k}(a^{\prime\prime})f(k,k^{\prime\prime})^{-1},kk^{\prime\prime}) \leq(a^{\prime},k^{\prime})\) iff \((a\varphi_{k}(a^{\prime\prime})f(k,k^{\prime\prime})^{-1}\leq a^{\prime}\) and \(k^{\prime}=kk^{\prime\prime})\). Since multiplication is residuated, \(\varphi_{k}\) is residuated, say with residual \(\varphi_{k}^{*}\), and \(f(k,k^{\prime\prime})\) is invertible, we have: \(a\varphi_{k}(a^{\prime\prime})f(k,k^{\prime\prime})^{-1}\leq a^{\prime}\) iff \(\varphi_{k}(a^{\prime\prime})\leq a\backslash\!\!\backslash a^{\prime}f(k,k^{ \prime\prime})\) iff \(a^{\prime\prime}\leq\varphi_{k}^{*}(a\backslash_{\mathbf{A}}a^{\prime}f(k,k^{ \prime\prime}))\). Therefore, we have \((a^{\prime\prime},k^{\prime\prime})\in x\backslash\!\!\backslash z\) iff \((a^{\prime\prime},k^{\prime\prime})\leq(\varphi_{k}^{*}(a\backslash_{\mathbf{A}}a^ {\prime}f(k,k^{\prime\prime})),k^{\prime\prime})\). Consequently, \(\max(x\backslash\!\!\backslash z)\) exists and it is one of the elements \(\bot,(\varphi_{k}^{*}(a\backslash_{\mathbf{A}}a^{\prime}f(k,k^{\prime\prime})),k ^{\prime\prime}),\top\). Likewise, \(\max(z/\!\!/x)\) is one of the elements \(\bot,(a^{\prime}f(k^{\prime\prime},k)/_{\mathbf{A}}\varphi_{k^{\prime\prime}}(a ),k^{\prime\prime}),\top\). By Corollary 1.2, \(\mathbf{R}_{\varphi,f}\) is the reduct of a compact residuated lattice. 
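As a sanity check on Theorem 6.6 (equivalently, Theorem 6.7 with the trivial 2-cocycle), the following sketch instantiates the construction with data of our own choosing: \(\mathbf{A}\) is the residuated chain \((\mathbb{Z},+,0,\leq)\) with \(a\backslash_{\mathbf{A}}b=b-a\), \(\mathbf{K}\) is the cancellative monoid \((\mathbb{N},+,0)\), and \(\varphi_{k}(a)=2^{k}a\), a residuated endomorphism of \(\mathbf{A}\) with residual \(\varphi_{k}^{*}(b)=\lfloor b/2^{k}\rfloor\). The left division follows the formula derived in the proof, and the residuation law \(xz\leq y\) iff \(z\leq x\backslash y\) is checked by brute force on a finite sample.

```python
from itertools import product

# Bounds of the unilinear lattice; all other elements are pairs (a, k) in Z x N.
BOT, TOP = "bot", "top"

def phi(k, a):                       # phi_k(a) = 2^k * a   (our choice of endomorphism)
    return (2 ** k) * a

def phi_star(k, b):                  # its residual: the largest a with 2^k * a <= b
    return b // (2 ** k)

def mult(x, y):                      # (a1, k1)(a2, k2) = (a1 + phi_k1(a2), k1 + k2)
    if BOT in (x, y):
        return BOT
    if TOP in (x, y):
        return TOP
    (a1, k1), (a2, k2) = x, y
    return (a1 + phi(k1, a2), k1 + k2)

def leq(x, y):                       # pairs are comparable only within the same K-component
    if x == BOT or y == TOP:
        return True
    if x == TOP or y == BOT:
        return False
    return x[1] == y[1] and x[0] <= y[0]

def ldiv(x, y):                      # x \ y, following the formula in the proof of Theorem 6.7
    if x == BOT or y == TOP:
        return TOP
    if x == TOP or y == BOT:
        return BOT
    (a1, k1), (a2, k2) = x, y
    if k2 < k1:                      # k2 is not of the form k1 + k with k in N
        return BOT
    return (phi_star(k1, a2 - a1), k2 - k1)

# Residuation law  x*z <= y  iff  z <= x\y  on a finite sample of the algebra.
sample = [BOT, TOP] + [(a, k) for a in range(-4, 5) for k in range(3)]
assert all(leq(mult(x, z), y) == leq(z, ldiv(x, y))
           for x, y, z in product(sample, repeat=3))
print("residuation law holds on the sample")
```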
So \(\mathbf{R}_{\varphi,f}\) is the reduct of a compact residuated lattice, which we will also denote by \(\mathbf{R}_{\varphi,f}\) and whose divisions are given by \[x\backslash y =\begin{cases}\bot&\text{if $x=(a_{1},k_{1}),y=(a_{2},k_{2})$ and $k_{2}\notin k_{1}K$}\\ (\varphi_{k_{1}}^{*}(a_{1}\backslash_{\mathbf{A}}a_{2}f(k_{1},k)),k)&\text{if $x=(a_{1},k_{1}),y=(a_{2},k_{1}k)$} \end{cases}\] \[y/x =\begin{cases}\bot&\text{if $x=(a_{1},k_{1}),y=(a_{2},k_{2})$ and $k_{2}\notin Kk_{1}$}\\ (a_{2}f(k,k_{1})/_{\mathbf{A}}\varphi_{k}(a_{1}),k)&\text{if $x=(a_{1},k_{1}),y=(a_{2},kk_{1})$} \end{cases}\] and the standard divisions involving \(\bot\) and \(\top\) are given by Remark 1.4. Theorem 6.6 follows as the special case where the \(2\)-cocycle is trivial, thus implying that \(\varphi\) is a monoid homomorphism. **Corollary 6.8**.: _If \(\mathbf{K}\) is a cancellative monoid, \(\mathbf{A}\) is a residuated chain, \(\varphi:\mathbf{K}\to\mathbf{ResEnd}(\mathbf{A})\) is a map, and \(f:K\times K\to A\) is the trivial \(2\)-cocycle with respect to \(\mathbf{K}\), \(\mathbf{A}\) and \(\varphi\), then \(\varphi\) is a homomorphism and \(\mathbf{R}_{\varphi,f}=\mathbf{A}\rtimes_{\varphi}^{b}\mathbf{K}\)._ In particular, when \(\varphi\) is trivial we get \(\mathbf{A}\times^{b}\mathbf{K}\), where \(\mathbf{A}\) is a residuated chain and \(\mathbf{K}\) is a cancellative monoid. Note that the examples of section 6.1 are not embeddable into a residuated lattice of the form \(\mathbf{A}\times^{b}\mathbf{K}\). For example, consider the URL \(\mathbf{R}\) where \(R=\{\bot,1,a,a^{2},\top\}\) with \(a^{3}=a\) and \(1<a^{2}\). If \({\bf R}\) were embeddable then we would have \(1\mapsto(1,1)\), \(a\mapsto(a_{1},k)\), \(a^{2}\mapsto(a_{1}^{2},k^{2})\) and \(a^{3}\mapsto(a_{1}^{3},k^{3})\). So, \((1,1)<(a_{1}^{2},k^{2})\) implies \(1<a_{1}^{2}\) and \(k^{2}=1\); thus \(1<a_{1}\) and \(k=1\). But then \(a_{1}\leq a_{1}^{2}\leq a_{1}^{3}=a_{1}\), so \(a_{1}^{2}=a_{1}\), hence \((a_{1}^{2},k^{2})=(a_{1},k)\), a contradiction. Even though not all compact URLs are of the form \({\bf R}_{\varphi,f}\), we show that this holds when the comparability relation on \(R\setminus\{\bot,\top\}\) is an _admissible_ congruence and the chain of \(1\) is _cancellative with respect to_ the factor monoid. We say that the congruence \(\equiv\) on \({\bf M}\) is _admissible_ if \(x[1]_{=}=[x]_{=}=[1]_{=}x\), for all \(x\in M\). Also, we say \({\bf H}\) is \({\bf K}\)_-cancellative_ if there exists a selection of representatives \(\bar{\phantom{\}}{}^{-}:K\to M\) (i.e., for all \(x\in M\), if \(\overline{k}\equiv x\) then \(x\in k\)) satisfying \(\overline{1_{\bf K}}=1_{\bf M}\) and the left and right multiplications by \(\overline{k}\) are injective on \(H\). The terminology \({\bf K}\)-cancellative and \(2\)-cocycle come from [11]. **Proposition 6.9**.: _If \({\bf R}\) is a compact unilinear residuated lattice, the comparability relation \(\equiv\) is an admissible congruence of \({\bf M}\), where \(M=R\setminus\{\bot,\top\}\), and \({\bf H}\) is \({\bf K}\)-cancellative, where \(H=[1]_{=}\) and \({\bf K}={\bf M}/\)\(\equiv\), then \({\bf R}\cong{\bf R}_{\varphi,f}\) for some map \(\varphi:K\to{\bf ResAut}({\bf H})\) and \(2\)-cocycle \(f:K\times K\to H\) with respect to \({\bf H}\), \({\bf K}\) and \(\varphi\)._ Proof.: Since \({\bf H}\) is \({\bf K}\)-cancellative, there exists a selection of representatives \(\bar{\phantom{\}}{}^{-}:K\to M\). 
We denote by \(L_{x}\) and \(R_{x}\) the left and right multiplication by \(x\in M\), respectively. We know that for all \(k\in K\), the maps \(R_{\overline{k}},L_{\overline{k}}:H\to k\) are injective and since \(\equiv\) is an admissible congruence on \({\bf M}\) and \(H=[1]_{=}\), they are also surjective. So, for any \(k\in K\), the map \(\varphi_{k}:{\bf H}\to{\bf H}\) given by \(\varphi_{k}(h)=R_{\overline{k}}^{-1}L_{\overline{k}}(h)\) is a well-defined bijection on \(H\); hence \(\overline{k}h=\varphi_{k}(h)\overline{k}\). Note that \[\varphi_{k}(h_{1}h_{2})\overline{k} =\overline{k}\cdot h_{1}h_{2}\] \[=\overline{k}h_{1}\cdot h_{2}\] \[=\varphi_{k}(h_{1})\overline{k}\cdot h_{2}\] \[=\varphi_{k}(h_{1})\cdot\overline{k}h_{2}\] \[=\varphi_{k}(h_{1})\cdot\varphi_{k}(h_{2})\overline{k}\] \[=\varphi_{k}(h_{1})\varphi_{k}(h_{2})\cdot\overline{k}.\] Since \({\bf H}\) is \({\bf K}\)-cancellative, we have \(\varphi_{k}(h_{1}h_{2})=\varphi_{k}(h_{1})\varphi_{k}(h_{2})\). Now suppose \(h_{1}\leq h_{2}\) for some \(h_{1},h_{2}\in H\). Since \({\bf R}\) is residuated, we get \[\varphi_{k}(h_{1})\leq\varphi_{k}(h_{2}) \text{iff }R_{\overline{k}}^{-1}L_{\overline{k}}(h_{1})\leq R_{\overline{k}}^{-1}L_{\overline{k}}(h_{2})\] \[\text{iff }R_{\overline{k}}(R_{\overline{k}}^{-1}L_{\overline{k}}(h_{1}))\leq L_{\overline{k}}(h_{2})\] \[\text{iff }L_{\overline{k}}(h_{1})\leq L_{\overline{k}}(h_{2}).\] It follows from the order-preservation of \(L_{\overline{k}}\) that \(\varphi_{k}\) is order-preserving. So \(\varphi_{k}\) is an automorphism of the totally-ordered monoid \({\bf H}\), and \(\overline{1_{\bf K}}=1_{\bf H}\) yields \(\varphi_{1}=\operatorname{id}_{\bf H}\). Since \(\equiv\) is admissible on \({\bf M}\) and \({\bf K}={\bf M}/\)\(\equiv\), we have \[{\bf H}\overline{k_{1}k_{2}}=k_{1}k_{2}={\bf H}\overline{k_{1}}{\bf H}\overline{k_{2}}={\bf H}\overline{k_{1}}\,\overline{k_{2}}.\] Therefore there exist \(f(k_{1},k_{2})\) and \(g(k_{1},k_{2})\) in \(H\) such that \[\overline{k_{1}k_{2}}=f(k_{1},k_{2})\overline{k_{1}}\,\overline{k_{2}},\qquad \overline{k_{1}}\,\overline{k_{2}}=g(k_{1},k_{2})\overline{k_{1}k_{2}}\] for all \(k_{1},k_{2}\in K\). Since \({\bf H}\) is \({\bf K}\)-cancellative, it follows that \(f\) and \(g\) are well-defined functions from \(K\times K\) to \(H\). Moreover, since \(f(k_{1},k_{2})g(k_{1},k_{2})=g(k_{1},k_{2})f(k_{1},k_{2})=1\) for all \(k_{1},k_{2}\in K\), we get that \(f(k_{1},k_{2})\) and \(g(k_{1},k_{2})\) are invertible. By definition, we have \[\overline{k_{2}}=f(1_{{\bf K}},k_{2})\overline{1_{{\bf K}}}\,\overline{k_{2}},\qquad\overline{k_{1}}=f(k_{1},1_{{\bf K}})\overline{k_{1}}\,\overline{1_{{\bf K}}}.\] Again by the \({\bf K}\)-cancellativity of \({\bf H}\), we get \(f(1_{{\bf K}},k)=f(k,1_{{\bf K}})=1_{{\bf H}}\) for all \(k\in K\). 
Also, by the definition of \(f\), we know \[L_{\overline{k_{1}}k_{2}}=L_{f(k_{1},k_{2})}L_{\overline{k_{1}}}L_{\overline{ k_{2}}},\qquad R_{\overline{k_{1}}k_{2}}=R_{\overline{k_{2}}}R_{\overline{k_{1}}} R_{f(k_{1},k_{2})}.\] Thus by the \({\bf K}\)-cancellativity of \({\bf H}\) we have \[\varphi_{k_{1}k_{2}} =R_{\overline{k_{1}}k_{2}}^{-1}L_{\overline{k_{1}}k_{2}}\] \[=R_{f(k_{1},k_{2})}^{-1}R_{\overline{k_{1}}}^{-1}L_{\overline{k_{ 2}}}^{-1}L_{f(k_{1},k_{2})}L_{\overline{k_{1}}}L_{\overline{k_{2}}}\] \[=R_{f(k_{1},k_{2})}^{-1}L_{f(k_{1},k_{2})}R_{\overline{k_{1}}}^{- 1}L_{\overline{k_{1}}}R_{\overline{k_{2}}}^{-1}L_{\overline{k_{2}}}\] \[=R_{f(k_{1},k_{2})}^{-1}L_{f(k_{1},k_{2})}\varphi_{k_{1}}\varphi_{ k_{2}}\] for all \(k_{1},k_{2}\in K\). So we get \[\varphi_{k_{1}k_{2}}(h)=f(k_{1},k_{2})\cdot_{{\bf H}}\varphi_{k_{1}}\varphi_{ k_{2}}(h)\cdot_{{\bf H}}f(k_{1},k_{2})^{-1}\] for all \(h\in H\). Finally, we observe that \[\overline{k_{1}\cdot k_{2}k_{3}}=\overline{k_{1}k_{2}\cdot k_{3}}\] \[\text{iff }f(k_{1},k_{2}k_{3})\overline{k_{1}\,k_{2}k_{3}}=f(k_{1 }k_{2},k_{3})\overline{k_{1}k_{2}}\,\overline{k_{3}}\] \[\text{iff }f(k_{1},k_{2}k_{3})\overline{k_{1}}f(k_{2},k_{3}) \overline{k_{2}\,k_{3}}=f(k_{1}k_{2},k_{3})f(k_{1},k_{2})\overline{k_{1}}\, \overline{k_{2}}\cdot\overline{k_{3}}\] \[\text{iff }f(k_{1},k_{2}k_{3})\varphi_{k_{1}}(f(k_{2},k_{3})) \overline{k_{1}}\cdot\overline{k_{2}\,k_{3}}=f(k_{1}k_{2},k_{3})f(k_{1},k_{2} )\overline{k_{1}\,k_{2}}\cdot\overline{k_{3}}.\] So by the associativity of \({\bf K}\) and the \({\bf K}\)-cancellativity of \({\bf H}\), we get \[f(k_{1},k_{2}k_{3})\varphi_{k_{1}}(f(k_{2},k_{3}))=f(k_{1}k_{2},k_{3})f(k_{1}, k_{2})\] for all \(k_{1},k_{2},k_{3}\in K\). Therefore \(f\) is a 2-cocycle with respect to \({\bf H}\), \({\bf K}\) and \(\varphi\). Finally, we define the map \(\psi:{\bf R}\to{\bf R}_{\varphi,f}\), given by \(\psi(\perp)=\perp\), \(\psi(\top)=\top\) and \(\psi(x)=(h_{x},k_{x})\), where \(k_{x}=[x]_{\equiv}\) is the chain to which \(x\) belongs and \(h_{x}=R_{\overline{k_{x}}}^{-1}(x)\). Since \(\equiv\) is admissible, \({\bf H}\) is \({\bf K}\)-cancellative and \(H\) is totally-ordered, \(L_{\overline{k_{x}}}\) and \(R_{\overline{k_{x}}}\) are order isomorphisms between the sets \(H\) and \(k_{x}\), so \(\psi\) is well-defined. We will show that \(\psi\) is a residuated-lattice isomorphism. Suppose \(\psi(x)=\psi(y)\) for some \(x,y\in M\). Then \(k_{x}=k_{y}\) and \(h_{x}=h_{y}\), i.e., \(x\equiv y\) and \(R_{\overline{k_{x}}}^{-1}(x)=R_{\overline{k_{y}}}^{-1}(y)\). Since \(R_{\overline{k_{x}}}=R_{\overline{k_{y}}}\) is a bijection between \(H\) and \(k_{x}\), we have \(x=y\). For \((h,k)\in H\times K\), let \(x=h\overline{k}\). Since \(R_{\overline{k}}\) is a bijection, we know \(h=R_{\overline{k}}^{-1}(x)\), so \(\psi(x)=(h,k)\). Since \(\psi(\perp)=\perp\) and \(\psi(\top)=\top\) are uniquelly defined, \(\psi\) is a bijection between \(R\) and \(R_{\varphi,f}\). Since \(R_{\overline{k_{x}}}\) is an order isomorphism between \({\bf H}\) and the chain \(k_{x}\), \(x\leq_{{\bf R}}y\) iff \(k_{x}=k_{y}\) and \(R_{\overline{k_{x}}}^{-1}(x)\leq_{{\bf H}}R_{\overline{k_{y}}}^{-1}(y)\), hence \(x\leq_{{\bf R}}y\) iff \(\psi(x)\leq_{{\bf R}_{\varphi,f}}\psi(y)\) for all \(x,y\in M\). Since \(\psi(\perp)=\perp\) and \(\psi(\top)=\top\), \(\psi\) is a lattice isomorphism between \({\bf R}\) and \({\bf R}_{\varphi,f}\). 
Since \(k_{xy}=k_{x}\cdot_{{\bf K}}k_{y}\), we have \(\overline{k_{xy}}=\overline{k_{x}k_{y}}=f(k_{x},k_{y})\overline{k_{x}\,k_{y}}\), so for all \(x,y\in M\) \[\psi(xy)=(R_{\overline{k_{xy}}}^{-1}(xy),k_{xy})=(R_{\overline{k_{x}}}^{-1}R_{ \overline{k_{y}}}^{-1}(xy)f^{-1}(k_{x},k_{y}),k_{x}k_{y}).\] On the other hand, \[\psi(x)\psi(y)= (R_{\overline{k_{x}}}^{-1}(x),k_{x})(R_{\overline{k_{y}}}^{-1}(y),k _{y})\] \[= (R_{\overline{k_{x}}}^{-1}(x)\varphi_{k_{x}}(R_{\overline{k_{y}}}^ {-1}(y))f^{-1}(k_{x},k_{y}),k_{x}k_{y})\] \[= (\varphi_{k_{x}}(L_{\overline{k_{x}}}^{-1}(x))\varphi_{k_{x}}(R_{ \overline{k_{y}}}^{-1}(y))f^{-1}(k_{x},k_{y}),k_{x}k_{y})\] \[= (\varphi_{k_{x}}(L_{\overline{k_{x}}}^{-1}(x)R_{\overline{k_{y}}} ^{-1}(y))f^{-1}(k_{x},k_{y}),k_{x}k_{y})\] Since \[xy=L_{\overline{k_{x}}}L_{\overline{k_{x}}}^{-1}(x)\cdot R_{\overline{k_{y}}} R_{\overline{k_{y}}}^{-1}(y)=R_{\overline{k_{y}}}L_{\overline{k_{x}}}(L_{ \overline{k_{x}}}^{-1}(x)R_{\overline{k_{y}}}^{-1}(y)),\] we have that \[L_{\overline{k_{x}}}^{-1}(x)R_{\overline{k_{y}}}^{-1}(y)=L_{\overline{k_{x}}} ^{-1}R_{\overline{k_{y}}}^{-1}(xy).\] So \[R_{\overline{k_{x}}}^{-1}R_{\overline{k_{y}}}^{-1}(xy)=R_{\overline{k_{x}}}^ {-1}(L_{\overline{k_{x}}}L_{\overline{k_{x}}}^{-1})R_{\overline{k_{y}}}^{-1}( xy)=\varphi_{k_{x}}(L_{\overline{k_{x}}}^{-1}R_{\overline{k_{y}}}^{-1}(xy))= \varphi_{k_{x}}(L_{\overline{k_{x}}}^{-1}(x)R_{\overline{k_{y}}}^{-1}(y)),\] hence \[\psi(xy)=\psi(x)\psi(y).\] Since \(\mathbf{R}\) is compact, we know \(\psi(xy)=\psi(x)\psi(y)\) for all \(x,y\in R\). So \(\psi\) is a lattice-ordered monoid isomorphism. Since both of \(\mathbf{R}\) and \(\mathbf{R}_{\varphi,f}\) are residuated lattices, \(\psi\) is a lattice and monoid isomorphism, and the divisions are definable by the order and multiplication, we get that \(\psi\) is a residuated-lattice isomorphism.
2307.04346
Can Large Language Models Write Good Property-Based Tests?
Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for PBTs. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we investigate using modern LLMs to automatically synthesize PBTs using two prompting techniques. A key challenge is to rigorously evaluate the LLM-synthesized PBTs. We propose a methodology to do so considering several properties of the generated tests: (1) validity, (2) soundness, and (3) property coverage, a novel metric that measures the ability of the PBT to detect property violations through generation of property mutants. In our evaluation on 40 Python library API methods across three models (GPT-4, Gemini-1.5-Pro, Claude-3-Opus), we find that with the best model and prompting approach, a valid and sound PBT can be synthesized in 2.4 samples on average. We additionally find that our metric for determining soundness of a PBT is aligned with human judgment of property assertions, achieving a precision of 100% and recall of 97%. Finally, we evaluate the property coverage of LLMs across all API methods and find that the best model (GPT-4) is able to automatically synthesize correct PBTs for 21% of properties extractable from API documentation.
Vasudev Vikram, Caroline Lemieux, Joshua Sunshine, Rohan Padhye
2023-07-10T05:09:33Z
http://arxiv.org/abs/2307.04346v2
# Can Large Language Models Write Good Property-Based Tests? ###### Abstract Property-based testing (PBT), while an established technique in the software testing research community, is still relatively underused in real-world software. Pain points in writing property-based tests include implementing diverse random input generators and thinking of meaningful properties to test. Developers, however, are more amenable to writing documentation; plenty of library API documentation is available and can be used as natural language specifications for property-based tests. As large language models (LLMs) have recently shown promise in a variety of coding tasks, we explore the potential of using LLMs to synthesize property-based tests. We call our approach PBT-GPT, and propose three different strategies of prompting the LLM for PBT. We characterize various failure modes of PBT-GPT and detail an evaluation methodology for automatically synthesized property-based tests. PBT-GPT achieves promising results in our preliminary studies on sample Python library APIs in numpy, networkx, and datetime. ## I Introduction Property-based testing (PBT) is a powerful testing technique for testing properties of a program through random generation of inputs. Unlike traditional testing methods that rely on manually written test cases and examples, PBT uses automatic generation of a wide range of inputs that can invoke a diverse set of program behaviors. PBT was first popularized by the Quickcheck [1] library in Haskell, and has used to find a plethora of bugs in a variety of real-world software [2, 3, 4, 5]. Additional techniques have been built on top of PBT [6, 7, 8] and have demonstrated their potential in providing stronger testing for software. Despite its proven results and impact in the research community, PBT is not as widely adopted by open source and industry software developers. Using the Open Source Insights [9] dependency dataset, we find that only 222 out of 180,000 PyPI packages list the Python PBT library _Hypothesis_ as a dependency, despite it being a very popular project (6.7k+ stars on GitHub). Harrison et al. [10] conducted a series of interviews and detail a set of challenges faced by professional developers when attempting to use PBT in their software. Developers reported difficulties in (1) writing random data generators for inputs and (2) articulating and implementing properties that would meaningfully test their code. Furthermore, they describe the "critical mass problem" that PBT is still relatively unknown and unpopular among the software industry. While developers have been reticent to adopt PBT, the practice of _documenting code_ is widespread. Documentation for library API methods is fairly common for certain languages such as Python and contains valuable information about input parameters and properties of the output. An truncated version of the documentation for the numpy.cumsum API method can be seen in Figure 1. Recently, the use of pre-trained large language models (LLMs) for code generation has become increasingly popular [11, 12, 13]. LLMs have been effective at translating natural language specifications and instructions to concrete code [14, 15]. Additionally, LLMs have shown potential to improve existing automated unit test generation techniques [16], and even generate unit tests from scratch [17, 18]. In this paper, we investigate the potential of using LLMs to generate _property-based tests_ when provided API documentation. 
We believe that the documentation of an API method can assist the LLM in producing logic to generate random inputs for that method and deriving meaningful properties of the result to check. We can see the potential of using LLMs for PBT in Figure 2, which displays an LLM-generated property-based test for the numpy.cumsum method when provided the documentation in Figure 1. First, the logic for generating random values for the input parameters \(a\) and _axis_ is in lines 10-17. Then, the cumsum method is invoked on these arguments on line 20. Finally, lines 25-37 contain contain property assertions for the output cumsum_result. We specifically note that these properties assertions match natural language descriptions in the API documentation in Figure 1. The documentation specifies that "result has the same size as _a_", which has a direct translation to the assertion in line 30. Similarly, the specification that result has "the same shape as \(a\) if _axis_ is not None or \(a\) is a 1-d array" is checked conditionally as an assertion in lines 25-26. Finally, the property assertion shown in lines 35-37 checks that the last element of the result is equal to np.sum(a) if the array is not of float type. This assertion translates information from the notes section in the documentation into a useful property to check. While not a perfect property-based test, this example demonstrates the ability of LLMs to write logic for generating random inputs and derive meaningful property assertions from API documentation. In this paper, we propose an approach of applying LLMs to generate property-based tests; we call this _PBT-GPT_, since we chose to use GPT-4 as the underlying models in our implementation. We outline three different approaches for PBT-GPT, in which we sample the generator and properties _independently_, _consecutively_, and _together_. In preliminary exploration of the use of LLMs to generate PBT, we noticed a variety of different failure modes for PBT. We characterize these failure modes and propose a methodology to evaluate (1) the quality of the generator, and (2) the quality of the properties. We note this methodology could be applied to different forms of automated PBT generation. We report preliminary results using our proposed evaluation methodology on three Python library APIs. ## II Background ### _Property-based Testing_ Property-based testing [1] aims to probabilistically test a program by generating a large number of random inputs and checking whether the corresponding outputs of the program adhere to a set of desired properties. A property-based test can be defined as the following: given a function/method under test \(f\), an input space X, and a property \(P\), we want to validate that \(\forall x\in\mathrm{X}:P(x,f(x))\). Often, \(P\) comprises a collection of component properties, that is, \(P=p_{1}\wedge p_{2}\land\ldots p_{k}\). In practice, we are unable to enumerate all inputs in \(\mathrm{X}\). So, we write a _generator_ function _gen_ that produces a random input in \(\mathrm{X}\), i.e. \(x=\textit{gen}()\). Then we write a _parametrized test_\(T::X\rightarrow\{\textit{true},\textit{false}\}\) that returns \(P(x,f(x))\). The property is checked on many randomly generated values \(x\) and a violation of the property causes the test to fail. Although PBT cannot prove the absence of property violations, it is nevertheless an improvement over testing specific hard-coded inputs and outputs as is commonly done in unit testing. PBT is thus a form of _fuzz testing_. 
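The definition above translates almost directly into code: a bare-bones property-based testing loop (our sketch, independent of any PBT library) draws inputs from a generator, applies the function under test, and checks the property, returning the first counterexample it finds.

```python
import random

def run_pbt(gen, f, prop, trials=1000):
    """Check prop(x, f(x)) on `trials` randomly generated inputs x = gen()."""
    for _ in range(trials):
        x = gen()
        if not prop(x, f(x)):
            return x          # counterexample: a witness that the property is violated
    return None               # no violation found (which is not a proof of correctness)

# Example of ours: the built-in max should return an element of the list
# that is greater than or equal to every element.
gen = lambda: [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
prop = lambda xs, m: m in xs and all(e <= m for e in xs)
assert run_pbt(gen, max, prop) is None
```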
We next describe how our formal definition of a property-based test translates to property-based testing code in Hypothesis [7], a popular PBT library for Python. Suppose our function under test is the Python sorted function, which takes in a list as input and returns a sorted version. We would like to test the property that the elements of the sorted list are monotonically increasing. First, we must write our generator _gen_ that samples an input from the input space of lists. An example of such a generator Fig. 1: Truncated Numpy documentation for the numpy.cumsum API method. The documentation has natural language descriptions of properties about the result shape/size and additional information about the last element of the result. Fig. 2: A GPT-4 generated property based test for numpy.cumsum. The test first generates random integer arrays between size 1 and 20 and a random axis. Then, the API method under test np.cumsum is invoked on the randomly generated inputs. Finally, three properties are checked on the output array, all derived from information in the API documentation. All comments are also generated by GPT-4. is the generate_lists function in lines 4-8 of Figure 3. Hypothesis has a built-in set of sampling _strategies_ for various data structures. The lists and integers strategies in line 6 are used to randomly generate and return a Python integer list of size \(\geq 1\). Next, we must write the parametrized test \(T\) that takes in an input \(x\) and returns \(P(x,f(x))\), where \(f\) is the sorted function and \(P\) is the property that the elements of the sorted list are monotonically increasing. An example of such a parametrized test is the test_sorted_separate function seen in Figure 3. In line 12, the sorted function is invoked on the input lst. Then, lines 15-16 check the property \(P\) that elements of the sorted listed are increasing by using an _assertion_ statement. \(T\) will return _true_ if \(P(x,f(x))\) holds true, i.e. there is no assertion failure. Generally, if \(P\) were to consist of multiple component properties, it can be represented as a list of assertion statements in \(T\). Finally, to complete the property-based test, we must invoke our generator to sample random inputs and call the parametrized test on the input. This is done using the Hypothesis @given decorator, as seen in line 12. The decorator specifies that the input lst of our parametrized test test_sorted_separate should use generate_lists as the generator. Another style of writing a Hypothesis test is to include the generator inside the parametrized test, as seen in the function test_sorted_combined in Figure 3. At line 20, the @given(data()) decorator provides an object which can be used to sample random input data of unspecified type. Lines 22-24 act as the generator, using the same logic as the generate_lists function to a generate random integer list of with minimum size 1. Lines 25-27 use the method invocation and assertion statements as is in test_sorted_separate. The approach of including the generator in the parametrized test has particular advantages when the method under test has multiple input parameters that have dependencies with each other. In this scenario, each argument can be sequentially generated one at a time using generators that depend on previously generated arguments. While the property-based tests shown in Figures 3 are valid and will properly run, they are not necessarily the _best_ property-based tests for the sorted function. 
Perhaps the user would like validate the behavior of sorted on the empty list, which is not an input produced by our generator due to the min_size=1 constraint in lines 8 and 24. Similarly, the assertions in lines 15-16 and lines 26-27 do not capture all behavior of the sorted function. For instance, it does not check that lst and sortedlst share the same elements. We discuss these types of challenges more in Section IV. ### _Large Language Models_ Pre-trained large language models (LLMs) [19, 20, 21, 22, 15] are a class of neural networks with a huge number of parameters, trained on large corpora of text data. These models are typically trained in an _autoregressive_ manner--i.e., trained to predict the next token in a sequence--which allows them to be trained on a large volumes of unlabelled text. This extensive pre-training allows them to function as _one-shot_ or _zero-shot_ learners [20]. That is, these models can perform a variety of tasks when given only one example of the task, or a textual instruction of the tasks. The natural-language instructions, along with any additional input data, that are passed to the LLM are called the _prompt_[23]. The practice of creating prompts that allow the LLMs to effectively solve a target task is called _prompt engineering_. Further, a number of LLMs have been trained extensively on code [11, 24, 25]. Codex [11] starts from GPT-3 and was additionally trained on 160GB of code data; StarCoder [25] is a 15.5 billion parameter model trained on 1 trillion code tokens. These models, as well as more general-purpose LLMs, have been used for numerous software engineering tasks, including program synthesis [26, 12], program repair [27, 28, 29], code explanation [30], and test generation [16, 17, 18]. These techniques use the LLMs out-of-the-box, getting them to accomplish the tasks via prompt engineering alone. Like prior work, we use pre-trained language models and adapt them to our tasks only via prompt engineering. We discuss three different methods to construct these prompts in Section III. Fig. 3: Example property-based tests in Hypothesis for the Python sorted function to sort lists. The test_sorted_separate function uses a separate generator, whereas the function test_sorted_combined combines the generator and testing logic into one function. ## III The PBT-GPT Approach To synthesize a property-based test from the LLM, we first design a prompt that includes the API documentation and instructions to write a property-based test for the input method. We divide the process of synthesizing property-based tests into two main components: (1) synthesizing the generator function, and (2) synthesizing the parametrized test containing assertions for the properties. We begin by designing a high-level prompt template composed of the following parts: 1. System-level instructions stating that the LLM is an expert Python programmer. 2. User-level task instructions to review the API documentation and generate a PBT component (e.g. generator, properties, or both) using the Hypothesis library. 3. The input API method documentation, taking directly from the website. 4. The desired output format (e.g. using Hypothesis st.composite or st.data() depending on the task). This prompt template can be tuned for three different tasks: synthesizing the generator function, synthesizing the properties as assertion statements, and synthesizing both into a single parametrized test. 
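As a rough illustration of the template just described, the four parts can be assembled into a chat-style prompt along the following lines; the exact wording used by PBT-GPT is not reproduced here, so the instruction strings and the example file name below are our own paraphrase.

```python
def build_pbt_prompt(api_doc: str, task: str, output_format: str):
    """Assemble the four-part prompt template described above (wording is illustrative)."""
    system = "You are an expert Python programmer."
    task_instructions = {
        "generator": "Review the API documentation below and write a Hypothesis "
                     "generator function for the method's input object.",
        "properties": "Review the API documentation below and write assertion "
                      "statements for properties of the method's output.",
        "together": "Review the API documentation below and write a single Hypothesis "
                    "test that generates random inputs and asserts output properties.",
    }[task]
    user = "\n\n".join([task_instructions, api_doc, output_format])
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

# Hypothetical usage for the generator task (file name and output stub are illustrative):
# messages = build_pbt_prompt(open("find_cycle_doc.txt").read(), "generator",
#                             "Complete:\n@st.composite\ndef generate_graph(draw): ...")
```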
An example of a prompt following this design for the _generator_ task for the networkx.find_cycle method can be seen in Figure 4. The user-level task instructions direct the LLM to review the API documentation for find_cycle and write a function that generates random values of the networkx.Graph object. The output format uses the st.composite decorator (ref. line 4 of Figure 3) and provides the generator function signature. A similarly structured prompt is used to synthesize properties by changing the second task instructions. Rather than instructing the LLM to write a generator function, we instruct it to write property assertions using provided input and output variable names. To synthesize both the generator and properties in a single parametrized test, we include instructions for writing both the generator and properties and specify an output format using the st.data() decorator (ref. line 20 of Figure 3). Using this prompt design for individual PBT components, how can we generate a complete property-based test for an API method? We propose three methods of prompting the LLM to generate a property-based test, each named by the method in which the PBT components are generated: _independently_, _consecutively_, and _together_. Our three prompting approaches are broadly outlined in Figure 5. We next describe each in detail. #### Iii-1 Independently We prompt the language model twice, _independently_, to produce the generator function and property assertions as shown on the left in Figure 5. The generator prompt follows the structure shown in Figure 4, including task instructions for the LLM to write a generator function for a specific input object. The properties prompt uses different task instructions to write desired properties as assertions. Once both components are generated, we automatically insert boilerplate for the parametrized test that specifies the LLM-synthesized generator function in the @given decorator and invokes the API method on the input. We place the LLM-synthesized property assertions after the API method call. This follows the structure of the test shown in lines 12-16 of Figure 3, with a separate generator and parametrized test. #### Iii-2 Consecutively We prompt the language model _consecutively_, as shown in the middle of Figure 5. That is, we first prompt the LLM to produce a generator function for the input object. Then, we _continue the conversation_ with a follow-up prompt that instructs the LLM to write a parametrized test that uses the previously synthesized generator and contains assertions for any desired properties. This provides the LLM with the context of the generator, which may assist in producing meaningful properties. The generated property-based test has the same structure as that of independent prompting. #### Iii-3 Together Finally, we can sample the generator and the properties _together_ in a single test function using one prompt from the LLM, as shown on the right in Figure 5. The prompt includes instructions to write the input data generator and desired property assertions in one test function. The output format uses the st.data() decorator as seen in line 20 of Figure 3 so that the test function can dynamically generate input data and call property assertions on the output. ## IV Assessing and Improving Results Using our PBT-GPT methodology to prompt the LLM to synthesize property-based tests, how do we evaluate the quality of these generated tests? 
While the effectiveness of unit tests has been a well studied and topic for decades [31, 32, 33, 34], this is not the case for property-based tests. One difficulty in conducting these types of evaluations for PBT is the lack of readily available property-based tests for software. Thankfully, LLMs can provide us a method of automatically generating property-based tests for which we can design an evaluation methodology. Fig. 4: An example prompt for synthesizing the generator function of a networkx.Graph object. We propose a PBT evaluation methodology and metrics focusing on (1) the quality of the generator and (2) the quality of the properties. We include examples of inaccurate LLM-synthesized property-based tests for API methods in Python and discuss the issues that impact each of these qualities. All of our examples use the Hypothesis PBT library in Python. ### _Generator Quality_ #### Iv-A1 Generator Validity One type of incorrect behavior in a PBT generator is a simple validity issue in which a run-time error is encountered when the generator function is invoked. An example is seen in the LLM-produced generator for timedelta objects in the Python datetime module is shown in Figure 6. The generator function produces values for the timedelta object that may result in an OverflowError raised by the datetime.timedelta constructor when the magnitude of days exceeds 1,000,000. While this specific generator can still be used in a property-based test, it is possible for an automatically synthesized generator to _always_ result in a run-time error and thus be completely unusable. Thus, we need to be aware of the frequency at which a generator invocation may encounter a run-time error. To measure generator validity, we invoke the generator multiple times and record the percentage of invocations that do not result in any run-time errors. We found that ten GPT-4-synthesized generators for the timedelta object achieved an average of 99% validity across 10,000 executions. #### Iv-A2 Generator Diversity While high generator validity is essential for functional property-based tests, another important quality of a generator is its ability to produce _diverse_ inputs. If a generator can only produce a small subset of the input space, then the API method may not be properly tested. Figure 7 displays an LLM-synthesized generator for the networkx.Graph objects to be used as input to networkx.find_cycle method. We can observe that this generator is only capable of an undirected graph, as seen by the call to nx.Graph() in line 22. While this generator may produce a high number of _unique_ inputs, we are more interested in the input diversity with respect to the API method. The networkx.find_cycle method contains distinct logic to handle _directed_ graphs; since this generator only produces undirected graphs, this logic will not be tested. We thus evaluate generator diversity by measuring _coverage_ of the API method under test when invoked on the generated inputs. We found an average of 87.1% statements covered and 71.1% branches covered on the Fig. 5: Three different methods of generating the property-based test using the LLM based on the combination of the generator and the property assertions. Fig. 6: An example invalid datetime.timedelta generator produced by GPT-4. The datetime.timedelta constructor on line 15 raises an OverflowError when the absolute magnitude of days exceeds 1,000,000. find_cycle method over ten GPT-4-synthesized generators. 
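The two generator metrics described above can be estimated with a small harness along these lines. This is a sketch of ours rather than the paper's tooling; it assumes the LLM-synthesized generator has been wrapped as a plain zero-argument callable, and it uses the third-party coverage package for the diversity measurement.

```python
import coverage

def generator_validity(make_input, trials=10_000):
    """Fraction of generator invocations that finish without a run-time error."""
    ok = 0
    for _ in range(trials):
        try:
            make_input()
            ok += 1
        except Exception:
            pass
    return ok / trials

def generator_diversity(make_input, call_api, trials=1_000):
    """Collect statement/branch coverage of the API under test over generated inputs."""
    cov = coverage.Coverage(branch=True)
    cov.start()
    for _ in range(trials):
        try:
            call_api(make_input())
        except Exception:
            pass              # exceptions are not the concern of the diversity metric
    cov.stop()
    cov.save()
    return cov                # inspect with cov.report(), restricted to the module under test

# Hypothetical usage: generator_diversity(make_graph, lambda g: nx.find_cycle(g)),
# where make_graph wraps an LLM-synthesized networkx.Graph generator.
```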
The main gap in branch coverage was due to the fact that the majority of generators only constructed undirected graphs. ### _Property Quality_ #### Iv-B1 Property Validity We define a property as _invalid_ if it results in a run-time error unrelated to the property assertion. This may occur if the LLM synthesizes erroneous code, such as calling a nonexistent API method. The LLM-synthesized assertion on line 8 of Figure 8 contains a call to networkx.is_undirected_acyclic_graph, which does not exist in the networkx library. We measured the percentage of valid properties across 10 different synthesized property-based tests, and found that GPT-4 achieved 98% property validity for the networkx.find_cycle method. #### Iv-B2 Property Soundness LLMs may also synthesize a property that is _unsound_, i.e., there exists an input/output example that violates the property but is valid given the specification. Figure 9 provides an example of an LLM-synthesized property-based test for the numpy.cumsum method that contains an unsound property on line 6. The numpy.cumsum documentation specifies that for a given input array \(a\), the output should have "the same shape as \(a\) if axis is not None or \(a\) is a 1-d array". The synthesized property is unsound because it unconditionally checks whether the output and input shapes match. A randomly generated input of array([[0]]) produces an assertion failure when this test is run since the input shape is (1, 1) and the output shape is (1,). If we encounter an assertion failure from a property check during the test, how do we know whether it is due to an unsound property or due to a bug in the API implementation? Given an assertion failure, we assume that the likelihood of the LLM generating an unsound property is higher than the likelihood of a bug. Thus, we can capture the soundness of a property by measuring the frequency at which the assertion fails across multiple generated inputs. If an assertion fails on a large percentage of the inputs, it is most likely an unsound property. To report the soundness of property assertions, we run the property-based test multiple times and record the percentage of runs that result in assertion failures from the property. If the assertion fails in over 10% of the runs, then the property is flagged as unsound and we manually inspect the soundness of the property check. We ran ten GPT-4-synthesized property-based tests 10,000 times for the numpy.cumsum method and found that 68% of the properties were sound. 6 of the 10 synthesized property-based tests did not contain any unsound property assertions. The majority of the unsound properties contained an unconditional equality check between cumsum(a)[-1] and np.sum(a), which is not necessarily true for floating point values (ref. Figure 1). #### Iv-B3 Property Strength While property validity and soundness measure the correctness of our generated properties, the _strength_ of the property still needs to be measured. By _strength_, we mean the ability of a property to check interesting behavior of the program. Figure 10 displays a property-based test for the timedelta.total_seconds API method containing a _weak_ property on line 5. This property simply checks that t.total_seconds() is of float type. Fig. 8: A GPT-4 generated property-based test containing an invalid property assertion with an API call to a nonexistent method is_undirected_acyclic_graph on line 8. Fig. 
7: An example networkx.Graph generator produced by GPT-4 that lacks diversity since it only generates undirected nx.Graph objects and does not generate any directed graphs (e.g. nx.DiGraph). Fig. 9: A GPT-4 generated property-based test containing an unsound property for numpy.cumsum on line 6. While this property is valid and 100% sound, it does not test any of the core logic of the API method. We propose to use mutation score as a metric of measuring the strength of synthesized properties. We record the percentage of mutants that are killed specifically due to the assertion failures of sound properties. This disregards any mutants that are trivially killed by exceptions thrown due the invocation of the API method and focuses specifically on the ability of the synthesized property assertions to detect mutant behavior. The GPT-4-synthesized property checks were able to kill an average of 69% of mutants in the date chine Translation model on a dataset of (test case, oracle) pairs, and uses the trained model to generate oracles on new test cases. The NMT model predicts token sequences, avoiding some basic syntactic validity problems. TOGA [44] avoids the problem of validity by defining a _grammar_ of possible assertions, and having the deep learning model choose a production in this grammar. However, the problems of soundness and strength remain. Unlike regular unit tests, which encode properties of a single input, the PBTs we generate encode properties over arbitrary generator-generated inputs. The problem of effectively searching the input space, so that the PBT shows a bug, is orthogonal to our current work. Our LLM-generated generator could be paired with coverage guidance [8, 6], validity-biased guidance [45], reinforcement-learning guidance [46], or behavioral diversity guidance [47], to produce more "interesting" test inputs. Finally, the problem of _fuzz harness_ or _fuzz driver_ generation bears some similarity to our property generation problem [48, 49]. In fuzz driver generation, the goal is to take unstructured byte data provided by the fuzz tester, and use it to exercise the program under test in a meaningful manner. The generated fuzz drivers resemble the "combined" PBT shown in Figure 3, which consumes random data and uses it directly to construct inputs to exercise the API. Thus, a good fuzz driver should satisfy _generator validity_, _generator diversity_, and _property diversity_. As fuzz testing typically relies only on the crashing oracle (or on crashing oracles provided by instrumentation techniques such as ASAN), these works do not engage with the question of _property soundness_ or _property strength_ in the same manner we do. Nevertheless, the need for reasonable assertions emerges from a desire to reduce false positive bugs. The authors of UTopia [50], which extracts fuzz drivers from unit tests, note that some unit tests assertions (e.g., checking null pointers) must be preserved to maintain property validity. ## VI Conclusion and Future Work In this paper, we explored the potential of LLMs in synthesizing property-based tests. Our preliminary studies show promising results on sample Python library APIs, with synthesized tests containing diverse generators and properties derived from descriptions in the documentation. We believe that LLM-synthesized property-based tests can be used as a great starting point for developers and LLMs to iterate upon. One direction for future work is to incorporate additional property-based testing features into our prompt design. 
For example, the LLM could synthesize generators that use the hypothesis.assume statements, prioritizing generating inputs with certain features. The use of these statements has shown to improve the strength of generators, as it results in a higher percentage of inputs that execute deeper program behaviors. We also believe that the examples shown in this paper highlight how the use of LLMs can encourage and assist developers in writing property-based tests for their software. An LLM can provide the initial logic for generators and properties, which is often the largest barrier of entry in writing these tests. Additionally, our methodology for evaluating property-based tests can lend itself to a useful workflow that automatically executes the LLM-synthesized output and flags any potential failure modes. In many of our observed LLM-synthesized property tests, a developer would need simple fixes for correcting errors such as invalid generators and unsound assertions. To this end, we are creating a usable platform for Python developers to use LLMs to synthesize high quality property-based tests, and can be found at [https://proptest.ai](https://proptest.ai). While many of the techniques we have discussed can be automated, we believe they work best with a human in the loop. Our intentions are to reduce the barrier of entry for PBT and encourage more developers to incorporate it into their testing methodologies.
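For readers unfamiliar with hypothesis.assume: a call to assume discards the current generated input whenever its argument is false, steering generation toward inputs that satisfy a precondition. A minimal example of ours is sketched below.

```python
from hypothesis import given, assume, strategies as st

@given(st.integers(), st.integers())
def test_divmod_roundtrip(a, b):
    assume(b != 0)            # discard inputs on which the property is not defined
    q, r = divmod(a, b)
    assert q * b + r == a
```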
2304.12074
Regularity results and optimal velocity control of the convective nonlocal Cahn-Hilliard equation in 3D
In this contribution, we study an optimal control problem for the celebrated nonlocal Cahn-Hilliard equation endowed with the singular Flory-Huggins potential in the three-dimensional setting. The control enters the governing state system in a nonlinear fashion in the form of a prescribed solenoidal, that is a divergence-free, vector field, whereas the cost functional to be minimized is of tracking-type. The novelties of the present paper are twofold: in addition to the control application, the intrinsic difficulties of the optimization problem forced us to first establish new regularity results on the nonlocal Cahn-Hilliard equation that were unknown even without the coupling with a velocity field and are therefore of independent interest. This happens to be shown using the recently proved separation property along with ad hoc H\"older regularities and a bootstrap method. For the control problem, the existence of an optimal strategy as well as first-order necessary conditions are then established.
Andrea Poiatti, Andrea Signori
2023-04-24T13:11:00Z
http://arxiv.org/abs/2304.12074v2
Regularity results and optimal velocity control of the convective nonlocal Cahn-Hilliard equation in 3D ###### Abstract In this contribution, we study an optimal control problem for the celebrated nonlocal Cahn-Hilliard equation endowed with the singular Flory-Huggins potential in the three-dimensional setting. The control enters the governing _state system_ in a nonlinear fashion in the form of a prescribed solenoidal, that is a divergence-free, vector field, whereas the _cost functional_ to be minimized is of tracking-type. The novelties of the present paper are twofold: in addition to the control application, the intrinsic difficulties of the optimization problem forced us to first establish new regularity results on the nonlocal Cahn-Hilliard equation that were unknown even without the coupling with a velocity field and are therefore of independent interest. This happens to be shown using the recently proved separation property along with _ad hoc_ Holder regularities and a bootstrap method. For the control problem, the existence of an optimal strategy as well as first-order necessary conditions are then established. **Keywords:** Convective nonlocal Cahn-Hilliard equation, Flory-Huggins potential, separation property, regularity results, optimal velocity control. **AMS (MOS) Subject Classification:** 35K55, 35K61, 49J20, 49J50, 49K20. ## 1 Introduction Let \(\Omega\subset\mathbb{R}^{3}\) be some open, bounded domain with smooth boundary \(\Gamma:=\partial\Omega\) and the outward unit normal field \(\boldsymbol{n}\). For a prescribed final time \(T>0\), we analyze a suitable optimal control problem, which we are going to present below, for the following initial boundary value problem: \[\partial_{t}\varphi+\nabla\varphi\cdot\boldsymbol{v}-\operatorname{ div}(m(\varphi)\nabla\mu)=0 \quad\text{in }Q:=\Omega\times(0,T), \tag{1.1}\] \[\mu=-\epsilon\,\mathcal{K}*\varphi+\epsilon^{-1}F^{\prime}(\varphi) \quad\text{in }Q,\] (1.2) \[(m(\varphi)\nabla\mu)\cdot\boldsymbol{n}=0 \quad\text{on }\Sigma:=\Gamma\times(0,T),\] (1.3) \[\varphi(0)=\varphi_{0} \quad\text{in }\Omega. \tag{1.4}\] The system represents a convective version of the celebrated nonlocal Cahn-Hilliard equation, where the velocity field \(\boldsymbol{v}\) is prescribed. We briefly describe the primary variables of the system as their complete understanding will be clarified below through the presentation of the connected literature. The variable \(\varphi\) denotes an order parameter known as the _phase field_, \(\mu\) is the associated chemical potential, \(m(\varphi)\) is a mobility function, while \(\epsilon\) is a positive physical constant. Finally, \(\boldsymbol{v}\) stands for a divergence-free vector field that will play the role of control later on, \(\varphi_{0}\) is a suitable initial datum, and \(F^{\prime}\) stands for the derivative of a double-well-shaped nonlinearity. The above system represents a nonlocal version of the (local) Cahn-Hilliard equation that was originally introduced in [2] and [6] to model segregation processes in a binary mixture. Despite its original application related to material science, in the last decades the model has proven to be remarkably flexible in describing plenty of segregation-driven problems related to cell biology [4, 14] and tumor growth [11] (see also [37] and the references therein). Given a binary mixture, we indicate with \(\varphi\) the phase field variable describing the relative mass fraction difference. 
It is assumed that \(\{\varphi=1\}:=\{x\in\Omega:\varphi(x)=1\}\) and \(\{\varphi=-1\}\) indicate the regions occupied by the pure phases and that \(\varphi\) smoothly transits from \(-1\) to \(1\) in a narrow transition layer, approximating the interface, whose thickness scales as \(\epsilon>0\). If the mixture is isothermal, the system evolves minimizing the Ginzburg-Landau functional reading as \[\mathcal{G}(\varphi):=\frac{\epsilon}{2}\int_{\Omega}|\nabla\varphi|^{2}+ \frac{1}{\epsilon}\int_{\Omega}\Psi(\varphi), \tag{1.5}\] where \(\Psi(\varphi)\) is the Flory-Huggins free energy density \[\Psi(s)=\frac{\theta}{2}((1+s){\rm ln}(1+s)+(1-s){\rm ln}(1-s))-\frac{\theta _{0}}{2}s^{2}=F(s)-\frac{\theta_{0}}{2}s^{2},\quad\forall s\in[-1,1], \tag{1.6}\] with constants related to the mixture temperature such that \(0<\theta<\theta_{0}\). As customary, \(\Psi\) is also called _logarithmic potential_ and it is extended by continuity at \(\pm 1\) and as \(+\infty\) otherwise. The nonlinearity (1.6) is said to be _singular_, since it approaches \(+\infty\) as its argument tends to the pure phases \(\pm 1\). To simplify the model, one often considers a polynomial approximation of \(\Psi\) taking the _regular potential_ defined as \(\Psi_{\rm reg}(s)=\frac{1}{4}(s^{2}-1)^{2}\), \(s\in\mathbb{R}\). It is worth recalling that, in the case of polynomial type potentials, it is not possible to guarantee the existence of physical solutions, that is, solutions for which \(-1\leq\varphi\leq 1\) throughout the evolution. Therefore, to stick with the physics of the model, we will concentrate on the singular choice (1.6). With the above ingredients, the Cahn-Hilliard equation reads as follows: \[\partial_{t}\varphi-\operatorname{div}(m(\varphi)\nabla\mu)=0 \quad\text{in }Q, \tag{1.7}\] \[\mu=-\epsilon\Delta\varphi+\epsilon^{-1}\Psi^{\prime}(\varphi) \quad\text{in }Q,\] (1.8) \[\partial_{n}\varphi=(m(\varphi)\nabla\mu)\cdot\boldsymbol{n}=0 \quad\text{on }\Sigma,\] (1.9) \[\varphi(0)=\varphi_{0} \quad\text{in }\Omega, \tag{1.10}\] where the no-flux condition for the chemical potential \(\mu\) entails that no mass flux occurs at the boundary. Noticing that the free energy \(\mathcal{G}\) introduced in (1.5) only focuses on short range interactions between particles, Giacomin and Lebowitz observed in [30, 31, 32] that a physically more rigorous derivation leads to nonlocal dynamics, formulating the nonlocal Cahn-Hilliard equation. From the modeling, using general approaches of statistical mechanics, the mutual short and long-range interactions between particles are described through convolution integrals weighted by interactions kernels. In this case, the gradient term is replaced by a nonlocal spatial interaction integral and the free energy \(\mathcal{G}\) is replaced by the nonlocal Helmholtz free energy \[\mathcal{E}(\varphi):=-\frac{1}{2}\int_{\Omega\times\Omega}\mathcal{K}(x-y) \varphi(x)\varphi(y)\,\mathrm{dx}\,\mathrm{dy}+\int_{\Omega}F(\varphi), \tag{1.11}\] where \(\mathcal{K}\) is a sufficiently smooth symmetric interaction kernel. This functional is characterized by a competition between the mixing entropy \(F\), a convex function, and a nonlocal demixing term related to \(\mathcal{K}\). As shown in [31] (see also [27, 28, 39] and the references therein), the energy \(\mathcal{G}\) can be seen as an approximation of \(\mathcal{E}\), as long as we redefine \(F\) as \(\widetilde{F}(x,s)=F(s)-\frac{1}{2}(\mathcal{K}*1)(x)s^{2}\), \(x\in\Omega,s\in[-1,1]\). 
Indeed, we can rewrite \(\mathcal{E}\) as \[\mathcal{E}(\varphi) =\frac{1}{4}\int_{\Omega\times\Omega}\mathcal{K}(x-y)|\varphi(y)-\varphi(x)|^{2}\,\mathrm{dx}\,\mathrm{dy}+\int_{\Omega}F(\varphi)-\frac{1}{2}\int_{\Omega}a\varphi^{2}\] \[=\frac{1}{4}\int_{\Omega\times\Omega}\mathcal{K}(x-y)|\varphi(y)-\varphi(x)|^{2}\,\mathrm{dx}\,\mathrm{dy}+\int_{\Omega}\widetilde{F}(\varphi),\] upon setting \(a(x):=(\mathcal{K}*1)(x)\), \(x\in\Omega\). We can formally interpret \(\widetilde{F}\) as the potential \(\Psi\) occurring in (1.5), and observe that the (formal) first-order approximation of the nonlocal interaction is \(\frac{k}{2}|\nabla\varphi|^{2}\), for some \(k>0\), as long as \(\mathcal{K}\) is sufficiently peaked around zero. In that case, it is also possible to study the nonlocal-to-local asymptotics that rigorously justifies the above-sketched intuition: see [12, 13]. As we are not focusing on nonlocal-to-local asymptotics, we select the easier formulation with \(F\) convex and without the term \(a\), being aware that everything can be straightforwardly reformulated for the other case. The resulting nonlocal Cahn-Hilliard equation then reads as (see [27, 28]): \[\partial_{t}\varphi-\mathrm{div}(m(\varphi)\nabla\mu)=0 \text{in }Q, \tag{1.12}\] \[\mu=-\,\mathcal{K}*\varphi+\epsilon^{-1}F^{\prime}(\varphi) \text{in }Q,\] (1.13) \[(m(\varphi)\nabla\mu)\cdot\boldsymbol{n}=0 \text{on }\Sigma,\] (1.14) \[\varphi(0)=\varphi_{0} \text{in }\Omega. \tag{1.15}\] The existence of weak solutions, the uniqueness, and the existence of the connected global attractor were discussed in [18, 20, 21]. For the local case, without the claim of being exhaustive, we refer to [37, 47] and the references therein. In the local case, few results are known for degenerate mobilities, i.e., mobilities that vanish in the pure phases, namely just the existence of weak solutions obtained in [15]. The coupling of the nonlocal Cahn-Hilliard equation with logarithmic potential with suitably degenerate mobility has instead proven very effective. Roughly speaking, the idea is to choose the mobility function in such a way that the two degeneracies, the one of the singular potential and the one of the mobility, compensate, endowing (1.12)-(1.13) with a parabolic structure. This also opens the path to obtain the existence of strong solutions and continuous dependence results: we refer to [16, 17, 19, 23] for more details. However, in contraposition to the local case, less is known for regularity theory in the nonlocal case with constant mobility and logarithmic potential. Hereby, we aim at filling this gap by providing new regularity theory for the solutions to (1.1)-(1.4). Further results concerning well-posedness and regularity of weak solutions for the nonlocal case are studied in [27], where the validity of the strict separation property in dimension two for system (1.12)-(1.14) with constant mobility and singular potential was established. This means that if the initial state \(\varphi_{0}\) is not a pure phase, i.e., neither \(\varphi_{0}\equiv 1\) nor \(\varphi_{0}\equiv-1\), then the corresponding solution stays away from the pure states in arbitrarily small positive time, uniformly with respect to the initial datum. This property in dimension two was crucial to derive further regularity results as well as the existence of regular finite-dimensional attractors, whereas in 3D only the existence of the (possibly infinite-dimensional) global attractor was proven. 
In the same work, the convergence of a weak solution to a single equilibrium was shown as well. Then, in [28], the same authors propose an alternative argument to prove the strict separation property in dimension two, relying on De Giorgi's iteration scheme. More recently, a similar approach was successfully adopted in [39] by the first author to prove for the first time the validity of the instantaneous strict separation property in dimension three, under weaker assumptions on the singular potential \(F\) (cf. the forthcoming assumption **H3**). In particular, it was shown that it is not necessary to assume the growth condition \[F^{\prime\prime}(s)\leq Ce^{C|F^{\prime}(s)|^{\gamma}},\qquad\forall s\in(-1,1 ),\quad\gamma\in(1,2], \tag{1.16}\] for some constant \(C>0\). This assumption, fulfilled, e.g., by the logarithmic potential, was essential in [27] and [28] for the application of the Trudinger-Moser inequality. Leaning on the result of the separation property, in [39] the author derives extra instantaneous regularization of weak solutions showing that, under suitable assumptions on the interaction kernel \(\mathcal{K}\), from any positive time \(\tau>0\) onward, the solution \(\varphi\) becomes Holder continuous in space-time and it belongs to \(L^{\frac{4}{3}}(\tau,\infty;H^{2}(\Omega))\). Moreover, it is also shown that any weak solution to (1.12)-(1.14) converges to a single equilibrium. Finally, it was proved that, given a sufficiently regular initial datum \(\varphi_{0}\) which is already strictly separated, the solution \(\varphi\) strictly separates for any \(t\geq 0\). To conclude the literature review, we point out [34], where, by means of a slightly refined version of the proof proposed in [39] for the validity of the strict separation property in three dimensions, extra regularity for the associated global attractor is proven. Let us now move to the control application. To perform the associated analysis, two mathematical ingredients are essential concerning the solution \(\varphi\) to (1.1)-(1.4): * the validity of the strict separation property for any time \(t\geq 0\), crucial to deal with the nonlinearity \(F\) and its higher-order derivatives. * Extra regularity properties on \(\varphi\), important when dealing with continuous dependence estimates in stronger norms. Those will be fundamental to show some differentiability properties of the associated solution operator. The former readily follows by [39, Corollary 4.5, Remark 4.7], so that our first aim is to show that, assuming \(\varphi_{0}\) and \(\boldsymbol{v}\) smooth enough, there exists a suitably regular solution \(\varphi\) to (1.1)-(1.4). In particular, extending the 2D result of [26], we show the existence and uniqueness of a weak and a strong solution \(\varphi\), according to the regularity of \(\varphi_{0}\). Then, we establish additional regularity results for the order parameter. Namely, provided the initial data and the velocity field are regular enough, we can guarantee the bound \[\|\varphi\|_{L^{4}(0,T;H^{2}(\Omega))\cap L^{3}(0,T;W^{1,\infty}( \Omega))}\leq C,\] for some \(C>0\) depending on the data of the system. Actually, we also obtained intermediate regularity results by adopting minimal assumptions on the prescribed velocity field. 
Taking inspiration from [17], this is achieved by performing a delicate bootstrap argument coupled with a maximal regularity result of type \(L^{2}\)-\(L^{p}\) for parabolic evolution equations with coefficients continuous in time (and Holder continuous in space) shown in [40]. As anticipated, the velocity field \(\boldsymbol{v}\) occurring in (1.18) is now considered as a control variable and it is allowed to vary in a suitable set, referred to as the _control-box_, given by \[\boldsymbol{\mathcal{V}}_{\mathrm{ad}}:=\{\boldsymbol{v}\in L^{\infty}(0,T;L^{\infty}(\Omega;\mathbb{R}^{3}))\cap\boldsymbol{\mathcal{V}}:\boldsymbol{v}_{\mathrm{min}}\leq\boldsymbol{v}\leq\boldsymbol{v}_{\mathrm{max}}\},\] with \(\boldsymbol{v}_{\mathrm{min}}\) and \(\boldsymbol{v}_{\mathrm{max}}\) given bounded functions, the inequalities being understood component-wise, and where the control space reads as \[\boldsymbol{\mathcal{V}}:=\Big{\{}\boldsymbol{v}\in L^{2}(0,T;L^{2}(\Omega;\mathbb{R}^{3})):\mathrm{div}\,\boldsymbol{v}=0\ \mathrm{in}\ \Omega,\quad\boldsymbol{v}\cdot\boldsymbol{n}=0\ \mathrm{on}\ \Gamma\Big{\}}.\] The _cost functional_ we want to minimize is of quadratic type and is defined as \[\mathcal{J}(\boldsymbol{v};\varphi)=\frac{\gamma_{1}}{2}\int_{0}^{T}\int_{\Omega}|\varphi-\varphi_{Q}|^{2}+\frac{\gamma_{2}}{2}\int_{\Omega}|\varphi(T)-\varphi_{\Omega}|^{2}+\frac{\gamma_{3}}{2}\int_{0}^{T}\int_{\Omega}|\boldsymbol{v}|^{2}, \tag{1.17}\] where \(\gamma_{1},\gamma_{2},\) and \(\gamma_{3}\) are nonnegative constants, not all zero, whereas \(\varphi_{Q},\varphi_{\Omega}\) denote some prescribed target functions defined in \(Q\) and \(\Omega\), respectively. Optimal control theory for Cahn-Hilliard type systems is rather flourishing and we refer to [35, 48] for optimal control problems related to the classical Cahn-Hilliard equation, and to [22, 25] for problems connected to the nonlocal convective Cahn-Hilliard equation when coupled with the Navier-Stokes equation for the velocity field. For nonlocal Cahn-Hilliard type systems with application to biology, we mention [24, 41] and the references therein (see also [43]). In all those scenarios, the control variable linearly enters the system as a given source. Finally, we point out the closely related work [42], where an optimal velocity control problem for the nonlocal convective Cahn-Hilliard equation with degenerate mobility has been addressed. Here, we consider the case of constant mobility and logarithmic potential, for which regularity theory was unknown before. Moreover, we highlight that the control-box \(\boldsymbol{\mathcal{V}}_{\mathrm{ad}}\) we consider does not require the control function to be weakly differentiable either in space or in time, which is a much more natural assumption for controls (compare our definition of \(\boldsymbol{\mathcal{V}}_{\rm ad}\) with [42, (1.7)]). Let us also refer to [8] and [9] for optimal velocity control problems connected to Cahn-Hilliard type systems. As they will not play any role in the forthcoming mathematical analysis, we set for convenience \(\epsilon=1\) and \(m\equiv 1\). Thus, the _state system_ we are going to study reads as \[\partial_{t}\varphi+\nabla\varphi\cdot\boldsymbol{v}-\Delta\mu=0 \quad\text{in }Q, \tag{1.18}\] \[\mu=-\mathcal{K}*\varphi+F^{\prime}(\varphi) \quad\text{in }Q,\] (1.19) \[\partial_{\boldsymbol{n}}\mu=0 \quad\text{on }\Sigma,\] (1.20) \[\varphi(0)=\varphi_{0} \quad\text{in }\Omega. 
\tag{1.21}\] Hence, the optimal control problem we are going to address consists in solving the following minimization problem: \[(\mathbf{CP})\quad\min_{\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\mathrm{ad}}}\mathcal{J}(\boldsymbol{v};\varphi),\quad\text{subject to the constraint that }\varphi\text{ solves the state system (1.18)-(1.21)}.\] ## 2 Notation, assumptions and main results ### Notation and preliminaries For some particular spaces, let us fix specific shorthands \[H :=L^{2}(\Omega),\quad V:=H^{1}(\Omega),\quad W:=\{v\in H^{2}(\Omega): \ \partial_{\boldsymbol{n}}v=0\ \ \text{on}\ \Gamma\},\] 
\[\boldsymbol{H} :=\boldsymbol{L}^{2}(\Omega),\quad\boldsymbol{V}:=\boldsymbol{H}^{1}(\Omega)=H^{1}(\Omega;\mathbb{R}^{3}),\] as well as the ones for the solenoidal spaces of the velocity field \[\mathbf{L}^{p}_{\sigma} :=\{\boldsymbol{v}\in\mathbf{L}^{p}(\Omega):\text{div}(\boldsymbol{v})=0\ \text{in}\ \Omega,\quad\boldsymbol{v}\cdot\boldsymbol{n}=0\ \text{on}\ \Gamma\},\quad\text{ for}\ p\geq 2,\quad \boldsymbol{H}_{\sigma}:=\boldsymbol{L}^{2}_{\sigma},\] \[\mathbf{V}_{\sigma} :=\{\boldsymbol{v}\in H^{1}_{0}(\Omega;\mathbb{R}^{3}):\text{div}(\boldsymbol{v})=0\ \text{in}\ \Omega\}.\] We recall that \(\mathbf{L}^{p}_{\sigma}\) and \(\mathbf{V}_{\sigma}\) correspond to the completion of \(\boldsymbol{C}^{\infty}_{0,\sigma}(\Omega)=C^{\infty}_{0,\sigma}(\Omega;\mathbb{R}^{3})\), namely the space of divergence-free vector fields in \(\boldsymbol{C}^{\infty}_{0}(\Omega)=C^{\infty}_{0}(\Omega;\mathbb{R}^{3})\), in the norm of \(\mathbf{L}^{p}(\Omega)\) and \(\mathbf{V}\), respectively. The above spaces are endowed with the norms \(\|\cdot\|:=\|\cdot\|_{H}=\|\cdot\|_{\boldsymbol{H}},\|\cdot\|_{V}\), \(\|\cdot\|_{W}\), and \(\|\cdot\|_{\boldsymbol{L}^{p}},\|\cdot\|_{\boldsymbol{V}_{\sigma}}\), respectively. Moreover, we denote the duality product in \(V^{*}\) by \(\langle\cdot,\cdot\rangle\). We also indicate by \(\boldsymbol{P}_{\sigma}:\boldsymbol{H}\to\boldsymbol{H}_{\sigma}\) the standard Leray \(\mathbf{L}^{2}\)-projector onto \(\boldsymbol{H}_{\sigma}\). In conclusion, we denote by \(C^{\alpha}(\overline{\Omega})\), \(\alpha\in(0,1)\), the spaces of \(\alpha\)-Holder continuous functions in \(\overline{\Omega}\), whereas by \(C^{\beta,\gamma}(\overline{Q})\), \(\beta,\gamma\in(0,1)\), we refer to the functions which are \(\beta\)-Holder continuous in space and \(\gamma\)-Holder continuous in time, respectively. Next, for \(v\in V^{*}\) we define its generalized mean value as \[v_{\Omega}:=\frac{1}{|\Omega|}\langle v,1\rangle,\] where the symbol \(1\) denotes the constant function in \(\Omega\) equal to \(1\). This allows us to define \(V_{(0)}\) (\(H_{(0)}\), respectively) as the space of functions \(f\in V\) (\(f\in H\), respectively) such that \(f_{\Omega}=0\) (notice that, since \(f\) belongs at least to \(H\), \(f_{\Omega}\) is the usual integral mean value), whereas with \(V^{*}_{(0)}\) we denote the space of \(f\in V^{*}\) such that \(f_{\Omega}=\langle f,1\rangle=0\). Finally, we recall that \(H\) will be identified with its dual as usual. Namely, we have the continuous, dense, and compact embeddings: \[W\hookrightarrow V\hookrightarrow H\hookrightarrow V^{*}\] along with the identification \[\langle u,v\rangle=\int_{\Omega}uv\quad\forall u\in H,v\in V.\] The Laplace operator \[\mathcal{A}_{0}:V_{(0)}\to V^{*}_{(0)}\quad\text{defined by}\quad\left\langle\mathcal{A}_{0}u,v\right\rangle_{V^{*}_{(0)},V_{(0)}}=\int_{\Omega}\nabla u\cdot\nabla v,\quad v\in V_{(0)},\] is a bijective map between \(V_{(0)}\) and \(V^{*}_{(0)}\). We denote its inverse by \(\mathcal{N}:=\mathcal{A}_{0}^{-1}:V^{*}_{(0)}\to V_{(0)}\). As a consequence, for any \(v^{*}\in V^{*}_{(0)}\), we set \(\|v^{*}\|_{*}:=\|\nabla\mathcal{N}v^{*}\|\), which yields a norm in \(V^{*}_{(0)}\), that is equivalent to the canonical dual norm. In turn, \(v\mapsto\left(\|v-v_{\Omega}\|_{*}^{2}+|v_{\Omega}|^{2}\right)^{\frac{1}{2}}\) defines a norm in \(V^{*}\), that is equivalent to the standard dual norm in \(V^{*}\). 
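For the reader's convenience, we also sketch the standard argument behind the equivalence just mentioned; it is included only to fix the notation and plays no further role in the sequel. For \(v^{*}\in V^{*}_{(0)}\), set \(u:=\mathcal{N}v^{*}\in V_{(0)}\). On the one hand, for every \(w\in V\), since \(\langle v^{*},1\rangle=0\), \[\langle v^{*},w\rangle=\langle v^{*},w-w_{\Omega}\rangle=\int_{\Omega}\nabla u\cdot\nabla w\leq\|\nabla u\|\,\|w\|_{V}=\|v^{*}\|_{*}\,\|w\|_{V},\] so that \(\|v^{*}\|_{V^{*}}\leq\|v^{*}\|_{*}\). On the other hand, choosing \(w=u\) and using the Poincaré-Wirtinger inequality (recall that \(u_{\Omega}=0\)), \[\|v^{*}\|_{*}^{2}=\|\nabla u\|^{2}=\langle v^{*},u\rangle\leq\|v^{*}\|_{V^{*}}\|u\|_{V}\leq C\|v^{*}\|_{V^{*}}\|\nabla u\|,\] whence \(\|v^{*}\|_{*}\leq C\|v^{*}\|_{V^{*}}\), and the claimed equivalence on \(V^{*}_{(0)}\) follows. 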
Moreover, by the regularity theory for the Laplace operator with homogeneous Neumann boundary conditions, there exists a constant \(C>0\) such that \[\|\nabla\mathcal{N}v\|_{V}\leq C\|v\|,\quad\forall\,v\in H_{(0)}. \tag{2.1}\] In conclusion, we introduce the Besov spaces (see, e.g., [1] and [46] for more details) as the following (real) interpolation spaces \[\mathcal{B}^{s}_{p,q}(\Omega):=\left(L^{p}(\Omega),W^{m,p}(\Omega)\right)_{\lambda,q},\] where \(s=\lambda m\), \(m\in\mathbb{N}\), \(\lambda\in(0,1)\), \(p,q\in[1,\infty)\). In particular, we recall that \[\mathcal{B}^{2-\frac{2}{q}}_{p,q}(\Omega)=\left(L^{p}(\Omega),W^{2,p}(\Omega)\right)_{1-\frac{1}{q},q},\] for any \(p,q>1\). ### Assumptions and main results Here, we collect our main results and the related hypotheses. The following structural assumptions will be in order: * **H1** The spatial kernel is such that \(\mathcal{K}\in W^{1,1}_{\rm loc}(\mathbb{R}^{3})\), with \(\mathcal{K}(x)=\mathcal{K}(-x)\), \(x\in\Omega\). * **H2** The double-well potential fulfills \(F\in C^{0}([-1,1])\cap C^{3}(-1,1)\) and \[\lim_{s\to-1^{+}}F^{\prime}(s)=-\infty,\quad\lim_{s\to 1^{-}}F^{\prime}(s)=+\infty,\quad F^{\prime\prime}(s)\geq\alpha,\quad\forall\ s\in(-1,1),\] for some \(\alpha>0\). As usual, we extend \(F(s)=+\infty\) for any \(s\notin[-1,1]\). Without loss of generality, we require \(F(0)=0\) and \(F^{\prime}(0)=0\). In particular, those entail that \(F(s)\geq 0\) for any \(s\in[-1,1]\). * **H3** As \(\delta\to 0^{+}\), we assume \[\frac{1}{F^{\prime}(1-2\delta)}=\mathcal{O}\left(\frac{1}{|\ln(\delta)\,|}\right),\quad\frac{1}{F^{\prime\prime}(1-2\delta)}=\mathcal{O}(\delta),\] (2.2) and analogously that \[\frac{1}{|F^{\prime}(-1+2\delta)|}=\mathcal{O}\left(\frac{1}{|\ln(\delta)\,|}\right),\qquad\frac{1}{F^{\prime\prime}(-1+2\delta)}=\mathcal{O}\left(\delta\right).\] (2.3) * **H4** Either \(\mathcal{K}\in W^{2,1}(B_{\rho})\), where \(B_{\rho}:=\{x\in\mathbb{R}^{3}:|x|<\rho\}\), with \(\rho\sim\mathrm{diam}(\Omega)\) such that \(\overline{\Omega}\subset B_{\rho}\), or \(\mathcal{K}\) is admissible in the sense of [3, Def.1]. **Remark 2.1**.: _As remarked in [3], we observe that Newtonian and Bessel potentials do satisfy assumption_ **H4**_._ **Theorem 2.2**.: _Let the assumptions_ **H1**_-_**H2** _be fulfilled. Assume that \(\textbf{v}\in L^{4}(0,T;\textbf{L}^{6}_{\sigma})\), \(\varphi_{0}\in H\) with \(F(\varphi_{0})\in L^{1}(\Omega)\) and \(|(\varphi_{0})_{\Omega}|<1\). Then, there exists a unique weak solution \((\varphi,\mu)\) to (1.18)-(1.21) in the sense that_ \[\varphi\in H^{1}(0,T;V^{*})\cap C^{0}([0,T];H)\cap L^{2}(0,T;V), \tag{2.4}\] \[\varphi\in L^{\infty}(Q):\quad|\varphi|<1\text{ a.e. in }Q,\] (2.5) \[\mu\in L^{2}(0,T;V),\quad F^{\prime}(\varphi)\in L^{2}(0,T;V), \tag{2.6}\] _and it satisfies_ \[\langle\partial_{t}\varphi,v\rangle-\int_{\Omega}\varphi\mathbf{v}\cdot\nabla v+\int_{\Omega}\nabla\mu\cdot\nabla v=0 \quad\text{for every $v\in V$, and a.e. in $(0,T)$}, \tag{2.7}\] \[\mu=-\mathcal{K}\ast\varphi+F^{\prime}(\varphi) \text{a.e. in $Q$}, \tag{2.8}\] _and \(\varphi(0)=\varphi_{0}\) almost everywhere in \(\Omega\). The weak solution fulfills the energy identity_ \[\mathcal{E}(\varphi(t))+\int_{Q_{t}}\varphi\mathbf{v}\cdot\nabla\mu+\int_{Q_{t}}|\nabla\mu|^{2}=\mathcal{E}(\varphi_{0}),\quad\forall\,t\in[0,T]. 
\tag{2.9}\] _In addition, given two weak solutions \(\varphi_{1}\) and \(\varphi_{2}\) corresponding to the initial data \(\varphi_{0}^{1}\) and \(\varphi_{0}^{2}\) assumed to fulfill the same conditions as above, and a prescribed velocity field \(\mathbf{v}\in L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\), it holds that_ \[\|\varphi_{1}-\varphi_{2}\|_{C^{0}([0,T];V^{*})\cap L^{2}(0,T;H)}\] \[\quad\leq\left(\left\|\varphi_{0}^{1}-\varphi_{0}^{2}\right\|_{V ^{*}}+\left|(\varphi_{0}^{1})_{\Omega}-(\varphi_{0}^{2})_{\Omega}\right|^{ \frac{1}{2}}\|\Lambda\|_{L^{1}(0,T)}^{\frac{1}{2}}+C\left|(\varphi_{0}^{1})_{ \Omega}-(\varphi_{0}^{2})_{\Omega}\right|\right)\times\] \[\qquad\times\exp\left(C\left(1+\left\|\mathbf{v}\right\|_{L^{4}(0,T; \mathbf{L}_{\sigma}^{6})}^{4}\right)\right), \tag{2.10}\] _where \(\Lambda=2(\|F^{\prime}(\varphi_{1})\|_{1}+\|F^{\prime}\left(\varphi_{2}\right) \|_{1})\) and \(C\) only depends on \(\alpha\), \(\mathcal{K}\), \(T\), and \(\Omega\)._ _Furthermore, the following regularity results hold:_ * _If also_ \(\varphi_{0}\in V\) _and it is such that_ \(F^{\prime}(\varphi_{0})\in H\) _and_ \(F^{\prime\prime}(\varphi_{0})\nabla\varphi_{0}\in\mathbf{H}\)_, then, additionally,_ \[\varphi\in L^{\infty}(0,T;V)\cap L^{q}(0,T;W^{1,p}(\Omega)),\quad q=\frac{4p} {3(p-2)},\quad\forall\,p\in(2,\infty),\] (2.11) \[\partial_{t}\varphi\in L^{4}(0,T;V^{*})\cap L^{2}(0,T;H),\] (2.12) \[\mu\in L^{\infty}(0,T;V)\cap L^{2}(0,T;W),\] (2.13) \[F^{\prime}(\varphi)\in L^{\infty}(0,T;V).\] (2.14) _and_ \(\partial_{\mathbf{n}}\mu=0\) _almost everywhere on_ \(\Sigma\)_. Moreover, if_ \(\mathbf{v}\in L^{\infty}(0,T;\mathbf{H}_{\sigma})\)_, we also have_ \(\partial_{t}\varphi\in L^{\infty}(0,T;V^{*})\)_._ * _Let the assumptions of_ \((i)\) _hold, together with assumptions_ \(\mathbf{H3}\) _and_ \(\mathbf{H4}\)_. Suppose also that_ \(\varphi_{0}\in L^{\infty}(\Omega)\) _with_ \(\|\varphi_{0}\|_{\infty}\leq 1-\delta_{0},\) _for some_ \(\delta_{0}\in(0,1)\)_. Then, there exists_ \(\delta\in(0,\delta_{0}]\) _such that_ \[\sup_{t\in[0,T]}\|\varphi(t)\|_{\infty}\leq 1-\delta.\] (2.15) _As a consequence, we also have that_ \(\mu\in H^{1}(0,T;H)\cap C^{0}([0,T];V)\)_._ * _Under the same assumptions of_ \((ii)\)_, assume additionally that there exists_ \(\beta_{0}\in(0,1]\) _such that_ \(\varphi_{0}\in C^{\beta_{0}}(\overline{\Omega})\)_. Then there exists_ \(\beta\in(0,\beta_{0}]\) _such that_ \[\varphi\in C^{\beta,\frac{\beta}{2}}(\overline{Q}),\] (2.16) \[\mu\in C^{\beta,\frac{\beta}{2}}(\overline{Q}),\] (2.17) _where_ \(\beta\) _also depends on the_ \(L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\)_-norm of_ \(\mathbf{v}\) _._ * _Let the assumptions in point_ \((iii)\) _and_ **H4** _be fulfilled. Then there exists_ \(C>0\)_, depending also on the constant_ \(\delta\) _appearing in (_2.15_) and T, such that_ \[\|\varphi\|_{L^{2}(0,T;H^{2}(\Omega))}\leq C.\] (2.18) _Moreover, let us set_ \[\mu_{0}:=-\mathcal{K}*\varphi_{0}+F^{\prime}(\varphi_{0}).\] (2.19) _Then, if_ \(\mu_{0}\in\mathcal{B}^{1}_{3,2}(\Omega)\)_, there exists_ \(C>0\) _depending on structural data of the system, such that_ \[\|\mu\|_{H^{1}(0,T;L^{3}(\Omega))\cap L^{2}(0,T;W^{2,3}(\Omega))}+\| \varphi\|_{H^{1}(0,T;L^{3}(\Omega))\cap L^{4}(0,T;W^{1,6}(\Omega))}\leq C.\] (2.20) * _Let the assumptions in point_ \((iv)\) _be fulfilled,_ \(\mathbf{v}\in L^{4}(0,T;\mathbf{L}^{\infty})\)_, and_ \(\mu_{0}\in\mathcal{B}^{1}_{6,2}(\Omega)\)_. 
Then, there exists_ \(C>0\)_, depending on structural data of the system, such that_ \[\|\mu\|_{H^{1}(0,T;L^{6}(\Omega))\cap L^{3}(0,T;W^{1,\infty}(\Omega))\cap L^{2}(0,T;W^{2,6}(\Omega))}\] \[\quad+\|\varphi\|_{H^{1}(0,T;L^{6}(\Omega))\cap L^{3}(0,T;W^{1,\infty}(\Omega))}\leq C.\] (2.21) _Moreover, let us instead assume_ \(\mathbf{v}\in L^{\infty}(0,T;\mathbf{L}^{\infty}_{\sigma})\) _and_ \(\mu_{0}\in\mathcal{B}^{\frac{3}{2}}_{2,4}(\Omega)\)_, together with the assumptions in point_ \((iv)\)_. Then, there exists_ \(C>0\)_, depending on structural data of the system, such that_ \[\|\mu\|_{W^{1,4}(0,T;H)\cap L^{8}(0,T;W^{1,4}(\Omega))\cap L^{4}(0,T;H^{2}(\Omega))}\] \[\quad+\|\varphi\|_{L^{8}(0,T;W^{1,4}(\Omega))\cap L^{4}(0,T;H^{2}(\Omega))}\leq C.\] (2.22) **Remark 2.3**.: _We remark that the assumption \(F^{\prime}(\varphi_{0})\in L^{1}(\Omega)\) already implies that \(\varphi_{0}\in L^{\infty}(\Omega)\) with \(\|\varphi_{0}\|_{\infty}\leq 1\), due to assumption_ **H2**_. Furthermore, observe that if \(\varphi_{0}\in V\) and it is strictly separated, i.e., \(\|\varphi_{0}\|_{\infty}\leq 1-\delta_{0},\) for some \(\delta_{0}\in(0,1)\), this directly implies \(F^{\prime}(\varphi_{0})\in H\) and \(F^{\prime\prime}(\varphi_{0})\nabla\varphi_{0}\in\mathbf{H}\)._ **Remark 2.4**.: _We notice that [26, Remark 4.4] still holds also in the three-dimensional setting, i.e., any weak solution \(\varphi\) satisfying (2.4)-(2.5) is instantaneously strictly separated from pure phases. Moreover, under the assumptions of the above theorem, part (iii), thanks to the Holder regularity in (2.16), the inequality (2.15) reduces to_ \[\|\varphi\|_{C^{0}(\overline{Q})}=\max_{(x,t)\in\overline{Q}}|\varphi(x,t)|\leq 1-\delta.\] **Remark 2.5**.: _The technical assumption on the initial condition of point \((iv)\) can be avoided by requiring, for instance, that_ \[\mu_{0}\in W^{2,3}(\Omega)\hookrightarrow\mathcal{B}^{1}_{3,2}(\Omega),\] _which is nevertheless more restrictive. Indeed, from [1] we have the embedding_ \[\mathcal{B}^{1}_{q,2}(\Omega)\hookrightarrow W^{1,q}(\Omega)\hookrightarrow\mathcal{B}^{1}_{q,q}(\Omega)\quad\forall q\geq 2,\] _and thus the space \(\mathcal{B}^{1}_{3,2}(\Omega)\) is actually not so far from \(W^{1,3}(\Omega)\): Besov spaces provide, in a sense, a finer scale than classical Sobolev spaces._ **Remark 2.6**.: _As it will be clear from the proof, points \((i)\) and \((ii)\) of the above theorem can be shown by arguing along the same lines as in [26], which analyzes the same system in the two-dimensional setting. The extension to the 3D case we perform has been made possible by the recent result on the validity of the strict separation property proven in [39]._ **Remark 2.7**.: _Observe that, in the case of point \((iv)\), thanks to the regularity in (2.4)-(2.5), it holds \(\varphi\in C^{0}([0,T];V)\), the function \(t\mapsto\|\nabla\varphi(t)\|^{2}\) is \(AC([0,T])\), and that_ \[\frac{1}{2}\frac{d}{dt}\|\nabla\varphi(t)\|^{2}=-\int_{\Omega}\partial_{t}\varphi(t)\Delta\varphi(t)\quad\text{ for almost every $t\in(0,T)$.}\] **Remark 2.8**.: _Note that, up to point (iv), it suffices that \(\mathbf{v}\in L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\) as long as we assume suitably regular initial data. This is the minimal summability to get well-posedness of strong solutions and it is enough to deduce (2.20). 
Furthermore, using the property \(\varphi\in L^{4}(0,T;W^{1,6}(\Omega))\) specified in (2.20), one may establish further regularity results by formally differentiating (2.7) in time and testing it by \(\partial_{t}\varphi\) as in the 2D analogue (see, e.g., [23, Thm.2] and [27, Lemma 5.7]). That will readily prove that \(\partial_{t}\varphi\in L^{\infty}(0,T;H)\) and \(\mu\in L^{\infty}(0,T;H^{2}(\Omega))\). Nevertheless, some extra regularity on the time derivative \(\partial_{t}\mathbf{v}\) and on the initial data is required. Since the velocity field \(\mathbf{v}\) is our control variable, we do not want to assume \(\partial_{t}\mathbf{v}\in L^{2}(Q)\). On the other hand, if one includes that condition in \(\mathbf{\mathcal{V}}_{\rm ad}\) as done in [42], the above regularities can be easily shown._ **Theorem 2.9**.: _Suppose that_ **H1**_-_**H4** _hold._ * _Let_ \(\mathbf{v}_{1}\) _and_ \(\mathbf{v}_{2}\) _be two given velocity fields such that_ \[\mathbf{v}_{i}\in L^{4}(0,T;\mathbf{L}_{\sigma}^{6}),\quad i=1,2.\] _Denote by_ \((\varphi_{i},\mu_{i})\)_,_ \(i=1,2\)_, the two corresponding solutions to (_1.18_)-(_1.21_) related to initial data_ \(\varphi_{0}^{i}\) _which fulfill the assumptions of_ \((i)\) _in Theorem_ 2.2 _and_ \[|(\varphi_{0}^{i})_{\Omega}|<1,\quad\|\varphi_{0}^{i}\|_{\infty}\leq 1-\delta_{0}^{i}\quad\text{with $\delta_{0}^{i}\in(0,1)$,}\quad i=1,2,\] (2.23) _there exists_ \(\beta_{0,2}\in(0,1]\) _such that_ \(\varphi_{0}^{2}\in C^{\beta_{0,2}}(\overline{\Omega})\)_, and_ \(\mu_{0}^{2}:=-\mathcal{K}\ast\varphi_{0}^{2}+F^{\prime}(\varphi_{0}^{2})\in\mathcal{B}_{3,2}^{1}(\Omega)\)_. The two solutions_ \(\varphi_{1}\) _and_ \(\varphi_{2}\) _are then intended in the sense of points_ \((ii)\) _and_ \((iv)\) _of Theorem_ 2.2_, respectively. Then, there exists a positive constant_ \(C\) _such that_ \[\|\varphi_{1}-\varphi_{2}\|_{L^{\infty}(0,T;H)\cap L^{2}(0,T;V)}\leq C(\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{L^{2}(0,T;\mathbf{H}_{\sigma})}+\|\varphi_{0}^{1}-\varphi_{0}^{2}\|),\] (2.24) _where_ \(C\) _depends only on the structure of the system._ * _Moreover, suppose that, additionally,_ \[\mathbf{v}_{2}\in L^{\infty}(0,T;\mathbf{L}_{\sigma}^{4})\cap L^{4}(0,T;\mathbf{L}^{\infty}),\quad\mu_{0}^{1}\in\mathcal{B}_{3,2}^{1}(\Omega),\quad\mu_{0}^{2}\in\mathcal{B}_{2,4}^{\frac{3}{2}}(\Omega)\cap\mathcal{B}_{6,2}^{1}(\Omega),\] _with_ \(\mu_{0}^{i}:=-\mathcal{K}\ast\varphi_{0}^{i}+F^{\prime}(\varphi_{0}^{i})\)_,_ \(i=1,2\)_, and there exists_ \(\beta_{0,1}\in(0,1]\) _such that_ \(\varphi_{0}^{1}\in C^{\beta_{0,1}}(\overline{\Omega})\)_. The two solutions_ \(\varphi_{1}\) _and_ \(\varphi_{2}\) _are then intended in the sense of points_ \((iv)\) _and_ \((v)\) _of Theorem_ 2.2_, respectively, and they fulfill_ \[\|\varphi_{1}-\varphi_{2}\|_{L^{\infty}(0,T;V)\cap L^{2}(0,T;W)}\leq C(\|\mathbf{v}_{1}-\mathbf{v}_{2}\|_{L^{6}(0,T;\mathbf{H}_{\sigma})}+\|\varphi_{0}^{1}-\varphi_{0}^{2}\|_{V})\] (2.25) _for a positive constant_ \(C\) _which depends only on the structure of the system._ Once the above analytical properties on the solutions of (1.18)-(1.21) have been derived, we can address the optimal control problem **(CP)**. For such a problem, we postulate the following assumptions: * **C1** The spatial kernel \(\mathcal{K}\) fulfills **H1** and **H4**. 
* **C2** The initial data fulfill (recall (2.19) and Remark 2.3) \[\varphi_{0}\in V\cap L^{\infty}(\Omega),\quad|(\varphi_{0})_{\Omega}|<1,\quad\|\varphi_{0}\|_{\infty}\leq 1-\delta_{0}\quad\text{with }\delta_{0}\in(0,1),\] there exists \(\beta_{0}\in(0,1]\) such that \(\varphi_{0}\in C^{\beta_{0}}(\overline{\Omega})\), and \(\mu_{0}\in\mathcal{B}^{\frac{3}{2}}_{2,4}(\Omega)\cap\mathcal{B}^{1}_{6,2}(\Omega)\). * **C3** The constants \(\gamma_{1},\gamma_{2},\) and \(\gamma_{3}\) in (1.17) are nonnegative, but not all zero. * **C4** The target functions \(\varphi_{Q}\) and \(\varphi_{\Omega}\) are such that \(\varphi_{Q}\in L^{2}(Q)\), and \(\varphi_{\Omega}\in V\). * **C5** The prescribed functions \(\mathbf{v}_{\min}\) and \(\mathbf{v}_{\max}\) are such that \(\mathbf{v}_{\min},\mathbf{v}_{\max}\in\mathbf{L}^{\infty}(\Omega)\) and \(\mathbf{v}_{\min}\leq\mathbf{v}_{\max}\) componentwise. * **C6** In addition to **H2** and **H3**, the double-well potential is such that \(F\in C^{4}(-1,1)\). We now present the two main results on **(CP)**. First, we state the existence of an optimal strategy, and then the first-order optimality conditions for minimizers. **Theorem 2.10**.: _Assume that_ **C1**_-_**C5** _are in force. Then, the optimization problem_ **(CP)** _admits at least one solution._ In the formulation of the corresponding optimality conditions, we refer to the adjoint variables \(p\) and \(q\). Those are the unique solutions to a system, referred to as the _adjoint system_ related to (1.18)-(1.21) (cf. (4.21)-(4.24)). To keep the presentation as essential as possible, we postpone their proper introduction to Section 4. **Theorem 2.11**.: _Assume that_ **C1**_-_**C6** _are in force. Let \(\overline{\mathbf{v}}\) be an optimal control with corresponding state \((\overline{\varphi},\overline{\mu})\) and adjoint variables \((p,q)\). Then, it necessarily fulfills the variational inequality_ \[\int_{Q}\left(-\mathbf{P}_{\sigma}(p\nabla\overline{\varphi})+\gamma_{3}\overline{\mathbf{v}}\right)\cdot(\mathbf{v}-\overline{\mathbf{v}})\geq 0\quad\forall\mathbf{v}\in\mathcal{V}_{\rm ad}. \tag{2.26}\] _Moreover, whenever \(\gamma_{3}\neq 0\), the optimal control \(\overline{\mathbf{v}}\) reduces to the \(L^{2}\)-orthogonal projection of \(\gamma_{3}^{-1}\mathbf{P}_{\sigma}(p\nabla\overline{\varphi})\) onto the convex set \(\mathcal{V}_{\rm ad}\)._ **Remark 2.12**.: _Observe that, when \(\gamma_{3}\neq 0\), (2.26) shows that, in order to identify the optimal \(\overline{\mathbf{v}}\), it is enough to have access, for almost any \(t\in(0,T)\), to the divergence-free projection of \(p\nabla\overline{\varphi}(t)\), i.e., its \(\mathbf{L}^{2}\)-projection onto \(\mathbf{H}_{\sigma}\), whereas its orthogonal complement can be neglected. We also remark that the standard pointwise characterization of the projection as a suitable bang-bang control involving the bounds \(\mathbf{v}_{\min}\) and \(\mathbf{v}_{\max}\) does not work here, since the result cannot be guaranteed to be divergence-free and thus it may not belong to \(\mathcal{V}_{\rm ad}\). A short verification of the projection formula is sketched at the end of this section._ In the forthcoming estimates, without further explicit mention, the capital letter \(C\) will denote a generic positive constant that depends only on the structural data of the problem. For this reason, its meaning may change from line to line and even within the same chain of computations. When it depends on an additional constant \(\varepsilon\) whose value has to be chosen just at the end of some computations, we use \(C_{\varepsilon}\) to stress that dependence. 
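Before entering the analysis, and only for the reader's convenience, we record the elementary argument behind the projection formula stated in Theorem 2.11 and recalled in Remark 2.12; this is a completely standard fact on variational inequalities, and the shorthands \(\boldsymbol{z}\) and \(\mathbb{P}_{\boldsymbol{\mathcal{V}}_{\mathrm{ad}}}\) below are used only in this paragraph. Assume \(\gamma_{3}>0\) and set \(\boldsymbol{z}:=\gamma_{3}^{-1}\boldsymbol{P}_{\sigma}(p\nabla\overline{\varphi})\). Dividing (2.26) by \(\gamma_{3}\) yields \[\int_{Q}(\overline{\boldsymbol{v}}-\boldsymbol{z})\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\geq 0\quad\forall\,\boldsymbol{v}\in\boldsymbol{\mathcal{V}}_{\mathrm{ad}},\] which is exactly the variational characterization of the \(L^{2}(Q;\mathbb{R}^{3})\)-orthogonal projection onto the closed and convex set \(\boldsymbol{\mathcal{V}}_{\mathrm{ad}}\) (nonempty, since \(\overline{\boldsymbol{v}}\in\boldsymbol{\mathcal{V}}_{\mathrm{ad}}\)). Hence \(\overline{\boldsymbol{v}}=\mathbb{P}_{\boldsymbol{\mathcal{V}}_{\mathrm{ad}}}(\boldsymbol{z})\), which is precisely the last assertion of Theorem 2.11. 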
## 3 Mathematical Analysis of the State System ### Proof of Theorem 2.2 The proof of the theorem can be mutated in part from [26, Thm. 4.1] by adapting some crucial estimates to the three-dimensional case. #### 3.1.1 Uniqueness and continuous dependence estimate Let us consider two weak solutions \(\varphi_{1}\) and \(\varphi_{2}\) satisfying (2.4)-(2.5) and originating from two initial data \(\varphi_{0}^{1}\) and \(\varphi_{0}^{2}\), where possibly \((\varphi_{0}^{1})_{\Omega}\neq(\varphi_{0}^{2})_{\Omega}\). Setting \[\varphi=\varphi_{1}-\varphi_{2},\quad\mu=-\mathcal{K}\ast\varphi+F^{\prime}( \varphi_{1})-F^{\prime}(\varphi_{2}),\] we have \[\left\langle\partial_{t}\varphi,v\right\rangle-\int_{\Omega}\varphi\mathbf{v} \cdot\nabla v+\int_{\Omega}\nabla\mu\cdot\nabla v=0,\quad\forall\,v\in V,\text{ a.e. in }(0,T). \tag{3.1}\] Taking \(v=\mathcal{N}(\varphi-\varphi_{\Omega})\), we find \[\frac{1}{2}\frac{d}{dt}\left\|\varphi-\varphi_{\Omega}\right\|_{\ast}^{2}+ \int_{\Omega}\varphi\mathbf{v}\cdot\nabla\mathcal{N}\left(\varphi-\varphi_{\Omega }\right)+\int_{\Omega}\mu(\varphi-\varphi_{\Omega})=0.\] By Young's inequality, arguing as in [26, (4.13)], we have \[\int_{\Omega} \mu(\varphi-\varphi_{\Omega})\geq-\int_{\Omega}(\mathcal{K}\ast \varphi)(\varphi-\varphi_{\Omega})+\alpha\|\varphi\|^{2}-\int_{\Omega}\left(F ^{\prime}\left(\varphi_{1}\right)-F^{\prime}\left(\varphi_{2}\right)\right) \varphi_{\Omega}\] \[=-\int_{\Omega}\nabla(\mathcal{K}\ast\varphi)\cdot\nabla \mathcal{N}\left(\varphi-\varphi_{\Omega}\right)+\alpha\|\varphi\|^{2}-\int_{ \Omega}\left(F^{\prime}\left(\varphi_{1}\right)-F^{\prime}\left(\varphi_{2} \right)\right)\varphi_{\Omega}\] \[\geq-\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\varphi\|\left\|\varphi- \varphi_{\Omega}\right\|_{\ast}+\alpha\|\varphi\|^{2}-|(\varphi_{1})_{\Omega} -(\varphi_{2})_{\Omega}|\left(\|F^{\prime}(\varphi_{1})\|_{1}+\|F^{\prime}( \varphi_{2})\|_{1}\right)\] \[\geq\frac{3\alpha}{4}\|\varphi\|^{2}-C\|\varphi-\varphi_{\Omega} \|_{\ast}^{2}-|(\varphi_{1})_{\Omega}-(\varphi_{2})_{\Omega}|\left(\|F^{\prime }(\varphi_{1})\|_{1}+\|F^{\prime}(\varphi_{2})\|_{1}\right), \tag{3.2}\] where \(B_{M}\) is a sufficiently large ball containing \(\overline{\Omega}\). Concerning the convective term, by Sobolev-Gagliardo-Nirenberg's inequality, we obtain \[\left|\int_{\Omega}\varphi\mathbf{v}\cdot\nabla\mathcal{N}\left( \varphi-\varphi_{\Omega}\right)\right| \leq\|\mathbf{v}\|_{6}\|\varphi\|\left\|\nabla\mathcal{N}\left( \varphi-\varphi_{\Omega}\right)\right\|_{3}\] \[\leq C\|\mathbf{v}\|_{6}\|\varphi\|\left\|\nabla\mathcal{N}\left( \varphi-\varphi_{\Omega}\right)\right\|^{\frac{1}{2}}\|\varphi-\varphi_{ \Omega}\|^{\frac{1}{2}}\] \[\leq\frac{\alpha}{8}\|\varphi\|^{2}+C\|\mathbf{v}\|_{6}^{2}\left\| \varphi-\varphi_{\Omega}\right\|_{\ast}\left(\|\varphi\|+C\left|\varphi_{\Omega }\right|\right)\] \[\leq\frac{\alpha}{4}\|\varphi\|^{2}+C\|\mathbf{v}\|_{6}^{4}\left\| \varphi-\varphi_{\Omega}\right\|_{\ast}^{2}+C\left|\varphi_{\Omega}\right|^{2}. \tag{3.3}\] Then, recalling the conservation of mass, i.e., \(\varphi_{\Omega}^{i}(t)=(\varphi_{0}^{i})_{\Omega}\) for all \(t\in[0,T]\) and \(i=1,2\), we are led to \[\frac{d}{dt}\|\varphi\|_{V^{*}}^{2}+\alpha\|\varphi\|^{2}\leq C\left(1+\| \boldsymbol{v}\|_{6}^{4}\right)\|\varphi\|_{V^{*}}^{2}+\Lambda\left|\varphi_{ \Omega}(0)\right|+C\left|\varphi_{\Omega}(0)\right|^{2},\] where \(\Lambda=2(\|F^{\prime}\left(\varphi_{1}\right)\|_{1}+\|F^{\prime}\left( \varphi_{2}\right)\|_{1})\). 
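In view of the forthcoming application of Gronwall's lemma, it may be convenient to make the structure of the above differential inequality explicit; this is pure bookkeeping, and the shorthands \(X\), \(a\), \(b\) are used only in this paragraph. Setting \[X(t):=\|\varphi(t)\|_{V^{*}}^{2},\qquad a(t):=C\left(1+\|\boldsymbol{v}(t)\|_{6}^{4}\right),\qquad b(t):=\Lambda(t)\left|\varphi_{\Omega}(0)\right|+C\left|\varphi_{\Omega}(0)\right|^{2},\] the above inequality reads \(X^{\prime}+\alpha\|\varphi\|^{2}\leq a\,X+b\) almost everywhere in \((0,T)\), where \(a\in L^{1}(0,T)\) because \(\boldsymbol{v}\in L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\), and \(b\in L^{1}(0,T)\) because \(F^{\prime}(\varphi_{i})\in L^{2}(0,T;V)\hookrightarrow L^{1}(0,T;L^{1}(\Omega))\) by (2.6). 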
Therefore, an application of Gronwall's Lemma implies (2.10), which, in particular, entails the uniqueness of the weak solutions \(\varphi\). Concerning the uniqueness for the corresponding chemical potential \(\mu\), it readily follows upon noticing that \(\varphi\) is uniquely determined in \(L^{2}(Q)\), hence almost everywhere in \(Q\), along with the fact that \(F^{\prime}\) is single-valued. #### 3.1.2 Existence of weak solutions The existence of weak solutions can be proven exactly as in [26, Thm.4.1], since none of the estimates depends on the dimension of the domain, so that they remain valid in the three-dimensional case. #### 3.1.3 Existence of strong solutions: parts (i)-(ii) To derive the existence of strong solutions, we adapt the proof of [26, Thm.4.1]. Let us consider a sequence of velocity fields \(\{\boldsymbol{v}^{k}\}\subset C_{0}^{\infty}((0,T);\boldsymbol{C}_{0,\sigma}^{\infty}(\Omega))\) such that \(\boldsymbol{v}^{k}\to\boldsymbol{v}\) strongly in \(L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\). For any \(k\in\mathbb{N}\), we introduce the Lipschitz continuous truncation \(h_{k}\) given by \[h_{k}:\mathbb{R}\to\mathbb{R},\quad h_{k}(s)=\begin{cases}-1+\frac{1}{k},&s<-1+\frac{1}{k},\\ s,&-1+\frac{1}{k}\leq s\leq 1-\frac{1}{k},\\ 1-\frac{1}{k},&s>1-\frac{1}{k},\end{cases}\] and set \(\varphi_{0}^{k}:=h_{k}(\varphi_{0})\). It readily follows that \[\varphi_{0}^{k}\in V\cap L^{\infty}(\Omega)\text{ and that }\nabla\varphi_{0}^{k}=\nabla\varphi_{0}\,\chi_{[-1+\frac{1}{k},1-\frac{1}{k}]}(\varphi_{0})\text{ almost everywhere in }\Omega,\] where \(\chi_{A}(\cdot)\) denotes the indicator function of a measurable set \(A\). By definition, we have \[\left|\varphi_{0}^{k}\right|\leq\left|\varphi_{0}\right|,\quad\left|\nabla\varphi_{0}^{k}\right|\leq\left|\nabla\varphi_{0}\right|\quad\text{a.e. in }\Omega, \tag{3.4}\] whence \(\varphi_{0}^{k}\to\varphi_{0}\) strongly in \(V\) as well as \(\left|(\varphi_{0}^{k})_{\Omega}\right|\to\left|(\varphi_{0})_{\Omega}\right|\) as \(k\to\infty\). Then, there exist constants \(\varpi>0\) and \(\overline{k}>0\) such that \[\left|(\varphi_{0}^{k})_{\Omega}\right|\leq 1-\varpi,\quad\forall\,k>\overline{k}. \tag{3.5}\] We now notice that [26, Thm. A.1] can be slightly modified to be valid in the three-dimensional case. In particular, it can be easily seen that there exists a sequence of functions \(\{(\varphi^{k},\mu^{k})\}\) satisfying \[\varphi^{k} \in L^{\infty}(0,T;V\cap L^{\infty}(\Omega)):\quad\sup_{t\in[0,T]}\|\varphi^{k}(t)\|_{\infty}\leq 1-\delta_{k}, \tag{3.6}\] \[\varphi^{k} \in L^{q}(0,T;W^{1,p}(\Omega)),\quad q=\frac{4p}{3(p-2)},\quad\forall\,p\in(2,\infty),\] (3.7) \[\partial_{t}\varphi^{k} \in L^{\infty}(0,T;V^{*})\cap L^{2}(0,T;H),\] (3.8) \[\mu^{k} \in H^{1}(0,T;H)\cap C^{0}([0,T];V)\cap L^{2}(0,T;W), \tag{3.9}\] where \(\delta_{k}\in(0,1)\) depends on \(k\). The solutions satisfy \[\partial_{t}\varphi^{k}+\nabla\varphi^{k}\cdot\mathbf{v}^{k}-\Delta\mu^{k}=0,\quad\mu^{k}=-\mathcal{K}\ast\varphi^{k}+F^{\prime}(\varphi^{k})\quad\text{ in }Q. \tag{3.10}\] In addition, \(\partial_{\mathbf{n}}\mu^{k}=0\) almost everywhere on \(\Sigma\) and \(\varphi^{k}(0)=\varphi_{0}^{k}\) almost everywhere in \(\Omega\). Notice that the differences compared to [26, Thm. 
A.1] are only related to the regularity (3.7), which comes directly from the fact that (see, e.g., [26, (A.27), (4.48)]) \[\|\nabla\varphi^{k}\|_{p}\leq C^{\frac{1}{p}}\left(1+\|\nabla\mu^{k}\|_{p}\right)\quad\forall p\geq 2, \tag{3.11}\] together with the interpolation inequality \[\|\nabla\mu^{k}\|_{L^{q}(0,T;\mathbf{L}^{p})}\leq C\|\nabla\mu^{k}\|_{L^{\infty}(0,T;\mathbf{H})}^{1-\frac{2}{q}}\|\mu^{k}\|_{L^{2}(0,T;W)}^{\frac{2}{q}},\] where \(q=\frac{4p}{3(p-2)}\), \(p\in(2,\infty)\), and \(C>0\) is independent of \(k\). Indeed, thanks to the recent result in [39] (see [39, Corollary 4.5, Remark 4.7]), the strict separation property for \(\varphi^{k}\) entails the existence of \(\delta_{k}>0\) such that \[\sup_{t\in[0,T]}\|\varphi^{k}(t)\|_{\infty}\leq 1-\delta_{k}. \tag{3.12}\] Now it is immediate to show (see also [26, (4.22), (4.24), (4.27)]) that \[\int_{Q}|\nabla\mu^{k}|^{2}+\int_{Q}|\nabla\varphi^{k}|^{2}+\int_{0}^{T}\|\partial_{t}\varphi^{k}\|_{V^{*}}^{2}\leq C(1+T)+C\int_{Q}|\mathbf{v}^{k}|^{2}\leq C, \tag{3.13}\] where \(C>0\) does not depend on \(k\), since \(\mathbf{v}^{k}\to\mathbf{v}\) strongly in \(L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\). Observe that the regularity of the approximated solutions \(\{(\varphi^{k},\mu^{k})\}\) in (3.6) allows us to compute the time and the spatial derivatives of the second equation in (3.10), which gives \[\partial_{t}\mu^{k}=-\mathcal{K}\ast\partial_{t}\varphi^{k}+F^{\prime\prime}(\varphi^{k})\partial_{t}\varphi^{k},\quad\nabla\mu^{k}=-\nabla\mathcal{K}\ast\varphi^{k}+F^{\prime\prime}(\varphi^{k})\nabla\varphi^{k}\quad\text{ in }Q. \tag{3.14}\] In addition, the map \(t\mapsto\|\nabla\mu^{k}(t)\|^{2}\) belongs to \(AC([0,T])\) and the chain rule \(\frac{1}{2}\frac{d}{dt}\|\nabla\mu^{k}\|^{2}=-\int_{\Omega}\partial_{t}\mu^{k}\,\Delta\mu^{k}\) holds almost everywhere in \((0,T)\). Thus, testing the first equation in (3.10) by \(\partial_{t}\mu^{k}\), integrating over \(\Omega\), and exploiting (3.14), we obtain \[\frac{1}{2}\frac{d}{dt}\|\nabla\mu^{k}\|^{2}+\int_{\Omega}F^{\prime\prime}(\varphi^{k})|\partial_{t}\varphi^{k}|^{2}\,=-\int_{\Omega}\nabla\varphi^{k}\cdot\mathbf{v}^{k}\,\partial_{t}\mu^{k}+\int_{\Omega}\mathcal{K}\ast\partial_{t}\varphi^{k}\,\partial_{t}\varphi^{k}. \tag{3.15}\] We rewrite the key term \(\int_{\Omega}\mathbf{v}^{k}\cdot\nabla\varphi^{k}\partial_{t}\mu^{k}\) as in the proof of [26, Thm. 4.1]. 
By using (3.14) and the properties of \(\mathbf{v}^{k}\) in \(C^{\infty}_{0}((0,T);\mathbf{C}^{\infty}_{0,\sigma}(\Omega))\), we observe that \[\int_{\Omega}\nabla\varphi^{k}\cdot\mathbf{v}^{k}\,\partial_{t}\mu^{k} = -\int_{\Omega}\left(\nabla\varphi^{k}\cdot\mathbf{v}^{k}\right) \mathcal{K}\ast\partial_{t}\varphi^{k}+\int_{\Omega}\left(\nabla\varphi^{k} \cdot\mathbf{v}^{k}\right)F^{\prime\prime}(\varphi^{k})\,\partial_{t}\varphi^{k} \tag{3.16}\] \[= -\int_{\Omega}\left(\nabla\varphi^{k}\cdot\mathbf{v}^{k}\right) \mathcal{K}\ast\partial_{t}\varphi^{k}+\int_{\Omega}\nabla\left(F^{\prime}( \varphi^{k})\right)\cdot\mathbf{v}^{k}\partial_{t}\varphi^{k}\] \[= -\int_{\Omega}\left(\nabla\varphi^{k}\cdot\mathbf{v}^{k}\right) \mathcal{K}\ast\partial_{t}\varphi^{k}+\int_{\Omega}\left(\nabla\mu^{k}\cdot \mathbf{v}^{k}\right)\partial_{t}\varphi^{k}\] \[-\int_{\Omega}\left(\left(\nabla\mathcal{K}\ast\varphi^{k} \right)\cdot\mathbf{v}^{k}\right)\partial_{t}\varphi^{k}\] \[= \int_{\Omega}\left(\nabla(\mathcal{K}\ast\partial_{t}\varphi^{k}) \cdot\mathbf{v}^{k}\right)\varphi^{k}\,+\int_{\Omega}\left(\nabla\mu^{k}\cdot\mathbf{v} ^{k}\right)\partial_{t}\varphi^{k}\] \[-\int_{\Omega}\left(\left(\nabla\mathcal{K}\ast\varphi^{k}\right) \cdot\mathbf{v}^{k}\right)\partial_{t}\varphi^{k}\,.\] By exploiting the uniform \(L^{\infty}\)-bound of \(\varphi^{k}\), we have, by standard Young's inequality for convolutions, \[\left|\int_{\Omega}\left(\left(\nabla\mathcal{K}\ast\varphi^{k} \right)\cdot\boldsymbol{v}^{k}\right)\partial_{t}\varphi^{k}\,\right|\leq\| \nabla\mathcal{K}\ast\varphi^{k}\|_{\infty}\|\boldsymbol{v}^{k}\|\|\partial_{t }\varphi^{k}\|\] \[\quad\leq\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\varphi^{k}\|_{\infty} \|\boldsymbol{v}^{k}\|\|\partial_{t}\varphi^{k}\|\leq\frac{\alpha}{8}\|\partial _{t}\varphi^{k}\|^{2}+C\|\boldsymbol{v}^{k}\|^{2}. \tag{3.17}\] Similarly, we also find \[\left|\int_{\Omega}\left(\nabla(\mathcal{K}\ast\partial_{t} \varphi^{k})\cdot\boldsymbol{v}^{k}\right)\varphi^{k}\,\right|\leq\|\nabla \mathcal{K}\ast\partial_{t}\varphi^{k}\|\|\boldsymbol{v}^{k}\|\|\varphi^{k}\|_ {\infty}\] \[\quad\leq\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\partial_{t}\varphi^{ k}\|\|\boldsymbol{v}^{k}\|\|\varphi^{k}\|_{\infty}\leq\frac{\alpha}{8}\| \partial_{t}\varphi^{k}\|^{2}+C\|\boldsymbol{v}^{k}\|^{2}. \tag{3.18}\] To bound the third term on the right-hand side in (3.16), we need a preliminary estimate of the \(V\)-norm of \(\nabla\mu^{k}\). To this end, let us first observe from (3.10) that \(\mu^{k}-(\mu^{k})_{\Omega}=\mathcal{N}(\partial_{t}\varphi^{k}+\nabla\varphi^ {k}\cdot\boldsymbol{v}^{k})\) noticing that \((\partial_{t}\varphi^{k}+\nabla\varphi^{k}\cdot\boldsymbol{v}^{k})_{\Omega}=0\). Then, we find, by elliptic regularity, \[\|\nabla\mu^{k}\|_{V}\leq C\left(\|\partial_{t}\varphi^{k}\|+\|\nabla\varphi^ {k}\cdot\boldsymbol{v}^{k}\|\right). \tag{3.19}\] In order to estimate the second term on the right-hand side in (3.19), we deduce from the second in (3.14) that \[\nabla\varphi^{k}\cdot\boldsymbol{v}^{k}=\frac{1}{F^{\prime\prime}(\varphi^{ k})}\left(\nabla\mu^{k}\cdot\boldsymbol{v}^{k}+(\nabla\mathcal{K}\ast\varphi^{k}) \cdot\boldsymbol{v}^{k}\right)\quad\text{in }Q. \tag{3.20}\] By the strict convexity of \(F\), we notice that \(F^{\prime\prime}(s)^{-1}\leq\alpha^{-1}\) for any \(s\in(-1,1)\). 
Thus, using Sobolev-Gagliardo-Nirenberg's inequality and the uniform \(L^{\infty}\)-bound of \(\varphi^{k}\), we obtain \[\|\nabla\varphi^{k}\cdot\boldsymbol{v}^{k}\| \leq\frac{2}{\alpha}\left(\|\nabla\mu^{k}\cdot\boldsymbol{v}^{k}\|+\|\big{(}\nabla\mathcal{K}\ast\varphi^{k}\big{)}\cdot\boldsymbol{v}^{k}\|\right)\] \[\leq C\|\nabla\mu^{k}\|_{3}\|\boldsymbol{v}^{k}\|_{6}+C\|\nabla\mathcal{K}\ast\varphi^{k}\|_{\infty}\|\boldsymbol{v}^{k}\|\] \[\leq C\|\nabla\mu^{k}\|^{\frac{1}{2}}\|\nabla\mu^{k}\|_{V}^{\frac{1}{2}}\|\boldsymbol{v}^{k}\|_{6}+C\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\varphi^{k}\|_{\infty}\|\boldsymbol{v}^{k}\|\] \[\leq C\|\nabla\mu^{k}\|^{\frac{1}{2}}\|\nabla\mu^{k}\|_{V}^{\frac{1}{2}}\|\boldsymbol{v}^{k}\|_{6}+C\|\boldsymbol{v}^{k}\|. \tag{3.21}\] Then, by exploiting (3.19) and (3.21), we infer that \[\|\nabla\mu^{k}\|_{V}\leq C\left(\|\partial_{t}\varphi^{k}\|+\|\nabla\mu^{k}\|\|\boldsymbol{v}^{k}\|_{6}^{2}+\|\boldsymbol{v}^{k}\|\right). \tag{3.22}\] Now, again by Sobolev-Gagliardo-Nirenberg's inequality and (3.22), we find \[\left|\int_{\Omega}\left(\nabla\mu^{k}\cdot\boldsymbol{v}^{k}\right)\partial_{t}\varphi^{k}\,\right| \leq\|\nabla\mu^{k}\|^{\frac{1}{2}}\|\nabla\mu^{k}\|_{V}^{\frac{1}{2}}\|\boldsymbol{v}^{k}\|_{6}\|\partial_{t}\varphi^{k}\|\] \[\leq\|\nabla\mu^{k}\|^{\frac{1}{2}}\|\partial_{t}\varphi^{k}\|^{\frac{3}{2}}\|\boldsymbol{v}^{k}\|_{6}+\|\nabla\mu^{k}\|\|\partial_{t}\varphi^{k}\|\|\boldsymbol{v}^{k}\|_{6}^{2}\] \[\quad+\|\nabla\mu^{k}\|^{\frac{1}{2}}\|\boldsymbol{v}^{k}\|^{\frac{1}{2}}\|\boldsymbol{v}^{k}\|_{6}\|\partial_{t}\varphi^{k}\|\] \[\leq\frac{\alpha}{8}\|\partial_{t}\varphi^{k}\|^{2}+C\|\nabla\mu^{k}\|^{2}\|\boldsymbol{v}^{k}\|_{6}^{4}+C\|\boldsymbol{v}^{k}\|^{2}. \tag{3.23}\] Concerning the last term in (3.15), we get \[\int_{\Omega}\mathcal{K}*\partial_{t}\varphi^{k}\,\partial_{t}\varphi^{k}\leq\|\nabla\mathcal{K}*\partial_{t}\varphi^{k}\|\|\nabla\mathcal{N}\partial_{t}\varphi^{k}\|\leq\frac{\alpha}{8}\|\partial_{t}\varphi^{k}\|^{2}+C\left(\|\boldsymbol{v}^{k}\|^{2}+\|\nabla\mu^{k}\|^{2}\right). \tag{3.24}\] Indeed, from the first in (3.10), it holds that \[\|\nabla\mathcal{N}\partial_{t}\varphi^{k}\|=\|\partial_{t}\varphi^{k}\|_{*}=\|\partial_{t}\varphi^{k}\|_{V^{*}}\leq C\left(\|\boldsymbol{v}^{k}\|+\|\nabla\mu^{k}\|\right). \tag{3.25}\] Inserting the estimates (3.17), (3.18), (3.23), and (3.24) in (3.15), we end up with \[\frac{1}{2}\frac{d}{dt}\|\nabla\mu^{k}\|^{2}+\frac{\alpha}{2}\|\partial_{t}\varphi^{k}\|^{2}\leq C\left(1+\|\boldsymbol{v}^{k}\|_{6}^{4}\right)\|\nabla\mu^{k}\|^{2}+C\|\boldsymbol{v}^{k}\|^{2}. \tag{3.26}\] Arguing as in [26, (4.49)], we then obtain \[\|\nabla\mu^{k}(0)\|\to\|-\nabla\mathcal{K}*\varphi_{0}+F^{\prime\prime}(\varphi_{0})\nabla\varphi_{0}\|\quad\text{as }k\to\infty. \tag{3.27}\] Therefore, since we know that \(\boldsymbol{v}^{k}\to\boldsymbol{v}\) strongly in \(L^{4}(0,T;\mathbf{L}_{\sigma}^{6})\) and thus it is bounded in the same space, Gronwall's lemma then entails \[\|\nabla\mu^{k}\|_{L^{\infty}(0,T;\boldsymbol{H})}+\|\partial_{t}\varphi^{k}\|_{L^{2}(0,T;H)}\leq C, \tag{3.28}\] from which, recalling the standard bound \(\|\mu^{k}\|_{V}\leq C(1+\|\nabla\mu^{k}\|)\) and (3.22), we also deduce \[\|\mu^{k}\|_{L^{\infty}(0,T;V)\cap L^{2}(0,T;W)}\leq C, \tag{3.29}\] uniformly in \(k\). 
Concerning the concentration \(\varphi^{k}\), we deduce from (3.11), (3.29) and interpolation, that \[\|\varphi^{k}\|_{L^{\infty}(Q)}\leq 1,\quad\|\varphi^{k}\|_{L^{\infty}(0,T;V) \cap L^{q}(0,T;W^{1,p}(\Omega))}\leq C,\quad q=\frac{4p}{3(p-2)}. \tag{3.30}\] In a similar fashion, by comparison in (3.10), we are led to \[\|F^{\prime}(\varphi^{k})\|_{L^{\infty}(0,T;V)\cap L^{q}(0,T;W^{1,p}(\Omega) )}\leq C, \tag{3.31}\] with the same \(q\) as above. Furthermore, recalling (3.25), we obtain from \(\boldsymbol{v}\in L^{4}(0,T;\mathbf{L}_{\sigma}^{6}(\Omega))\) and (3.29) that \[\|\partial_{t}\varphi^{k}\|_{L^{4}(0,T;V^{*})}\leq C. \tag{3.32}\] Exploiting the above uniform estimates, by a standard compactness argument, we pass to the limit in a suitable weak form of (3.10) as \(k\to\infty\) obtaining that the limit function \(\varphi\) is a strong solution to (1.18)-(1.21). In particular, (1.18) and (1.19) hold almost everywhere in \(Q\), and (1.20) holds almost everywhere on \(\Sigma\). Finally, if we also assume \(\boldsymbol{v}\in L^{\infty}(0,T;\mathbf{H}_{\sigma})\) (cf. (3.25)) it is easily seen that \(\partial_{t}\varphi\in L^{\infty}(0,T;V^{*})\). This concludes the proof of part \((i)\). Concerning part \((ii)\), as already noticed for the approximating sequence, it is a consequence of [39, Corollary 4.5, Remark 4.7]. Indeed, if the initial datum is strictly separated from pure phases, then the strict separation property for \(\varphi\) holds as well, thanks to the regularity of the solution \(\varphi\) and the divergence-free property of the advective vector field \(\boldsymbol{v}\), so that there exists \(\delta>0\), depending on \(\varphi_{0}\), such that \[\sup_{t\in[0,T]}\|\varphi(t)\|_{\infty}\leq 1-\delta. \tag{3.33}\] Arguing by means of the difference quotients (see [26, Thm. 4.1, part (ii)]), it is then easy to show that \(\partial_{t}\mu\in L^{2}(0,T;H)\). In turn, thanks to \(\mu\in L^{2}(0,T;W)\), we also obtain \(\mu\in C^{0}([0,T];V)\) via compact embeddings. #### 3.1.4 Extra regularity results for separated initial data: part (iii) We perform the proof in the same spirit of [18, Lemma 2]. Let \(\kappa\in[-1+\delta,1-\delta]\) and \(\eta=\eta(x,t)\in[0,1]\) be a continuous piecewise-smooth function which is supported on the space-time cylinders \(Q_{t_{0},t_{0}+\tau}(\rho):=B_{\rho}(x_{0})\times(t_{0},t_{0}+\tau)\), where \(B_{\rho}(x_{0})\) denotes the ball centered at \(x_{0}\) of radius \(\rho>0\), and \(\tau>0\) is given. According to the type of Holder regularity we consider (interior or boundary regularity), we set \(x_{0}\in\Omega\) or \(x_{0}\in\Gamma\) and then by standard compactness arguments we gain the validity for any ball of \(\overline{\Omega}\). We thus test (2.7) by \(\eta^{2}\varphi_{\kappa}^{+}\), where \(\varphi_{\kappa}^{+}:=\max\{0,\varphi-\kappa\}\), integrate the resulting identity over \(Q_{t_{0},t}:=\Omega\times(t_{0},t)\), where \(0\leq t_{0}<t<t_{0}+\tau\leq T\), to infer that, for any \(s\leq t\), \[\frac{1}{2}\|\eta\varphi_{\kappa}^{+}(s)\|^{2}+\int_{Q_{t_{0},s}} F^{\prime\prime}(\varphi)\nabla\varphi_{\kappa}^{+}\cdot\nabla(\eta^{2} \varphi_{\kappa}^{+})=\frac{1}{2}\|\eta\varphi_{\kappa}^{+}(t_{0})\|^{2}+ \int_{Q_{t_{0},s}}(\nabla\mathcal{K}*\varphi)\cdot\nabla(\eta^{2}\varphi_{ \kappa}^{+})\] \[\quad+\int_{Q_{t_{0},s}}\varphi_{\kappa}^{+}\boldsymbol{v}\cdot \nabla(\varphi_{\kappa}^{+}\eta^{2})+\int_{Q_{t_{0},s}}(\varphi_{\kappa}^{+}) ^{2}\eta\partial_{t}\eta. 
\tag{3.34}\] In the above identity, we exploited some properties of the positive part function. For instance, we used that \[\int_{Q_{t_{0},s}}F^{\prime\prime}(\varphi)\nabla\varphi\cdot \nabla(\eta^{2}\varphi_{\kappa}^{+}) =\int_{\{(x,r)\in Q_{t_{0},s}:\ \varphi(x,r)\geq\kappa\}}F^{\prime\prime}(\varphi)\nabla \varphi\cdot\nabla(\eta^{2}(\varphi-\kappa))\] \[=\int_{Q_{t_{0},s}}F^{\prime\prime}(\varphi)\nabla\varphi_{ \kappa}^{+}\cdot\nabla(\eta^{2}\varphi_{\kappa}^{+}),\] and that \[\int_{Q_{t_{0},s}}\partial_{t}\varphi\,\varphi_{\kappa}^{+}\eta^ {2} =\int_{\{(x,r)\in Q_{t_{0},s}:\ \varphi(x,r)\geq\kappa\}}\partial_{t} \varphi(\varphi-\kappa)\eta^{2}=\int_{Q_{t_{0},s}}\frac{1}{2}\partial_{t}( \varphi_{\kappa}^{+})^{2}\eta^{2}\] \[=\frac{1}{2}\|\eta\varphi_{\kappa}^{+}(s)\|^{2}-\frac{1}{2}\|\eta \varphi_{\kappa}^{+}(t_{0})\|^{2}-\int_{Q_{t_{0},s}}(\varphi_{\kappa}^{+})^{2 }\eta\partial_{t}\eta.\] Now, we point out the identity \[\nabla\varphi_{\kappa}^{+}\cdot\nabla(\eta^{2}\varphi_{\kappa}^{+})=|\nabla (\varphi_{\kappa}^{+}\eta)|^{2}-|\nabla\eta\varphi_{\kappa}^{+}|^{2},\] whence (3.34) entails, using again **H2**, that \[\frac{1}{2}\sup_{s\in[t_{0},t]}\|\eta\varphi_{\kappa}^{+}(s)\|^{2}+ \alpha\int_{Q_{t_{0},t}}|\nabla(\varphi_{\kappa}^{+}\eta)|^{2}\\ \leq\frac{1}{2}\|\eta\varphi_{\kappa}^{+}(t_{0})\|^{2}+\int_{Q_{t _{0},t}}\big{|}(\nabla\mathcal{K}*\varphi)\cdot\nabla(\eta^{2}\varphi_{\kappa}^ {+})\big{|}\\ +\sup_{s\in[t_{0},t]}\left|\int_{Q_{t_{0},s}}\varphi_{\kappa}^{+} \boldsymbol{v}\cdot\nabla(\varphi_{\kappa}^{+}\eta^{2})\right|+\int_{Q_{t_{0},t }}(\varphi_{\kappa}^{+})^{2}|\eta\partial_{t}\eta|+\int_{Q_{t_{0},t}}F^{\prime \prime}(\varphi)|\nabla\eta\varphi_{\kappa}^{+}|^{2}. \tag{3.35}\] Using the separation property (3.33), it holds that \(\|F^{\prime\prime}(\varphi)\|_{L^{\infty}(Q)}\leq C\). 
Thus, \[\int_{Q_{t_{0},t}}F^{\prime\prime}(\varphi)|\nabla\eta\varphi_{\kappa}^{+}|^{2}\leq C\int_{Q_{t_{0},t}}|\nabla\eta|^{2}(\varphi_{\kappa}^{+})^{2}.\] Moreover, we readily deduce that \[\int_{Q_{t_{0},t}}\big{|}(\nabla\mathcal{K}*\varphi)\cdot\nabla(\eta^{2}\varphi_{\kappa}^{+})\big{|}\\ \leq\|\nabla\mathcal{K}*\varphi\|_{L^{\infty}(Q)}\left(\int_{Q_{t_{0},t}}|\nabla\eta|\varphi_{\kappa}^{+}\eta+\int_{Q_{t_{0},t}}\eta|\nabla(\varphi_{\kappa}^{+}\eta)|\right)\\ \leq\frac{\alpha}{4}\int_{Q_{t_{0},t}}|\nabla(\eta\varphi_{\kappa}^{+})|^{2}+C\int_{Q_{t_{0},t}}\eta^{2}+C\int_{Q_{t_{0},t}}|\nabla\eta|^{2}(\varphi_{\kappa}^{+})^{2}.\] In conclusion, we observe that \[\int_{Q_{t_{0},s}}\varphi_{\kappa}^{+}\boldsymbol{v}\cdot\nabla(\varphi_{\kappa}^{+}\eta^{2})=-\int_{Q_{t_{0},s}}\nabla\Big{(}\frac{(\varphi_{\kappa}^{+})^{2}}{2}\Big{)}\cdot\boldsymbol{v}\eta^{2}=\int_{Q_{t_{0},s}}\eta(\varphi_{\kappa}^{+})^{2}\boldsymbol{v}\cdot\nabla\eta,\] and, by the assumption \(\|\boldsymbol{v}\|_{L^{4}(0,T;\mathbf{L}^{6})}\leq C\) and standard inequalities, we get \[\sup_{s\in[t_{0},t]}\left|\int_{Q_{t_{0},s}}\varphi_{\kappa}^{+}\boldsymbol{v}\cdot\nabla(\varphi_{\kappa}^{+}\eta^{2})\right|\\ \leq\int_{Q_{t_{0},t}}\big{|}\eta(\varphi_{\kappa}^{+})^{2}\boldsymbol{v}\cdot\nabla\eta\big{|}\\ \leq\|\varphi_{\kappa}^{+}\|_{L^{4}(t_{0},t;L^{3}(\Omega))}\|\varphi_{\kappa}^{+}\nabla\eta\|_{L^{2}(Q_{t_{0},t})}\|\boldsymbol{v}\|_{L^{4}(0,T;\mathbf{L}^{6})}\\ \leq C\left[\int_{t_{0}}^{t}\|\eta\varphi_{\kappa}^{+}\|^{2}\left(\|\eta\varphi_{\kappa}^{+}\|^{2}+\|\nabla(\eta\varphi_{\kappa}^{+})\|^{2}\right)\right]^{\frac{1}{4}}\|\varphi_{\kappa}^{+}\nabla\eta\|_{L^{2}(Q_{t_{0},t})}\\ \leq\frac{\alpha}{4}\int_{Q_{t_{0},t}}|\nabla(\eta\varphi_{\kappa}^{+})|^{2}+\frac{1}{4}\sup_{s\in[t_{0},t]}\|\eta\varphi_{\kappa}^{+}(s)\|^{2}+C\int_{Q_{t_{0},t}}|\nabla\eta|^{2}(\varphi_{\kappa}^{+})^{2}.\] Summarizing, putting everything together in (3.35), we deduce \[\frac{1}{4}\sup_{s\in[t_{0},t]}\|\eta\varphi_{\kappa}^{+}(s)\|^{2}+\frac{\alpha}{2}\int_{Q_{t_{0},t}}|\nabla(\eta\varphi_{\kappa}^{+})|^{2}\\ \leq\frac{1}{2}\|\eta\varphi_{\kappa}^{+}(t_{0})\|^{2}+C\int_{Q_{t_{0},t}}|\nabla\eta|^{2}(\varphi_{\kappa}^{+})^{2}+C\int_{Q_{t_{0},t}}\eta^{2}+\int_{Q_{t_{0},t}}(\varphi_{\kappa}^{+})^{2}|\eta\partial_{t}\eta|. \tag{3.36}\] Arguing in a similar way, inequality (3.36) also holds with \(\varphi\) replaced by \(-\varphi\), leading us to consider \(\varphi_{\kappa}^{-}=(-\varphi-\kappa)^{+}\). In particular, for any fixed \(\varepsilon>0\), such inequalities imply that \(\varphi\) is an element of \(\mathfrak{B}_{2}(\overline{Q},1-\delta,\gamma,\omega,\varepsilon,\chi)\) in the sense of [36, Ch. II, Sec. 7], for some \(\gamma,\omega,\chi>0\) possibly depending on \(\varepsilon\) (cf., in particular, the inequalities in [36, Sec. V, (1.12)-(1.13)]). Therefore, on account of [36, Ch. V, Thm 1.1], the Holder regularity (2.16) holds. Then, by the regularity of \(F\) and by (3.33), we immediately deduce the same result (2.17) for the chemical potential \(\mu\), concluding the proof.

#### 3.1.5 \(L_{t}^{4}W_{x}^{1,6}\) regularity on \(\varphi\) for separated initial data: part (iv)

Let us now prove point \((iv)\) of Theorem 2.2. The proof can be carried out by carefully combining a version of a maximal regularity theorem in [40] with a bootstrap argument, based upon the Holder regularity in (2.16)-(2.17).
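A step that will be used repeatedly in the bootstrap below is the transfer of spatial summability from \(\mu\) to \(\varphi\). Differentiating the relation defining the chemical potential (cf. (1.19) and the second equation in (3.14)) and using **H2**, Young's inequality for convolutions and the bound \(\|\varphi\|_{\infty}\leq 1\), one obtains, for every \(p\in[2,\infty)\) and almost every \(t\in(0,T)\), \[\nabla\varphi=\frac{\nabla\mu+\nabla\mathcal{K}\ast\varphi}{F^{\prime\prime}(\varphi)},\qquad\|\nabla\varphi(t)\|_{p}\leq\frac{1}{\alpha}\left(\|\nabla\mu(t)\|_{p}+\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\varphi(t)\|_{\infty}|\Omega|^{\frac{1}{p}}\right)\leq C\left(1+\|\nabla\mu(t)\|_{p}\right),\] so that any bound on \(\mu\) in \(L^{q}(0,T;W^{1,p}(\Omega))\) immediately yields the corresponding bound on \(\varphi\); this is, in essence, the role played by the relation (3.38) invoked below.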
Since equations (1.18)-(1.19) are satisfied by our unique solution \((\varphi,\mu)\) almost everywhere in \(Q\), and since \(\partial_{t}\mu\in L^{2}(Q)\) as a consequence of point \((ii)\), we may rewrite (1.18)-(1.21) in the form \[\partial_{t}\mu-F^{\prime\prime}(\varphi)\Delta\mu=-\mathcal{K}*\partial_{t}\varphi-F^{\prime\prime}(\varphi)\nabla\varphi\cdot\boldsymbol{v}=:f\quad\text{in }Q,\] \[\partial_{\mathbf{n}}\mu=0\quad\text{on }\Sigma,\qquad\mu(0)=\mu_{0}\quad\text{in }\Omega.\] The bootstrap argument is run along two sequences of integrability exponents \(\{q_{n}\}\) and \(\{\widetilde{q}_{n}\}\) (cf. (3.37)-(3.42)); the usefulness of the auxiliary sequence \(\{\widetilde{q}_{n}\}\) will be clarified below. Letting \(n\to\infty\), we find that \[q_{n}\to\frac{6}{1-\vartheta}>6\quad\text{and}\quad\widetilde{q}_{n}\to\frac{6}{2-\vartheta}>3.\] Besides, since \(\vartheta\in(0,1)\), it holds that \(4<q_{n}\leq\frac{6}{1-\vartheta}\) for every \(n\geq 1\) and the sequence is monotone increasing: whence it possesses a limit that we claim to be finite. If that is not the case, meaning that \(q_{n}\to+\infty\) as \(n\to\infty\), we would get a contradiction since by construction \(\widetilde{q}_{n}\to 6\) and \(q_{n}\to\frac{12}{1-\vartheta}<\infty\). Therefore, the finite value \(\overline{q}:=\frac{6}{1-\vartheta}\) is the limit of the sequence and we also notice that \(\widetilde{q}_{n}\to\frac{6}{2-\vartheta}\) as \(n\to\infty\). From (3.39)-(3.41), along with the regularity of \(\mu\) in (3.37), we thus get \[\|\mu\|_{L^{4}(0,T;W^{1,q_{1}}(\Omega))}^{4}\leq C\|\mu\|_{L^{\infty}(0,T;W^{\vartheta,s}(\Omega))}^{2}\|\mu\|_{L^{2}(0,T;H^{2}(\Omega))}^{2}\leq C, \tag{3.43}\] so that, from (3.38), we deduce \[\|\varphi\|_{L^{4}(0,T;W^{1,q_{1}}(\Omega))}\leq C. \tag{3.44}\] Following the same arguments as in the proof of [39, Lemma 5.4], since \(\varphi\) is now bounded in \(L^{4}(0,T;W^{1,4}(\Omega))\) (as \(q_{1}>4\)), we immediately infer (2.18) without extra assumptions on the initial data \(\varphi_{0},\mu_{0}\) with respect to point \((iii)\). Furthermore, if \(\mu_{0}\in\mathcal{B}^{1}_{3,2}(\Omega)\), we can apply a regularity result presented in [40, Sec.4]. Namely, recalling (2.16), we have, with the notation of the quoted paper, \[a_{kl}:=\delta_{kl}F^{\prime\prime}(\varphi)\in C^{\beta,\frac{\beta}{2}}(\overline{Q}),\quad a_{k}=0,\quad b_{0}=0,\quad b_{k}=n_{k},\quad k,l=1,2,3,\] \[f:=-\mathcal{K}*\partial_{t}\varphi-F^{\prime\prime}(\varphi)\nabla\varphi\cdot\mathbf{v},\] where \(n_{k}\) is the \(k\)-th component of the outward unit normal \(\mathbf{n}\), which is smooth by assumption, and \(\delta_{kl}\) are the Kronecker delta symbols.
Clearly, \(\{a_{kl}\}_{kl}\) is uniformly elliptic since \(F^{\prime\prime}\geq\alpha>0\) by **H2**. Observe now that, for any \(p\leq 6\), we have \[\|\mathcal{K}*\partial_{t}\varphi\|_{L^{2}(0,T;L^{p}(\Omega))}\leq C\|\mathcal{ K}*\partial_{t}\varphi\|_{L^{2}(0,T;V)}\leq C\|\partial_{t}\varphi\|_{L^{2}(Q)} \leq C, \tag{3.45}\] by previous bounds. Furthermore, we have from the separation property (2.15) that \(\|F^{{}^{\prime\prime}}(\varphi)\|_{\infty}\leq C\), and so for any \(n\geq 2\) \[\|F^{\prime\prime}(\varphi)\nabla\varphi\cdot\mathbf{v}\|_{L^{2}(0,T;L^{\widetilde {n}}(\Omega))}\leq C\|\nabla\varphi\|_{L^{4}(0,T;\mathbf{L}^{q_{n-1}}(\Omega))} \|\mathbf{v}\|_{L^{4}(0,T;\mathbf{L}^{6}_{\sigma})}, \tag{3.46}\] where \(\widetilde{q}_{n}\) is the \(n\)-th element of the sequence introduced above. Consider \(n=2\): we have \[\|F^{\prime\prime}(\varphi)\nabla\varphi\cdot\mathbf{v}\|_{L^{2}(0,T;L^{ \widetilde{q}_{2}}(\Omega))}\leq C\|\nabla\varphi\|_{L^{4}(0,T;\mathbf{L}^{q_ {1}}(\Omega))}\|\mathbf{v}\|_{L^{4}(0,T;\mathbf{L}^{6}_{\sigma})}\leq C, \tag{3.47}\] and thus, together with (3.45), it yields that \[\|f\|_{L^{2}(0,T;L^{\widetilde{q}_{2}}(\Omega))}\leq C.\] Then, recalling that \(\mu_{0}\in\mathcal{B}^{1}_{3,2}(\Omega)\hookrightarrow\mathcal{B}^{1}_{q,2}(\Omega)\), for any \(q\leq 3\) (as long as \(\widetilde{q}_{n}<3\), otherwise we would be already done), applying the same result as above (cf. [40, Sec.4]), we get \[\|\mu\|_{H^{1}(0,T;L^{\widetilde{q}_{2}}(\Omega))\cap L^{2}(0,T;W^{2}, \widetilde{q}_{2}(\Omega))}\leq C. \tag{3.48}\] Coming back to (3.41), holding with \(\gamma=\widetilde{q}_{2}\in[2,\frac{6}{\vartheta})\), we now infer \[\|\mu\|_{L^{4}(0,T;W^{1,q_{2}}(\Omega))}^{4}\leq C\|\mu\|_{L^{\infty}(0,T;W^{ \vartheta,s}(\Omega))}^{2}\|\mu\|_{L^{2}(0,T;W^{2,\widetilde{q}_{2}}(\Omega)) }^{2}\leq C, \tag{3.49}\] which then entails, by (3.38), that \[\|\varphi\|_{L^{4}(0,T;W^{1,q_{2}}(\Omega))}\leq C. \tag{3.50}\] Then, we are essentially ready to iterate. Namely, by the same arguments as above, we infer that \[\|f\|_{L^{2}(0,T;L^{\widetilde{q}_{3}}(\Omega))}\leq C,\] so that by [40, Sec.4], (3.38) and (3.41) (as long as \(\widetilde{q}_{3}<3\), otherwise we would be already done), we get \[\|\mu\|_{H^{1}(0,T;L^{\widetilde{q}_{3}}(\Omega))\cap L^{2}(0,T;W^{2, \widetilde{q}_{3}}(\Omega))}+\|\varphi\|_{L^{4}(0,T;W^{1,q_{3}}(\Omega))} \leq C. \tag{3.51}\] Finally, by extending this bootstrap argument, at a general step \(n\) for which \(q_{n}<6\) and \(\widetilde{q}_{n}<3\), we obtain that \[\|\mu\|_{H^{1}(0,T;L^{\widetilde{q}_{n}}(\Omega))\cap L^{2}(0,T;W^{2, \widetilde{q}_{n}}(\Omega))}+\|\varphi\|_{L^{4}(0,T;W^{1,q_{n}}(\Omega))}\leq C. \tag{3.52}\] According to (3.42), we have \(q_{n}\to\frac{6}{1-\vartheta}>6\) and \(\widetilde{q}_{n}\to\frac{6}{2-\vartheta}>3\), so that there exists \(\overline{n}>0\) such that \[\lfloor\mathpzc{q}_{\overline{n}}\rfloor=6,\quad\lfloor\widetilde{\mathpzc{q} }_{\overline{n}}\rfloor=3,\] where \(\lfloor\cdot\rfloor\) denotes the floor function. Therefore, we can iterate the arguments leading to (3.52) starting from step \(\overline{n}-1\), with the above two quantities in the resulting summability exponents, recall that \(\mu_{0}\in\mathcal{B}^{1}_{3,2}(\Omega)\), and thus prove (2.20). This concludes the proof of the theorem since by comparison we immediately deduce the desired regularity on \(\partial_{t}\varphi\) as well. 
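It may be worth isolating the mechanism that converts the \(L^{2}\)-in-time maximal regularity into higher time-integrability in estimates such as (3.43) and (3.49) (and again in part \((v)\) below). Assuming an interpolation inequality of the form \(\|\mu\|_{W^{1,q}}^{2}\leq C\|\mu\|_{W^{\vartheta,s}}\|\mu\|_{W^{2,\widetilde{q}}}\), as used there, one simply squares it and integrates in time: \[\int_{0}^{T}\|\mu\|_{W^{1,q}}^{4}\,dt\leq C\,\|\mu\|_{L^{\infty}(0,T;W^{\vartheta,s}(\Omega))}^{2}\int_{0}^{T}\|\mu\|_{W^{2,\widetilde{q}}}^{2}\,dt,\] so that \(\mu\in L^{\infty}(0,T;W^{\vartheta,s}(\Omega))\cap L^{2}(0,T;W^{2,\widetilde{q}}(\Omega))\) yields \(\mu\in L^{4}(0,T;W^{1,q}(\Omega))\), and the corresponding bound on \(\varphi\) then follows from (3.38).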
#### 3.1.6 \(L^{4}_{t}H^{2}_{x}\cap L^{3}_{t}W^{1,\infty}_{x}\) regularity on \(\varphi\) for separated initial data: part (v)

We are left to prove point \((v)\) of Theorem 2.2. Recall that we are additionally assuming \(\boldsymbol{v}\in L^{4}(0,T;\boldsymbol{L}^{\infty})\) as well as \(\mu_{0}\in\mathcal{B}^{1}_{6,2}(\Omega)\). Then, by exploiting the results of the previous section, the regularity in (2.20) is fulfilled and we easily reach \[\|F^{\prime\prime}(\varphi)\nabla\varphi\cdot\boldsymbol{v}\|_{L^{2}(0,T;L^{6}(\Omega))}\leq C\|\nabla\varphi\|_{L^{4}(0,T;\boldsymbol{L}^{6}(\Omega))}\|\boldsymbol{v}\|_{L^{4}(0,T;\boldsymbol{L}^{\infty}_{\sigma})}\leq C.\] Going back to (3.45), with the same notation as in the previous section, we infer that \[f=-\mathcal{K}\ast\partial_{t}\varphi-F^{\prime\prime}(\varphi)\nabla\varphi\cdot\boldsymbol{v}\in L^{2}(0,T;L^{6}(\Omega)),\] so that, owing to the same result as above ([40, Sec.4]), we immediately infer that \[\|\mu\|_{H^{1}(0,T;L^{6}(\Omega))\cap L^{2}(0,T;W^{2,6}(\Omega))}\leq C. \tag{3.53}\] We now recall (see, e.g., [5]) the inequality \[\|\mu\|_{W^{1,\infty}}^{3}\leq C\|\mu\|_{W^{\vartheta,s}}\|\mu\|_{W^{2,6}}^{2}, \tag{3.54}\] where, as in the previous section, \(\vartheta\in(0,\beta)\) with \(\beta\) being the Holder exponent in (2.17), and \(s=\frac{3}{\vartheta}\). This entails, using (3.37) and (3.53), that \[\|\mu\|_{L^{3}(0,T;W^{1,\infty}(\Omega))}\leq C,\] so that, letting \(p\to\infty\) in (3.38), we also obtain \[\|\varphi\|_{L^{3}(0,T;W^{1,\infty}(\Omega))}\leq C.\] Then, by comparison, we immediately deduce the required regularity on \(\partial_{t}\varphi\) as well. Next, assume \(\boldsymbol{v}\in L^{\infty}(0,T;\boldsymbol{L}^{4}_{\sigma})\cap L^{4}(0,T;\boldsymbol{L}^{6}_{\sigma})\) and \(\mu_{0}\in\mathcal{B}^{\frac{3}{2}}_{2,4}(\Omega)\). In this case we have \[\|F^{\prime\prime}(\varphi)\nabla\varphi\cdot\boldsymbol{v}\|_{L^{4}(0,T;H)}\leq C\|\nabla\varphi\|_{L^{4}(0,T;\boldsymbol{L}^{4}(\Omega))}\|\boldsymbol{v}\|_{L^{\infty}(0,T;\boldsymbol{L}^{4}(\Omega))}\leq C,\] again thanks to (2.20). Furthermore, being \(\mathcal{K}\) symmetric, for any \(\psi\in V\), \[|\langle\mathcal{K}*\partial_{t}\varphi,\psi\rangle|=\Big{|}\int_{\Omega}\mathcal{K}*\partial_{t}\varphi\,\psi\Big{|}=\Big{|}\int_{\Omega}\partial_{t}\varphi\,\mathcal{K}*\psi\Big{|}\leq C\|\partial_{t}\varphi\|_{*}\|\mathcal{K}*\psi\|_{V}\leq C\|\partial_{t}\varphi\|_{*}\|\psi\|_{V},\] entailing that \[\|\mathcal{K}*\partial_{t}\varphi\|_{L^{\infty}(0,T;V^{*})}\leq C\|\partial_{t}\varphi\|_{L^{\infty}(0,T;V^{*})}.\] Therefore, by standard interpolation, recalling the regularity of \(\mathcal{K}\) and Young's inequality for convolutions, \[\|\mathcal{K}*\partial_{t}\varphi\|_{L^{4}(0,T;H)}\leq C\|\mathcal{K}*\partial_{t}\varphi\|_{L^{\infty}(0,T;V^{*})}^{\frac{1}{2}}\|\mathcal{K}*\partial_{t}\varphi\|_{L^{2}(0,T;V)}^{\frac{1}{2}}\] \[\leq C\|\partial_{t}\varphi\|_{L^{\infty}(0,T;V^{*})}^{\frac{1}{2}}\|\partial_{t}\varphi\|_{L^{2}(0,T;H)}^{\frac{1}{2}}\leq C,\] where the last constant \(C>0\) appears exploiting the regularity given in part (i), since \(\boldsymbol{v}\in L^{\infty}(0,T;\boldsymbol{L}^{4}_{\sigma})\hookrightarrow L^{\infty}(0,T;\boldsymbol{H}_{\sigma})\).
We thus have \[f=-\mathcal{K}*\partial_{t}\varphi-F^{\prime\prime}(\varphi)\nabla\varphi\cdot \boldsymbol{v}\in L^{4}(0,T;H),\] so that, by the result in [40, Sec.4] as above, we immediately infer \[\|\mu\|_{W^{1,4}(0,T;H)\cap L^{4}(0,T;H^{2}(\Omega))}\leq C. \tag{3.55}\] Now it holds (see, e.g., [5]) that \[\|\mu\|_{W^{1,4}}^{8}\leq C\|\mu\|_{W^{\vartheta,s}}^{4}\|\mu\|_{H^{2}}^{4}, \tag{3.56}\] where \(\vartheta\in(0,\beta)\) and \(s=\frac{3}{\vartheta}\), so that by (3.37) and (3.55), \(\mu\in L^{8}(0,T;W^{1,4}(\Omega))\) and thus, by (3.38), also \(\varphi\in L^{8}(0,T;W^{1,4}(\Omega))\). This allows to argue as in the proof [39, Lemma 5.4], to deduce that \[\|\varphi\|_{L^{4}(0,T;H^{2}(\Omega))}\leq C,\] concluding the proof of the theorem. ### Continuous dependence results In this subsection, we aim at proving the continuous dependence results stated in Theorem 2.9 with respect to the velocity field \(\mathbf{v}\). This will be crucial to establish some differentiability properties of the control-to-state mapping associated to the control problem that will allow us to identify the first-order optimality conditions for minimizers. Proof of Theorem 2.9.: To begin with, we recall the notation of the statement and set \[\mathbf{v}:=\mathbf{v}_{1}-\mathbf{v}_{2},\quad\varphi:=\varphi_{1}-\varphi_{2},\quad\mu:= \mu_{1}-\mu_{2},\quad\varphi_{0}:=\varphi_{0}^{1}-\varphi_{0}^{2},\] and write the system of the differences using (1.18)-(1.21). This leads us to \[\partial_{t}\varphi+\nabla\varphi\cdot\mathbf{v}_{1}+\nabla\varphi_{2 }\cdot\mathbf{v}-\Delta\mu=0 \text{in }Q, \tag{3.57}\] \[\mu=-\mathcal{K}*\varphi+(F^{\prime}(\varphi_{1})-F^{\prime}( \varphi_{2})) \text{in }Q,\] (3.58) \[\partial_{\mathbf{n}}\mu=0 \text{on }\Sigma,\] (3.59) \[\varphi(0)=\varphi_{0} \text{in }\Omega. \tag{3.60}\] #### First estimate: part (i) We now observe that, thanks to the regularity of the two solutions under consideration, the following estimates are actually rigorous (see also Remark 2.7). In particular, by the assumptions, we have, for some \(\delta>0\), \[\|\varphi_{i}\|_{\infty}\leq 1-\delta,\ i=1,2,\quad\text{and}\quad\|\varphi_{2} \|_{L^{4}(0,T;W^{1,6}(\Omega))}\leq C, \tag{3.61}\] thanks to (2.20), holding for \(\varphi_{2}\),. We test (3.57) by \(\varphi\), take the gradient of (3.58) and test it by \(\nabla\varphi\). 
In those computations, let us highlight that \[\nabla(F^{\prime}(\varphi_{1})-F^{\prime}(\varphi_{2})) =F^{\prime\prime}(\varphi_{1})\nabla\varphi_{1}-F^{\prime\prime}( \varphi_{2})\nabla\varphi_{2}\] \[=(F^{\prime\prime}(\varphi_{1})-F^{\prime\prime}(\varphi_{2})) \nabla\varphi_{2}+F^{\prime\prime}(\varphi_{1})\nabla\varphi.\] Then, adding the resulting identities and rearranging the terms, we obtain \[\frac{1}{2}\frac{d}{dt}\|\varphi\|^{2}+\int_{\Omega}F^{\prime \prime}(\varphi_{1})|\nabla\varphi|^{2}=-\int_{\Omega}(\nabla\varphi_{2}\cdot \mathbf{v})\varphi+\int_{\Omega}(\nabla\mathcal{K}*\varphi)\cdot\nabla\varphi\] \[\quad-\int_{\Omega}(F^{\prime\prime}(\varphi_{1})-F^{\prime \prime}(\varphi_{2}))\nabla\varphi_{2}\cdot\nabla\varphi.\] Here, we also account of the fact that one term vanishes as \[\int_{\Omega}\nabla\varphi\cdot\mathbf{v}_{1}\varphi=\int_{\Omega}\nabla\Big{(} \frac{\varphi^{2}}{2}\Big{)}\cdot\mathbf{v}_{1}=-\int_{\Omega}\Big{(}\frac{\varphi ^{2}}{2}\Big{)}\operatorname{div}\mathbf{v}_{1}+\int_{\Gamma}\Big{(}\frac{\varphi ^{2}}{2}\Big{)}\mathbf{v}_{1}\cdot\mathbf{n}=0.\] Moreover, we owe to **H2** to infer that \[\int_{\Omega}F^{\prime\prime}(\varphi_{1})|\nabla\varphi|^{2}\geq\alpha\| \nabla\varphi\|^{2}.\] Then, for a positive constant \(\varepsilon\) yet to be selected, we have \[-\int_{\Omega}(\nabla\varphi_{2}\cdot\mathbf{v})\varphi =\int_{\Omega}\varphi_{2}\mathbf{v}\cdot\nabla\varphi\leq\|\varphi_{2} \|_{\infty}\|\mathbf{v}\|\|\nabla\varphi\|\leq\varepsilon\|\nabla\varphi\|^{2}+C_{ \varepsilon}\|\mathbf{v}\|^{2},\] \[\int_{\Omega}(\nabla\mathcal{K}\ast\varphi)\cdot\nabla\varphi \leq\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\varphi\|\|\nabla\varphi \|\leq\varepsilon\|\nabla\varphi\|^{2}+C_{\varepsilon}\|\varphi\|^{2},\] \[-\int_{\Omega}(F^{\prime\prime}(\varphi_{1})-F^{\prime\prime}( \varphi_{2}))\nabla\varphi_{2}\cdot\nabla\varphi \leq\|\varphi\|_{3}\|\nabla\varphi_{2}\|_{6}\|\nabla\varphi\|\] \[\leq\varepsilon\|\nabla\varphi\|^{2}+C_{\varepsilon}\|\varphi\| \|\nabla\varphi\|\|\varphi_{2}\|_{W^{1,6}}^{2}\] \[\leq\varepsilon\|\nabla\varphi\|^{2}+C_{\varepsilon}\|\varphi\| ^{2}\|\varphi_{2}\|_{W^{1,6}}^{4}.\] In the above computations, we employed integration by parts, the Holder, Young and Gagliardo-Nirenberg inequalities, as well as the Lipschitz continuity of \(F^{\prime\prime}\) which follows from the separation property (2.15). We then collect the above estimates, selecting \(\varepsilon\) small enough and integrate over time. Using the regularity (3.61) and applying the Gronwall lemma, we obtain the first continuous dependence result (2.24). ### Second estimate: part (ii) We now move to the second continuous dependence estimate. Due to Theorem 2.2, it follows that \[t \mapsto\|\varphi_{1}(t)\|_{W^{1,6}}^{4}+\|\varphi_{2}(t)\|_{W^{1,6}}^{4}\in L^{1}(0,T),\quad t\mapsto\|\varphi_{2}(t)\|_{H^{2}}^{4}\in L^{1}( 0,T), \tag{3.62}\] \[t \mapsto\|\varphi_{2}(t)\|_{W^{1,\infty}}^{3}\in L^{1}(0,T). 
\tag{3.63}\] Then, we take the gradient of (3.57) and the laplacian of (3.58) to obtain the corresponding identities \[\nabla(\partial_{t}\varphi)+\nabla(\nabla\varphi\cdot\mathbf{v}_{1}) +\nabla(\nabla\varphi_{2}\cdot\mathbf{v})-\nabla\Delta\mu=0 \text{in }Q,\] \[\Delta\mu=-\operatorname{div}(\nabla\mathcal{K}\ast\varphi)+F^{ \prime\prime}(\varphi_{1})\Delta\varphi+(F^{\prime\prime}(\varphi_{1})-F^{ \prime\prime}(\varphi_{2}))\Delta\varphi_{2}\] \[\qquad\qquad+((F^{(3)}(\varphi_{1})-F^{(3)}(\varphi_{2}))|\nabla \varphi_{2}|^{2}+F^{(3)}(\varphi_{1})\nabla\varphi\cdot(\nabla\varphi_{1}+ \nabla\varphi_{2})\quad\text{in }Q.\] We test the first one by \(\nabla\varphi\), the second one by \(-\Delta\varphi\), and add the resulting equalities leading to a cancellation to infer that \[\frac{1}{2}\frac{d}{dt}\|\nabla\varphi\|^{2}+\int_{\Omega}F^{ \prime\prime}(\varphi_{1})|\Delta\varphi|^{2}=\int_{\Omega}\nabla\varphi\cdot \mathbf{v}_{1}\Delta\varphi+\int_{\Omega}\nabla\varphi_{2}\cdot\mathbf{v}\Delta\varphi +\int_{\Omega}\operatorname{div}(\nabla\mathcal{K}\ast\varphi)\Delta\varphi\] \[\qquad-\int_{\Omega}(F^{\prime\prime}(\varphi_{1})-F^{\prime \prime}(\varphi_{2}))\Delta\varphi_{2}\Delta\varphi-\int_{\Omega}((F^{(3)}( \varphi_{1})-F^{(3)}(\varphi_{2}))|\nabla\varphi_{2}|^{2}\Delta\varphi\] \[\qquad-\int_{\Omega}F^{(3)}(\varphi_{1})\nabla\varphi\cdot( \nabla\varphi_{1}+\nabla\varphi_{2})\Delta\varphi=\sum_{i=1}^{6}I_{i}.\] Notice that this estimate is indeed rigorous thanks to Remark 2.20. Using the Holder, Young and Agmon inequalities, for a positive constant \(\varepsilon\) yet to be selected, we infer that \[I_{1}+I_{2} \leq\|\nabla\varphi\|_{3}\|\boldsymbol{v}_{1}\|_{6}\|\Delta\varphi \|+\|\nabla\varphi_{2}\|_{\infty}\|\boldsymbol{v}\|\|\Delta\varphi\|\] \[\leq\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}(\|\nabla \varphi\|^{2}\|\boldsymbol{v}_{1}\|_{6}^{4}+\|\varphi_{2}\|_{W^{1,\infty}}^{2} \|\boldsymbol{v}\|^{2}),\] \[I_{3} \leq C\|\varphi\|\|\Delta\varphi\|\leq\varepsilon\|\Delta\varphi \|^{2}+C_{\varepsilon}\|\varphi\|^{2},\] \[I_{4} \leq C\|\varphi\|_{\infty}\|\Delta\varphi_{2}\|\|\Delta\varphi\| \leq\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}\|\varphi\|_{V}\|\varphi\| _{H^{2}}\|\varphi_{2}\|_{H^{2}}^{2}\] \[\leq\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}(1+\|\varphi \|_{V}^{2})\|\varphi_{2}\|_{H^{2}}^{4},\] \[I_{5} \leq C\|\varphi\|_{6}\|\nabla\varphi_{2}\|_{6}^{2}\|\Delta\varphi \|\leq\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}\|\varphi_{2}\|_{W^{1,6} }^{4}\|\varphi\|_{V}^{2},\] \[I_{6} \leq\|F^{(3)}(\varphi_{1})\|_{\infty}(\|\nabla\varphi_{1}\|_{6}+ \|\nabla\varphi_{2}\|_{6})\|\nabla\varphi\|_{3}\|\Delta\varphi\|\] \[\leq\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}(\|\varphi_{1 }\|_{W^{1,6}}^{2}+\|\varphi_{2}\|_{W^{1,6}}^{2})\|\nabla\varphi\|\|\Delta \varphi\|\] \[\leq 2\varepsilon\|\Delta\varphi\|^{2}+C_{\varepsilon}(\|\varphi_{1 }\|_{W^{1,6}}^{4}+\|\varphi_{2}\|_{W^{1,6}}^{4})\|\nabla\varphi\|^{2}.\] Observe that in the first estimate we exploited the fact that \[\|\nabla\varphi\|\leq C\|\Delta\varphi\|,\] whereas, to handle \(I_{3}\), we have used the inequality \[\|\mathrm{div}(\nabla\mathcal{K}*\varphi)\|\leq C\|\varphi\|,\] which is valid recalling assumption **H4** and [3, Lemma 2]. 
Collecting all the estimates above, recalling assumption **H2**, applying Poincare's inequality where necessary, and choosing a consequently small \(\varepsilon>0\), we end up, after integrating over time for an arbitrary \(t\in[0,T)\), with \[\frac{1}{2}\int_{\Omega}|\nabla\varphi(t)|^{2}+\frac{\alpha}{2} \int_{Q_{t}}|\Delta\varphi|^{2}\leq\frac{1}{2}\int_{\Omega}|\nabla\varphi_{0}| ^{2}+C\int_{0}^{t}\!\|\nabla\varphi\|^{2}+C\int_{0}^{t}\!\|\varphi_{2}\|_{W^{1, \infty}}^{2}\|\boldsymbol{v}\|^{2}\] \[\quad+\int_{0}^{t}\!(1+\|\nabla\varphi\|^{2})\|\varphi_{2}\|_{H^ {2}}^{4}+C\int_{0}^{t}(\|\varphi_{1}\|_{W^{1,6}}^{4}+\|\varphi_{2}\|_{W^{1,6} }^{4}+\|\boldsymbol{v}_{1}\|_{6}^{4})\|\nabla\varphi\|^{2}. \tag{3.64}\] All the terms are in the usual form in order to apply Gronwall's lemma but the following one that can be controlled by Holder's inequality as \[\int_{0}^{t}\!\|\varphi_{2}\|_{W^{1,\infty}}^{2}\|\boldsymbol{v}\|^{2}\leq\int _{0}^{t}\left(\|\varphi_{2}\|_{W^{1,\infty}}^{2}\int_{\Omega}|\boldsymbol{v}_ {1}-\boldsymbol{v}_{2}|^{2}\right)\leq C\|\varphi_{2}\|_{L^{3}(0,t;W^{1,\infty} (\Omega))}^{2}\|\boldsymbol{v}\|_{L^{6}(0,t;\boldsymbol{H}_{\sigma})}^{2}.\] We then, use the regularity (3.62)-(3.63) together with the assumptions on \(\boldsymbol{v}_{1},\boldsymbol{v}_{2}\), and apply the Gronwall lemma to deduce (2.25) concluding the proof. ## 4 The Control Problem This section is devoted to the analysis of the optimal control **(CP)**. As mentioned, the control variable consists of a prescribed solenoidal velocity flow occurring in equation (1.18). From the mathematical properties of the state system (1.18)-(1.21) addressed in Section 3, the associated _control-to-state_ operator \(\mathscr{S}\), also referred to as the solution operator, is well-defined and continuous between suitable Banach spaces. For convenience, let us repeat here some notation. First, the tracking-type cost functional we aim at minimizing is defined by (1.17) subject to admissible controls \(\mathbf{v}\) belonging to \[\mathbf{\mathcal{V}}_{\mathrm{ad}}:=\{\mathbf{v}\in L^{\infty}(0,T;\mathbf{L}^{\infty}) \cap L^{2}(0,T;\mathbf{H}_{\sigma}):\mathbf{v}_{\min}\leq\mathbf{v}\leq\mathbf{v}_{\max}\}. \tag{4.1}\] The specific assumptions on the constants and target functions in (1.17) and (4.1) are expressed by **C4-C5**. From now on, as the velocity \(\mathbf{v}\) will be constrained in \(\mathbf{\mathcal{V}}_{\mathrm{ad}}\), notice that all the technical requirements in Theorem 2.2 and 2.9 are fulfilled. Besides, as we will address some differentiability properties of \(\mathscr{S}\), let us take an open ball in the \(L^{\infty}\)-topology containing the set of admissible controls \(\mathbf{\mathcal{V}}_{\mathrm{ad}}\). 
Namely, we fix \(R>0\) such that \[\mathbf{\mathcal{V}}_{\mathrm{ad}}\subset\mathbf{\mathcal{V}}_{R}:=\Big{\{}\mathbf{v}\in L^{\infty}(0,T;\mathbf{L}^{\infty}(\Omega))\cap L^{2}(0,T;\mathbf{H}_{\sigma}):\,\|\mathbf{v}\|_{L^{\infty}(0,T;\mathbf{L}^{\infty})}<R\Big{\}}.\] Then, the control-to-state operator \(\mathscr{S}\) is the well-posed map \[\mathscr{S}:\mathbf{\mathcal{V}}_{R}\to\mathscr{Y},\quad\mathscr{S}:\mathbf{v}\mapsto(\varphi,\mu)=(\mathscr{S}_{1}(\mathbf{v}),\mathscr{S}_{2}(\mathbf{v})),\] where \((\varphi,\mu)\) is the unique solution to (1.18)-(1.21) corresponding to \(\mathbf{v}\), and \(\mathscr{Y}\) indicates the _state space_ which arises from Theorem 2.2 and is defined as \[\mathscr{Y}=\mathscr{Y}_{1}\times\mathscr{Y}_{2}:=\Big{(}H^{1}(0,T;L^{6}(\Omega))\cap L^{\infty}(0,T;V)\cap L^{4}(0,T;W)\cap L^{3}(0,T;W^{1,\infty}(\Omega))\Big{)}\] \[\qquad\times\Big{(}W^{1,4}(0,T;H)\cap C^{0}([0,T];V)\cap L^{8}(0,T;W^{1,4}(\Omega))\cap L^{4}(0,T;W)\Big{)}. \tag{4.2}\] Besides, as a consequence of Theorem 2.9, \(\mathscr{S}\) is also Lipschitz continuous in the sense expressed in the theorem, and we also recall that \(\varphi=\mathscr{S}_{1}(\mathbf{v})\) enjoys the separation property (2.15). The solution operator given above allows us to reduce the optimization problem **(CP)** in the usual manner via the _reduced cost functional_ \[\mathscr{J}_{\mathrm{red}}(\mathbf{v}):=\mathscr{J}(\mathbf{v};\mathscr{S}_{1}(\mathbf{v})), \tag{4.3}\] leading to the minimization problem \[\min_{\mathbf{v}\in\mathbf{\mathcal{V}}_{\mathrm{ad}}}\mathscr{J}_{\mathrm{red}}(\mathbf{v}).\]

### Existence of optimal controls

The first step consists in proving Theorem 2.10. This can be done by a straightforward application of the direct method of calculus of variations. Proof of Theorem 2.10.: First, we notice that \(\mathscr{J}\) is bounded from below as it is nonnegative. Let \(\{\mathbf{v}_{n}\}_{n}\subset\mathbf{\mathcal{V}}_{\mathrm{ad}}\) be a minimizing sequence for \(\mathscr{J}_{\mathrm{red}}\) and let \((\varphi_{n},\mu_{n}):=\mathscr{S}(\mathbf{v}_{n})\) denote the sequence of the corresponding states, \(n\in\mathbb{N}\). Then, up to nonrelabeled subsequences, there exists \(\overline{\mathbf{v}}\in\mathbf{\mathcal{V}}_{\mathrm{ad}}\) such that \[\mathbf{v}_{n}\to\overline{\mathbf{v}}\quad\text{weakly* in }L^{\infty}(0,T;\mathbf{L}^{\infty})\cap L^{2}(0,T;\mathbf{H}_{\sigma}). \tag{4.4}\] By the results of Theorem 2.2, since the sequence \(\{(\varphi_{n},\mu_{n})\}\) is associated to the same initial datum \(\varphi_{0}\) and \(\mathbf{v}_{n}\) is uniformly bounded, we immediately infer the following uniform bounds \[\|\varphi_{n}\|_{\mathscr{Y}_{1}\cap L^{\infty}(Q)}+\|\mu_{n}\|_{\mathscr{Y}_{2}}\leq C.\] This implies, by standard weak and weak\({}^{*}\) compactness arguments, that there exists \((\overline{\varphi},\overline{\mu})\) such that, up to subsequences, \[\varphi_{n}\to\overline{\varphi}\quad\text{weakly* in }\mathscr{Y}_{1}\cap L^{\infty}(Q),\quad\mu_{n}\to\overline{\mu}\quad\text{weakly* in }\mathscr{Y}_{2},\] which imply, by the Aubin-Lions-Simon lemma, that \[\varphi_{n}\to\overline{\varphi}\quad\text{strongly in }C^{0}([0,T];H^{r}(\Omega))\cap L^{4}(0,T;W^{s,2}(\Omega)),\quad\forall s\in[0,2),r\in[0,1).\] Note now that, for any \(n\in\mathbb{N}\), \((\varphi_{n},\mu_{n})\) satisfies (2.7)-(2.8) with \(\mathbf{v}:=\mathbf{v}_{n}\).
From these convergences and (4.4) we can easily pass to the limit as \(n\to\infty\) in the weak formulation (2.7)-(2.8) and deduce that \((\overline{\varphi},\overline{\mu})\) is a weak solution according to the definition of Theorem 2.2 with \(\mathbf{v}:=\overline{\mathbf{v}}\). Therefore, by the uniqueness of weak solutions, we immediately infer \(\mathcal{S}(\overline{\mathbf{v}})=(\overline{\varphi},\overline{\mu})\), i.e., the pair \((\overline{\mathbf{v}},\overline{\varphi})\) is admissible for the minimization problem (**CP**). By the sequential weak lower semicontinuity of norms, it readily follows that \((\overline{\mathbf{v}},\overline{\varphi})\) is optimal for \(\mathcal{J}\). Indeed, recall that the above convergences also imply \(\varphi_{n}(T)\to\overline{\varphi}(T)\) strongly in \(H\) as \(n\to\infty\). Therefore, \((\overline{\mathbf{v}},\overline{\varphi})\) yields a solution to (**CP**) and the proof is concluded.

### Differentiability properties of the solution operator

The natural subsequent step is to provide some optimality conditions for the minimizers of **(CP)**. Since the set of admissible controls \(\mathbf{\mathcal{V}}_{\rm ad}\) is convex, it is well known that the first-order optimality conditions are characterized by a suitable variational inequality of the form \[\langle D\mathcal{J}_{\rm red}(\overline{\mathbf{v}}),\mathbf{v}-\overline{\mathbf{v}}\rangle\geq 0\quad\forall\mathbf{v}\in\mathcal{V}_{\rm ad}, \tag{4.5}\] where \(\mathcal{J}_{\rm red}\) is the reduced cost functional introduced above, and \(D\mathcal{J}_{\rm red}\) stands for its Frechet derivative in a suitable mathematical framework. The aim of this section is to lay the groundwork to rigorously justify the above formula. In this direction, the first step is to obtain some differentiability properties of the control-to-state operator \(\mathcal{S}\). We then fix a control \(\overline{\mathbf{v}}\in\mathbf{\mathcal{V}}_{R}\) and denote by \((\overline{\varphi},\overline{\mu}):=\mathcal{S}(\overline{\mathbf{v}})\) the corresponding state. Then, the linearization of (1.18)-(1.21) at \(\overline{\mathbf{v}}\), for any \(\mathbf{w}\in L^{2}(0,T;\mathbf{H}_{\sigma})\), reads (in strong form) as \[\partial_{t}\xi+\nabla\xi\cdot\overline{\mathbf{v}}+\nabla\overline{\varphi}\cdot\mathbf{w}-\Delta\eta=0 \text{in }Q, \tag{4.6}\] \[\eta=-\mathcal{K}*\xi+F^{\prime\prime}(\overline{\varphi})\xi \text{in }Q,\] (4.7) \[\partial_{\mathbf{n}}\eta=0 \text{on }\Sigma,\] (4.8) \[\xi(0)=0 \text{in }\Omega. \tag{4.9}\] The weak well-posedness of the above system follows. **Theorem 4.1**.: _Suppose that \(\mathbf{H2}\)-\(\mathbf{H3}\) and \(\mathbf{C1}\)-\(\mathbf{C2}\) and \(\mathbf{C5}\) are fulfilled. Then, for every \(\boldsymbol{w}\in L^{2}(0,T;\boldsymbol{H}_{\sigma})\), there exists a unique solution \((\xi,\eta)\) to the linearized system (4.6)-(4.9) in the sense that_ \[\xi\in H^{1}(0,T;V^{*})\cap C^{0}([0,T];H)\cap L^{2}(0,T;V),\quad\eta\in L^{2}(0,T;V),\] _and it fulfills_ \[\langle\partial_{t}\xi,v\rangle-\int_{\Omega}\xi\,\overline{\boldsymbol{v}}\cdot\nabla v+\int_{\Omega}\nabla\overline{\varphi}\cdot\boldsymbol{w}v+\int_{\Omega}\nabla\eta\cdot\nabla v=0\] \[\quad\text{for every $v\in V$, and a.e. in $(0,T)$}, \tag{4.10}\] \[\int_{\Omega}\eta v=-\int_{\Omega}\mathcal{K}*\xi v+\int_{\Omega}F^{\prime\prime}(\overline{\varphi})\xi v\quad\text{for every $v\in V$, and a.e.
in $(0,T)$}, \tag{4.11}\] _as well as the initial condition_ \[\xi(0)=0\quad\text{in $\Omega$}.\] Proof.: The proof of existence is standard and can be performed by approximation argument, for instance, using a Faedo-Galerkin scheme. For this reason, we proceed formally by avoiding the introduction of any approximation scheme and limiting ourselves to provide formal estimates. First estimate.We test (4.6) by \(\mathcal{N}\xi\) (observe that \(\xi_{\Omega}=0\)), (4.7) by \(-\xi\) and add the resulting identities to obtain \[\frac{1}{2}\frac{d}{dt}\|\xi\|_{*}^{2}+\int_{\Omega}F^{\prime\prime}( \overline{\varphi})|\xi|^{2}=\int_{\Omega}\xi\,\overline{\boldsymbol{v}}\cdot \nabla\mathcal{N}\xi-\int_{\Omega}(\nabla\overline{\varphi}\cdot\boldsymbol{w })\mathcal{N}\xi+\int_{\Omega}(\mathcal{K}*\xi)\xi.\] From \(\mathbf{H2}\), we readily get \[\int_{\Omega}F^{\prime\prime}(\overline{\varphi})|\xi|^{2}\geq\alpha\|\xi\|^{ 2}.\] Let us now move to bound the terms on the right-hand side. In this direction, we use the definition of \(\mathcal{N}\) introduced in Section 2.1, the Holder and Young inequalities, and the regularity of \(\overline{\varphi}\) as solution to (1.18)-(1.21) in the sense of Theorem 2.2. Recalling that \(\overline{\varphi}\in[-1,1]\), we have that \[\int_{\Omega}\xi\overline{\boldsymbol{v}}\cdot\nabla\mathcal{N}\xi \leq\|\xi\|\|\overline{\boldsymbol{v}}\|_{\infty}\|\xi\|_{*}\leq \frac{\alpha}{6}\|\xi\|^{2}+C\|\xi\|_{*}^{2},\] \[-\int_{\Omega}(\nabla\overline{\varphi}\cdot\boldsymbol{w}) \mathcal{N}\xi =\int_{\Omega}\overline{\varphi}(\boldsymbol{w}\cdot\nabla \mathcal{N}\xi)\leq\|\overline{\varphi}\|_{\infty}\|\boldsymbol{w}\|\|\nabla \mathcal{N}\xi\|\leq C\|\boldsymbol{w}\|^{2}+C\|\xi\|_{*}^{2},\] \[\int_{\Omega}(\mathcal{K}*\xi)\xi =\langle\mathcal{K}*\xi,\xi\rangle\leq\|\mathcal{K}*\xi\|_{V}\| \xi\|_{*}\leq C\|\xi\|\|\xi\|_{*}\leq\frac{\alpha}{6}\|\xi\|^{2}+C\|\xi\|_{*}^{ 2},\] where we also owe to \[\|\mathcal{K}*\xi\|_{V}\leq C(\|\mathcal{K}*\xi\|+\|\nabla\mathcal{K}*\xi\|) \leq C\|\xi\|,\] which follows from Young's inequality for convolutions and \(\mathbf{C1}\). Integrating over time and employing Gronwall's lemma, recalling the regularity on \(\boldsymbol{w}\), yield \[\|\xi\|_{L^{\infty}(0,T;V^{\star})\cap L^{2}(0,T;H)}\leq C\| \boldsymbol{w}\|_{L^{2}(0,T;\boldsymbol{H}_{\sigma})}\leq C. \tag{4.12}\] Second estimate.Using the above estimate (4.12), it is not difficult to realize, from a comparison argument in (4.7), that also \[\|\eta\|_{L^{2}(0,T;H)}\leq C\|\boldsymbol{w}\|_{L^{2}(0,T; \boldsymbol{H}_{\sigma})}\leq C.\] Third estimate.Next, we test (4.6) by \(\xi\). Then, we consider the gradient of equation (4.7) and test it by \(-\nabla\xi\). Adding the identities leads us to \[\frac{1}{2}\frac{d}{dt}\|\xi\|^{2}+\int_{\Omega}F^{\prime\prime}( \overline{\varphi})|\nabla\xi|^{2}=-\int_{\Omega}\nabla\xi\cdot\overline{ \boldsymbol{v}}\xi-\int_{\Omega}\nabla\overline{\varphi}\cdot\boldsymbol{w}\xi\] \[\quad-\int_{\Omega}(\nabla\mathcal{K}\ast\xi)\cdot\nabla\xi- \int_{\Omega}F^{(3)}(\overline{\varphi})\nabla\overline{\varphi}\,\xi\cdot \nabla\xi.\] The second term on the left-hand side is bounded from below, again, due to \(\mathbf{H2}\). 
Next, we observe that, being \(\overline{\boldsymbol{v}}\in\boldsymbol{\mathcal{V}}_{R}\) divergence-free, \[-\int_{\Omega}\nabla\xi\cdot\overline{\boldsymbol{v}}\xi=-\int_{ \Omega}\nabla\Big{(}\frac{\xi^{2}}{2}\Big{)}\cdot\overline{\boldsymbol{v}}= \int_{\Omega}\Big{(}\frac{\xi^{2}}{2}\Big{)}\operatorname{div}\overline{ \boldsymbol{v}}-\int_{\Gamma}\frac{\xi^{2}}{2}\,\overline{\boldsymbol{v}}\cdot \boldsymbol{n}=0.\] To bound the other terms, we owe to the Holder, Gagliardo-Nirenberg and Young inequalities. Namely, we find that \[-\int_{\Omega}(\nabla\mathcal{K}\ast\xi)\cdot\nabla\xi \leq\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\xi\|\|\nabla\xi\|\leq\frac {\alpha}{8}\|\nabla\xi\|^{2}+C\|\xi\|^{2},\] \[\quad\quad-\int_{\Omega}\nabla\overline{\varphi}\cdot \boldsymbol{w}\xi =\int_{\Omega}\overline{\varphi}\boldsymbol{w}\cdot\nabla\xi\leq \|\overline{\varphi}\|_{\infty}\|\boldsymbol{w}\|\|\nabla\xi\|\leq\frac{ \alpha}{8}\|\nabla\xi\|^{2}+C\|\boldsymbol{w}\|^{2},\] \[-\int_{\Omega}F^{(3)}(\overline{\varphi})\nabla\overline{\varphi} \,\xi\cdot\nabla\xi \leq\|F^{(3)}(\overline{\varphi})\|_{\infty}\|\nabla\overline{ \varphi}\|_{6}\|\xi\|_{3}\|\nabla\xi\|\] \[\leq\frac{\alpha}{8}\|\nabla\xi\|^{2}+C\|\overline{\varphi}\|_{W^ {1,6}}^{2}\|\xi\|\|\nabla\xi\|\] \[\leq\frac{\alpha}{4}\|\nabla\xi\|^{2}+C\|\overline{\varphi}\|_{W^ {1,6}}^{4}\|\xi\|^{2}.\] We then integrate over time, recalling also that \(t\mapsto\|\overline{\varphi}(t)\|_{W^{1,6}}^{4}\in L^{1}(0,T)\) and \(\boldsymbol{w}\in L^{2}(0,T;\boldsymbol{H}_{\sigma})\), and apply the Gronwall's lemma to conclude that \[\|\xi\|_{L^{\infty}(0,T;H)\cap L^{2}(0,T;V)}\leq C\|\boldsymbol{w} \|_{L^{2}(0,T;\boldsymbol{H}_{\sigma})}\leq C.\] As before, comparison in (4.7) readily produces \[\|\eta\|_{L^{2}(0,T;V)}\leq C\|\boldsymbol{w}\|_{L^{2}(0,T; \boldsymbol{H}_{\sigma})}\leq C.\] Fourth estimate.From a comparison argument in (4.6) it is a standard matter to infer that \[\|\partial_{t}\xi\|_{L^{2}(0,T;V^{\star})}\leq C\|\boldsymbol{w} \|_{L^{2}(0,T;\boldsymbol{H}_{\sigma})}\leq C.\] Finally, the continuity property of \(\xi\) can be easily deduced due to the continuous embedding \(H^{1}(0,T;V^{*})\cap L^{2}(0,T;V)\hookrightarrow C^{0}([0,T];H)\). As far as uniqueness is concerned, we notice that system (4.6)-(4.9) is linear. Thus, the above estimates readily entail uniqueness. Indeed, for two special solutions \((\xi_{i},\eta_{i})\), \(i=1,2\), we set \((\xi,\eta):=(\xi_{1}-\xi_{2},\eta_{1}-\eta_{2})\) and observe that the above estimates hold for \((\xi,\eta)\) with \(C=0\). Hence, the claimed uniqueness follows. It is now naturally expected that, provided we select the correct Banach spaces, the linearized system captures the behavior of the Frechet derivative of the solution operator, that is, the identity \(D\mathcal{S}(\overline{\boldsymbol{v}})[\boldsymbol{w}]=(\xi,\eta)\) holds in a suitable mathematical setting. This is rigorously stated in the following result, where the following Banach space appears \[\mathcal{X}:=\big{(}H^{1}(0,T;V^{*})\cap C^{0}([0,T];H)\cap L^{2}(0,T;V)\big{)} \times L^{2}(0,T;V).\] **Theorem 4.2**.: _Suppose that_ **H2**_-_**H3** _and_ **C1**_-_**C2** _and_ **C5** _are in force. Then, the solution mapping \(\mathcal{S}\) is Frechet differentiable at every \(\overline{\boldsymbol{v}}\in\mathcal{V}_{R}\) as a mapping from \(L^{6}(0,T;\boldsymbol{H}_{\sigma})\) into \(\mathcal{X}\). 
In addition, it holds that_ \[D\mathcal{S}(\overline{\boldsymbol{v}})\in\mathcal{L}(L^{6}(0,T;\boldsymbol{H }_{\sigma}),\mathcal{X})\quad\text{and}\quad D\mathcal{S}(\overline{ \boldsymbol{v}})[\boldsymbol{w}]=(\xi,\eta)\] _with \((\xi,\eta)\) being the unique solution to the linearized system (4.6)-(4.9) corresponding to \(\boldsymbol{w}\) as given by Theorem 4.1._ Proof of Theorem 4.2.: Suppose the identity \(D\mathcal{S}(\overline{\boldsymbol{v}})[\boldsymbol{w}]=(\xi,\eta)\) has been proven. Then, it readily follows from Theorem 4.1 that \(D\mathcal{S}(\overline{\boldsymbol{v}})\in\mathcal{L}(L^{6}(0,T;\boldsymbol{H }_{\sigma}),\mathcal{X})\) as \(\boldsymbol{w}\mapsto(\xi,\eta)\) is linear and continuous from \(L^{2}(0,T;\boldsymbol{H}_{\sigma})\) to \(\mathcal{X}\). Let us then show that \(D\mathcal{S}(\overline{\boldsymbol{v}})[\boldsymbol{w}]=(\xi,\eta)\) by checking that \[\frac{\|\mathcal{S}(\overline{\boldsymbol{v}}+\boldsymbol{w})-\mathcal{S}( \overline{\boldsymbol{v}})-(\xi,\eta)\|_{\mathcal{X}}}{\|\boldsymbol{w}\|_{L^ {6}(0,T;\boldsymbol{H}_{\sigma})}}\to 0\quad\text{as }\|\boldsymbol{w}\|_{L^{6}(0,T; \boldsymbol{H}_{\sigma})}\to 0. \tag{4.13}\] As a matter of fact, what we are going to prove is a specific estimate that will imply the above property. Namely, we aim at showing that \[\|\mathcal{S}(\overline{\boldsymbol{v}}+\boldsymbol{w})-\mathcal{S}( \overline{\boldsymbol{v}})-(\xi,\eta)\|_{\mathcal{X}}\leq C\|\boldsymbol{w}\| _{L^{6}(0,T;\boldsymbol{H}_{\sigma})}^{2}\] for a suitable positive constant \(C>0\). Without loss of generality, we tacitly assume from now on that the norm of the increment \(\boldsymbol{w}\) is small enough so that \(\overline{\boldsymbol{v}}+\boldsymbol{w}\) remains in the open set \(\mathcal{V}_{R}\). Upon setting \[(\widehat{\varphi},\widehat{\mu}):=\mathcal{S}(\overline{\boldsymbol{v}}+ \boldsymbol{w}),\quad(\overline{\varphi},\overline{\mu}):=\mathcal{S}( \overline{\boldsymbol{v}}),\quad\psi:=\widehat{\varphi}-\overline{\varphi}- \xi,\quad\vartheta:=\widehat{\mu}-\overline{\mu}-\eta,\] the above inequality (4.13) amounts proving the existence of a constant \(C>0\) such that \[\|(\psi,\vartheta)\|_{\mathcal{X}}\leq C\|\boldsymbol{w}\|_{L^{6}(0,T; \boldsymbol{H}_{\sigma})}^{2}.\] In the direction of checking the above estimate, we write the system solved by \((\psi,\vartheta)\), noticing that, by the previous regularity results, we have \((\psi,\vartheta)\in\mathcal{X}\). Namely, we consider the difference between (1.18)-(1.21) considered at \(\overline{\boldsymbol{v}}+\boldsymbol{w}\) and at \(\overline{\boldsymbol{v}}\), together with (4.6)-(4.9). We thus infer that the pair \((\psi,\vartheta)\) is a (weak) solution to \[\partial_{t}\psi+\nabla\psi\cdot\overline{\boldsymbol{v}}+\nabla( \widehat{\varphi}-\overline{\varphi})\cdot\boldsymbol{w}-\Delta\vartheta=0 \qquad\text{ in }Q, \tag{4.14}\] \[\vartheta=-\mathcal{K}\ast\psi+[F^{\prime}(\widehat{\varphi})-F ^{\prime}(\overline{\varphi})-F^{\prime\prime}(\overline{\varphi})\xi] \qquad\text{ in }Q,\] (4.15) \[\partial_{\boldsymbol{n}}\vartheta=0 \qquad\text{ on }\Sigma,\] (4.16) \[\psi(0)=0 \qquad\text{ in }\Omega. 
\tag{4.17}\] Before proceeding, it is worth noticing that, as a consequence of Theorem 2.2, recall \(\overline{\boldsymbol{v}}\in\boldsymbol{\mathcal{V}}_{R}\) and **C1**- **C2** are in force, we have that \[\exists\,\beta\in(0,1):\quad\varphi\in C^{\beta,\frac{\beta}{2}}(\overline{Q} ),\quad\text{and}\quad\|\varphi\|_{L^{8}(0,T;W^{1,4}(\Omega))\cap L^{4}(0,T; H^{2}(\Omega))\cap L^{3}(0,T;W^{1,\infty}(\Omega))}\leq C,\] with \(\varphi\in\{\widehat{\varphi},\overline{\varphi}\}\), as well as the validity of the strict separation property \[\exists\,\delta>0:\quad\max\{|\widehat{\varphi}(x,t)|,|\overline{\varphi}(x, t)|\}<1-\delta\quad\forall(x,t)\in\overline{Q}.\] Thus, using also **C6**, there exists a positive constant \(C\) such that \[\big{|}F^{(k)}(\widehat{\varphi}(x,t))-F^{(k)}(\overline{\varphi}(x,t))\big{|} \leq C|\widehat{\varphi}(x,t)-\overline{\varphi}(x,t)|\quad\forall(x,t)\in \overline{Q},\quad k=0,...,3.\] Besides, from Theorem 2.9, it holds the continuous dependence result \[\|\widehat{\varphi}-\overline{\varphi}\|_{L^{\infty}(0,T;V)\cap L^{2}(0,T;W) }\leq C\|\boldsymbol{w}\|_{L^{6}(0,T;\boldsymbol{H}_{\sigma})}. \tag{4.18}\] Finally, we recall Taylor's formula with integral remainder for a generic regular function \(f\in C^{2}(\mathbb{R})\): \[f(\widehat{\varphi})-f(\overline{\varphi})-f^{\prime}(\overline{\varphi}) \xi\,=\,f^{\prime}(\overline{\varphi})\psi+\mathcal{R}(\widehat{\varphi}- \overline{\varphi})^{2}, \tag{4.19}\] where the remainder \(\mathcal{R}\) is defined as \[\mathcal{R}:=\int_{0}^{1}f^{\prime\prime}(\overline{\varphi}+s(\widehat{ \varphi}-\overline{\varphi}))(1-s)\,\mathrm{ds}.\] For our purposes, \(f=F^{(k)}\) with \(k=1,2\) and we abuse notation by denoting the corresponding remainders with the same symbol \(\mathcal{R}\). Due to the regularity assumed in **H2**, along with the separation property (3.33), it holds that the associated remainders are uniformly bounded. We are now ready to show the claimed estimate. First estimate.Below, we are going to consider the gradient of equation (4.15). 
Hence, we highlight the identity \[\nabla[F^{\prime}(\widehat{\varphi})-F^{\prime}(\overline{\varphi })-F^{\prime\prime}(\overline{\varphi})\xi]=F^{\prime\prime}(\widehat{ \varphi})\nabla\widehat{\varphi}-F^{\prime\prime}(\overline{\varphi})\nabla \overline{\varphi}-F^{(3)}(\overline{\varphi})\nabla\overline{\varphi}\,\xi-F^ {\prime\prime}(\overline{\varphi})\nabla\xi\] \[\qquad=[F^{\prime\prime}(\widehat{\varphi})-F^{\prime\prime}( \overline{\varphi})-F^{(3)}(\overline{\varphi})\xi]\,\nabla\overline{\varphi}\] \[\qquad\qquad+(F^{\prime\prime}(\widehat{\varphi})-F^{\prime \prime}(\overline{\varphi}))\nabla(\overline{\varphi}-\overline{\varphi})+F^{ \prime\prime}(\overline{\varphi})\nabla\psi=:\Theta+F^{\prime\prime}(\overline {\varphi})\nabla\psi.\] Next, we test (4.14) by \(\mathcal{N}\psi+\psi\) (observe that \(\psi_{\Omega}=0\)), (4.15) by \(\psi\), the gradient of (4.15) by \(\nabla\psi\), and add the resulting equalities to obtain that \[\frac{1}{2}\frac{d}{dt} (\|\psi\|_{*}^{2}+\|\psi\|^{2})+\int_{\Omega}F^{\prime\prime}( \overline{\varphi})|\nabla\psi|^{2}\] \[=-\int_{\Omega}\nabla\psi\cdot\overline{\mathbf{v}}\mathcal{N}\psi- \int_{\Omega}\nabla(\widehat{\varphi}-\overline{\varphi})\cdot\mathbf{w}( \mathcal{N}\psi+\psi)+\int_{\Omega}(\mathcal{K}*\psi)\psi\] \[+\int_{\Omega}[F^{\prime}(\widehat{\varphi})-F^{\prime}( \overline{\varphi})-F^{\prime\prime}(\overline{\varphi})\xi]\psi+\int_{ \Omega}(\nabla\mathcal{K}*\psi)\cdot\nabla\psi-\int_{\Omega}\Theta\cdot\nabla \psi=\sum_{i=1}^{6}I_{i},\] where we recall that, arguing as previously done, \[\int_{\Omega}(\nabla\psi\cdot\overline{\mathbf{v}})\psi=0.\] Using similar computations as in the other proofs, for a positive constant \(\varepsilon\) yet to be chosen, we infer that \[I_{1}+I_{3}+ I_{5} \leq\|\psi\|\|\overline{\mathbf{v}}\|_{\infty}\|\nabla\mathcal{N} \psi\|+C\|\mathcal{K}\|_{W^{1,1}(B_{M})}\|\psi\|(\|\psi\|+\|\nabla\psi\|)\] \[\leq\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}(\|\psi\|_{*}^{ 2}+\|\psi\|^{2}),\] \[I_{2} \leq\|\nabla(\widehat{\varphi}-\overline{\varphi})\|_{3}\|\mathbf{w} \|(\|\mathcal{N}\psi\|_{6}+\|\psi\|_{6})\leq\varepsilon\|\nabla\psi\|^{2}+C_{ \varepsilon}\|\nabla(\widehat{\varphi}-\overline{\varphi})\|_{3}^{2}\|\mathbf{w} \|^{2}\] \[\quad+C_{\varepsilon}(\|\psi\|_{*}^{2}+\|\psi\|^{2})\] \[\leq\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}\|\nabla( \widehat{\varphi}-\overline{\varphi})\|\|\Delta(\widehat{\varphi}-\overline{ \varphi})\|\|\mathbf{w}\|^{2}+C_{\varepsilon}(\|\psi\|_{*}^{2}+\|\psi\|)^{2}\] \[\leq\varepsilon\|\nabla\psi\|^{2}+\|\mathbf{w}\|^{4}+C_{\varepsilon }\|\widehat{\varphi}-\overline{\varphi}\|_{L^{\infty}(0,T;V)}^{2}\|\Delta( \widehat{\varphi}-\overline{\varphi})\|^{2}+C_{\varepsilon}(\|\psi\|_{*}^{2}+ \|\psi\|^{2}),\] \[I_{4} \leq\|F^{\prime\prime}(\overline{\varphi})\|_{\infty}\|\psi\|^{2 }+\|\mathcal{K}\|_{\infty}\|\widehat{\varphi}-\overline{\varphi}\|_{4}^{2}\| \psi\|\leq C(\|\psi\|^{2}+\|\widehat{\varphi}-\overline{\varphi}\|_{V}^{4}),\] where we applied also Poincare's inequality and Taylor's formula (4.19) with \(f=F^{\prime}\) and integrated by parts the first integral \(I_{1}\). 
As for \(I_{6}\), using the above definition of \(\Theta\), we infer that \[I_{6} \leq\|\Theta\|\|\nabla\psi\|\leq\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}\|\Theta\|^{2}\] \[\leq\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}\|F^{(3)}(\overline{\varphi})\|_{\infty}^{2}\|\psi\|_{3}^{2}\|\nabla\overline{\varphi}\|_{6}^{2}+C_{\varepsilon}\|\mathcal{K}\|_{\infty}^{2}\|\widehat{\varphi}-\overline{\varphi}\|_{6}^{4}\|\nabla\overline{\varphi}\|_{6}^{2}\] \[\leq\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}\|\psi\|_{3}^{2}\|\overline{\varphi}\|_{W^{1,6}}^{2}+C_{\varepsilon}\|\widehat{\varphi}-\overline{\varphi}\|_{V}^{4}\|\overline{\varphi}\|_{W^{1,6}}^{2}\] \[\leq 2\varepsilon\|\nabla\psi\|^{2}+C_{\varepsilon}\|\psi\|^{2}\|\overline{\varphi}\|_{W^{1,6}}^{4}+C_{\varepsilon}\|\widehat{\varphi}-\overline{\varphi}\|_{V}^{4}\|\overline{\varphi}\|_{W^{1,6}}^{2},\] owing to Taylor's formula (4.19) with \(f=F^{\prime\prime}\). Collecting all the above estimates, recalling **H2** and choosing \(\varepsilon\) suitably small, we get \[\frac{1}{2}\frac{d}{dt}(\|\psi\|_{*}^{2}+\|\psi\|^{2})+\alpha\|\nabla\psi\|^{2} \leq C(\|\psi\|_{*}^{2}+\|\psi\|^{2})+\|\widehat{\varphi}-\overline{\varphi}\|_{V}^{4}+\|\mathbf{w}\|^{4}\] \[\quad+C\|\widehat{\varphi}-\overline{\varphi}\|_{L^{\infty}(0,T;V)}^{2}\|\Delta(\widehat{\varphi}-\overline{\varphi})\|^{2}.\] We now apply Gronwall's lemma, noticing that \(t\mapsto\|\overline{\varphi}(t)\|_{W^{1,6}}^{4}\in L^{1}(0,T)\) and recalling the stability estimate (4.18), and obtain that \[\|\psi\|_{L^{\infty}(0,T;H)\cap L^{2}(0,T;V)}\leq C\|\mathbf{w}\|_{L^{6}(0,T;H_{\sigma})}^{2}.\] Second estimate. From the above estimate it readily follows, from a comparison argument in (4.15), that \[\|\vartheta\|_{L^{2}(0,T;V)}\leq C\|\boldsymbol{w}\|_{L^{6}(0,T;\boldsymbol{H}_{\sigma})}^{2}.\] Third estimate. We finally go back to equation (4.14) to infer by comparison, using the above bounds, that \[\|\partial_{t}\psi\|_{L^{2}(0,T;V^{*})}\leq C\|\boldsymbol{w}\|_{L^{6}(0,T;\boldsymbol{H}_{\sigma})}^{2}.\] This latter also entails \(\|\psi\|_{C^{0}([0,T];H)}\leq C\|\boldsymbol{w}\|_{L^{6}(0,T;\boldsymbol{H}_{\sigma})}^{2}\) due to the standard continuous embedding \(H^{1}(0,T;V^{*})\cap L^{2}(0,T;V)\hookrightarrow C^{0}([0,T];H)\). This concludes the proof as the above estimates imply (4.13), whence the claim.

### Optimality conditions

The final step consists in proving Theorem 2.11, but first we point out the following intermediate result. **Theorem 4.3**.: _Assume that_ **C1**_-_**C6** _are in force. Let \(\overline{\boldsymbol{v}}\in\mathcal{V}_{\mathrm{ad}}\) be an optimal control with corresponding state \((\overline{\varphi},\overline{\mu})=\mathcal{S}(\overline{\boldsymbol{v}})\). Then, it holds that_ \[\gamma_{1}\int_{Q}(\overline{\varphi}-\varphi_{Q})\xi+\gamma_{2}\int_{\Omega}(\overline{\varphi}(T)-\varphi_{\Omega})\xi(T)+\gamma_{3}\int_{Q}\overline{\boldsymbol{v}}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}})\geq 0\quad\forall\boldsymbol{v}\in\mathcal{V}_{\mathrm{ad}}, \tag{4.20}\] _where \((\xi,\eta)\) denotes the unique solution to the linearized system associated with \(\boldsymbol{w}=\boldsymbol{v}-\overline{\boldsymbol{v}}\) as given by Theorem 4.1._ Proof of Theorem 4.3.: This is a straightforward consequence of the abstract result (4.5) along with Theorem 4.1, the definition of the cost functional \(\mathcal{J}\), and the chain rule. As customary, (4.20) is not directly useful in numerical schemes and has to be suitably simplified. This is done with the help of an additional problem related to (1.18)-(1.21), called the adjoint system.
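Before stating the adjoint system, it may be useful to indicate how the simplified first-order condition that results from it (cf. (4.27) and Theorem 2.11) is typically exploited in practice. The following sketch is a purely illustrative, finite-dimensional toy analogue, not an implementation of the model above: a linear system \(Ay=Bv\) stands in for the state problem, a quadratic tracking functional mimics the cost, and clipping mimics the box constraints defining \(\boldsymbol{\mathcal{V}}_{\mathrm{ad}}\); all matrices, parameters and function names below are assumptions made only for this example.

```python
import numpy as np

# Toy, finite-dimensional analogue of an adjoint-based projected-gradient loop.
# A y = B v stands in for the state system, the quadratic tracking cost mimics
# the structure of the cost functional, and the clipping mimics the box
# constraints of the admissible set.  Everything here is illustrative only.
rng = np.random.default_rng(0)
n, m = 40, 15                                    # "state" and "control" dimensions
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
y_target = rng.standard_normal(n)                # plays the role of the target phi_Q
gamma1, gamma3 = 1.0, 1e-2                       # tracking and control weights
v_min, v_max = -1.0, 1.0                         # box constraints (cf. the set V_ad)

def solve_state(v):
    # forward solve: y = S(v)
    return np.linalg.solve(A, B @ v)

def solve_adjoint(y):
    # backward solve: A^T p = gamma1 (y - y_target), the analogue of the adjoint problem
    return np.linalg.solve(A.T, gamma1 * (y - y_target))

def reduced_cost(v):
    y = solve_state(v)
    return 0.5 * gamma1 * np.sum((y - y_target) ** 2) + 0.5 * gamma3 * np.sum(v ** 2)

def reduced_gradient(v):
    # gamma3*v + B^T p plays here the role of gamma3*v - P_sigma(p grad(phi))
    return gamma3 * v + B.T @ solve_adjoint(solve_state(v))

# step size not exceeding the inverse Lipschitz constant of the gradient
L = gamma3 + gamma1 * np.linalg.norm(np.linalg.solve(A, B), 2) ** 2
step = 1.0 / L

v = np.zeros(m)
print(f"initial reduced cost: {reduced_cost(v):.4f}")
for _ in range(300):
    v = np.clip(v - step * reduced_gradient(v), v_min, v_max)  # projection onto the box
print(f"reduced cost after 300 projected-gradient steps: {reduced_cost(v):.4f}")
```

In the infinite-dimensional setting, the two linear solves would correspond to one forward solve of (1.18)-(1.21) and one backward solve of the adjoint system (4.21)-(4.24) introduced below, while the clipping would correspond to the pointwise projection onto \([\mathbf{v}_{\min},\mathbf{v}_{\max}]\) appearing in the definition (4.1) of \(\boldsymbol{\mathcal{V}}_{\mathrm{ad}}\), with the solenoidality constraint accounted for through \(\boldsymbol{P}_{\sigma}\).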
It consists of a backward-in-time parabolic system in the variables \((p,q)\) and, in its strong form, it reads as \[-\partial_{t}p-\mathcal{K}*q+F^{\prime\prime}(\overline{\varphi} )q-\nabla p\cdot\overline{\boldsymbol{v}}=\gamma_{1}(\overline{\varphi}- \varphi_{Q}) \text{in }Q, \tag{4.21}\] \[q=-\Delta p \text{in }Q,\] (4.22) \[\partial_{\boldsymbol{n}}p=0 \text{on }\Sigma,\] (4.23) \[p(T)=\gamma_{2}(\overline{\varphi}(T)-\varphi_{\Omega}) \text{in }\Omega. \tag{4.24}\] **Theorem 4.4**.: _Suppose that_ **C1**_-_**C6** _hold. Let \(\overline{\boldsymbol{v}}\) be an optimal control with corresponding state \((\overline{\varphi},\overline{\mu}):=\mathcal{S}(\overline{\boldsymbol{v}})\). Then, there exists a unique solution \((p,q)\) to the adjoint system (4.21)-(4.24) in the sense that_ \[p\in H^{1}(0,T;V^{*})\cap C^{0}([0,T];V)\cap L^{2}(0,T;W),\quad q \in L^{2}(0,T;H),\] _and it solves_ \[-\langle\partial_{t}p,v\rangle-\int_{\Omega}(\mathcal{K}*q)v+\int_{ \Omega}F^{\prime\prime}(\overline{\varphi})qv-\int_{\Omega}(\nabla p\cdot \overline{\mathbf{v}})v=\int_{\Omega}\gamma_{1}(\overline{\varphi}-\varphi_{Q})v\] \[\qquad\text{for every $v\in V$, and a.e. in $(0,T)$}, \tag{4.25}\] \[\int_{\Omega}qv=\int_{\Omega}\nabla p\cdot\nabla v\quad\text{for every $v\in V$, and a.e. in $(0,T)$}, \tag{4.26}\] _as well as the terminal condition_ \[p(T)=\gamma_{2}(\overline{\varphi}(T)-\varphi_{\Omega})\quad\text{in $ \Omega$}.\] Proof of Theorem 4.4.: Again, we proceed formally. The rigorous proof can be performed, e.g., by using a Faedo-Galerkin approximation scheme. First estimate.We test (4.21) by \(q\), (4.22) by \(\partial_{t}p\) and add the resulting identities to infer that \[-\frac{1}{2}\frac{d}{dt}\|\nabla p\|^{2}+\int_{\Omega}F^{\prime \prime}(\overline{\varphi})|q|^{2}=\int_{\Omega}(\mathcal{K}*q)q+\int_{ \Omega}(\nabla p\cdot\overline{\mathbf{v}})q+\int_{\Omega}\gamma_{1}(\overline{ \varphi}-\varphi_{Q})q.\] As done several times, we owe to the convexity of \(F\) in **H2** to derive that the second term on the left-hand side is bounded from below by \(\alpha\|q\|^{2}\). As for the terms on the right-hand side, we use basic computations by means of Young's inequality for convolutions to obtain \[\int_{\Omega}(\mathcal{K}*q)q=\langle\mathcal{K}*q,q\rangle \leq\|\mathcal{K}*q\|_{V}\|q\|_{*}\leq\frac{\alpha}{6}\|q\|^{2}+ C\|q\|_{*}^{2}\leq\frac{\alpha}{6}\|q\|^{2}+C\|\nabla p\|^{2},\] \[\int_{\Omega}(\nabla p\cdot\overline{\mathbf{v}})q+\int_{\Omega} \gamma_{1}(\overline{\varphi}-\varphi_{Q})q \leq\|\nabla p\|\|\overline{\mathbf{v}}\|_{\infty}\|q\|+C(\| \overline{\varphi}\|_{\infty}+\|\varphi_{Q}\|)\|q\|\] \[\leq\frac{\alpha}{3}\|q\|^{2}+C(1+\|\nabla p\|^{2}+\|\varphi_{Q} \|^{2}).\] Observe that, exploiting (4.26), in the first estimate we have used the bound \[\|q\|_{*}\leq\|\nabla p\|.\] Next, we integrate over \((t,T)\) for an arbitrary \(t\in[0,T)\) to infer that \[\|\nabla p(t)\|^{2}+\alpha\|q\|^{2}\leq\|\nabla p(T)\|^{2}+C\int_{t}^{T}| \nabla p|^{2}+C.\] Then, we recall (4.24), **C4** and the fact that \(\overline{\varphi}\in C^{0}([0,T];V)\) as strong solution in the sense of Theorem 2.2, whence the terminal condition on the right-hand side of the above inequality can be bounded. 
The backward-in-time Gronwall lemma then entails that \[\|p\|_{L^{\infty}(0,T;V)}+\|q\|_{L^{2}(0,T;H)}\leq C.\] Second estimate.A comparison argument in (4.22) and elliptic regularity theory produce \[\|p\|_{L^{2}(0,T;W)}\leq C.\] Third estimate.Finally, it is a standard matter to derive from (4.21) that \[\|\partial_{t}p\|_{L^{2}(0,T;V^{*})}\leq C.\] In conclusion, we recall that (4.21)-(4.24) is linear: thus, the above computations already entail uniqueness. Proof of Theorem 2.11.: To prove the theorem, we compare the variational inequalities (2.26) with (4.20). We then realize that it suffices to check that \[-\int_{Q}\boldsymbol{P}_{\sigma}(p\nabla\overline{\varphi})\cdot(\boldsymbol {v}-\overline{\boldsymbol{v}})=\gamma_{1}\int_{Q}(\overline{\varphi}-\varphi _{Q})\xi+\gamma_{2}\int_{\Omega}(\overline{\varphi}(T)-\varphi_{\Omega})\xi(T) \tag{4.27}\] with \(\xi\) being the first component of the unique solution to (4.6)-(4.9) associated to \(\boldsymbol{w}=\boldsymbol{v}-\overline{\boldsymbol{v}}\). In this direction, we multiply (4.10) by \(p\) with \(\boldsymbol{w}=\boldsymbol{v}-\overline{\boldsymbol{v}}\), (4.11) by \(-q\) and integrate over \((0,T)\). We then owe to the well-known integration-by-parts formula for functions belonging to \(H^{1}(0,T;V^{*})\cap L^{2}(0,T;H)\) as well as to (4.25) and (4.26) to obtain that \[0 =\int_{0}^{t}\langle\partial_{t}\xi,p\rangle-\int_{Q}\xi\overline {\boldsymbol{v}}\cdot\nabla p+\int_{Q}\nabla\overline{\varphi}\cdot( \boldsymbol{v}-\overline{\boldsymbol{v}})p+\int_{Q}\nabla\xi\cdot\nabla p\] \[\quad+\int_{Q}[-\eta-\mathcal{K}*\xi+F^{\prime\prime}(\overline {\varphi})\xi]q\] \[=-\int_{0}^{t}\langle\partial_{t}p,\xi\rangle+\int_{Q}\xi[- \mathcal{K}*q+F^{\prime\prime}(\overline{\varphi})q-\nabla p\cdot\overline{ \boldsymbol{v}}]+\int_{Q}\eta[-q-\Delta p]\] \[\quad+\int_{\Omega}p(T)\xi(T)-\int_{\Omega}p(0)\xi(0)+\int_{Q}p \nabla\overline{\varphi}\cdot(\boldsymbol{v}-\overline{\boldsymbol{v}}).\] Using the initial and terminal conditions in (4.9) and (4.24), we are led to (4.27). In particular, we notice that, being \(\boldsymbol{w}=\boldsymbol{v}-\overline{\boldsymbol{v}}\in\boldsymbol{H}_{\sigma}\) and the projector \(\boldsymbol{P}_{\sigma}\) being selfadjoint, it holds that \[\int_{Q}p\nabla\overline{\varphi}\cdot(\boldsymbol{v}-\overline{\boldsymbol {v}})=\int_{Q}\boldsymbol{P}_{\sigma}(p\nabla\overline{\varphi})\cdot( \boldsymbol{v}-\overline{\boldsymbol{v}}),\] concluding the proof. ## Acknowledgments Partial support from the MIUR-PRIN Grant 2020F3NCPX "Mathematics for industry 4.0 (Math4I4)" is gratefully acknowledged. Besides, the authors are affiliated to the GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica).
2310.10924
Resonance fluorescence in $\Lambda$, $V$ and $\Xi$ -- type three-level configurations
We theoretically study the resonance fluorescence spectra of the lambda ($\Lambda$), vee ($V$) and cascade ($\Xi$) type three-level configurations. It is shown that each system with two detuning frequencies can be modelled using the $SU(3)$ symmetry group to derive a generalized optical Bloch equation. For each configuration, this equation is solved to calculate the two-time correlation function by invoking the quantum regression theorem. The incoherent part of the power spectra gives the characteristic multi-peak fluorescence profiles which are different for different configurations. We also discuss how the dressed-state structure of such system can explain the origin of quintuplet profile of the fluorescent spectrum.
Surajit Sen, Tushar Kanti Dey, Bimalendu Deb
2023-10-17T01:41:22Z
http://arxiv.org/abs/2310.10924v1
# Resonance fluorescence in \(\Lambda\), \(V\) and \(\Xi\) - type three-level configurations ###### Abstract We theoretically study the resonance fluorescence spectra of the lambda (\(\Lambda\)), vee (\(V\)) and cascade (\(\Xi\)) type three-level configurations. It is shown that each system with two detuning frequencies can be modelled using the \(SU(3)\) symmetry group to derive a generalized optical Bloch equation. For each configuration, this equation is solved to calculate the two-time correlation function by invoking the quantum regression theorem. The incoherent part of the power spectra gives the characteristic multi-peak fluorescence profiles which are different for different configurations. We also discuss how the dressed-state structure of such system can explain the origin of quintuplet profile of the fluorescent spectrum. _Keywords_: Resonance Fluorescence, Three-level System, SU(3) Group, Optical Bloch Equation, Power Spectrum ## 1 Introduction The behavior of two-, three-, or multi-level systems when subjected to an intense electromagnetic field gives rise to numerous intriguing spectral features that may find a wide range of applications in quantum optics and quantum metrology. Historically, a two-level system driven by a strong resonant field was first investigated by Mollow who showed the existence of two symmetrical sidebands in the both sides of the central Rayleigh peak [1, 2]. In the three-peak spectrum, popularly known as Mollow triplet, the sidebands are located at a shift proportional to the Rabi frequency of the system with their intensity equal to one-third that of the central peak. This phenomenon is known as Resonance Fluorescence (RF) which arises due to the transitions between the dressed states of the effective system formed under the action of an intense external field [3]. The predecessors of this light-matter interaction model are, Wigner-Weisskopf model of the spontaneous emission [4] which leads to Lorentzian spectra, Autler-Townes (AT) effect, where a doublet spectral feature is observed if a two-level system is resonantly driven by a strong oscillatory electromagnetic field [5] and the Fano model [6] which is characterized by an asymmetric spectrum resulting due to the presence of a continuum state among the states involved. The experimental observation of the anti-bunching of photons in the intensity-intensity correlation is an important outcome of the strongly coupled light-matter quantum dynamics [7, 8]. The generalization of the two-level system to the three-level one is quite nontrivial due to three distinct categories of transitions known as, the lambda (\(\Lambda\)), vee (\(V\)) and cascade (\(\Xi\)) type configurations shown in Fig.1. In the recent past the three-level system has drawn considerable attention because it exhibits wide range of quantum-optical phenomena such as two photon coherence [9], double resonance process [10], three-level super-radiance [11], resonance Raman scattering [12], population trapping [2], tri-level echoes [13], STIRAP [14], quantum jump [15], quantum zeno effect [16], three-level Fano effect [17], electromagnetically induced transparency (EIT) [18, 19, 20] etc. In context with the resonance fluorescence, it is revealed that Mollow-like sideband is not limited to the two-level systems alone, but is also displayed by the multilevel system, including three-level systems driven by two intense driving fields [21, 22, 23]. 
In the eighties, the RF in a two-level system gained prominence due to its inherent significance, particularly with the introduction of damping induced by the squeezed vacuum [24, 25]. Since then, the RF in three-level systems has been addressed in the context of the squeezed vacuum for the \(V\)[26, 27, 28] and \(\Xi\)[29] type configurations. Apart from that, it was further investigated by considering the emission spectrum of two-photon resonant excitation using the \(\Xi\) model [30], in nano-particle systems [31] and under trapped conditions using the \(\Lambda\) configuration [32]. In the realm of three-level configurations, the topology of each configuration is fundamentally different from the others due to their distinct characteristics. Therefore, despite the aforesaid works, it is crucial to adopt a comprehensive approach to study the RF for all three-level configurations, particularly to decipher the distinct structure of the dressed states of each system. Operationally, a three-level configuration consists of two coupled two-level systems involving two dipole transitions, as shown in Fig.1. Such a system with multiple Lindblad terms takes into account the effects of spontaneous emission and dephasing caused by the vacuum fluctuations in two distinct transition pathways. Consequently, the solution of such a model becomes quite complex to handle theoretically. Recently, the SU(3) group has been shown to have the potential to unveil a wide range of quantum optical phenomena involving three-level systems [33, 34, 35, 20, 17]. It is therefore interesting to explore the RF of all three-level configurations using this group theoretic method and to give a comprehensive comparison of this quantum phenomenon. Our method provides an elegant approach to solve a set of Optical Bloch equations (OBE) and subsequently gives the correlation function for fluctuations of the atomic variables around the steady state. The remaining sections of the paper are organized as follows: In Section II, using the Gell-Mann matrices of the \(SU(3)\) group, we revisit the construction of the dissipative \(\Lambda\), \(V\) and \(\Xi\) configurations, where the dissipation is taken care of by the Lindblad term. Then, in Section III, the OBE of each configuration is derived. The two-time correlation function obtained using the quantum regression theorem is presented in Section IV. The power spectra of all configurations are compared, and the structure of the dressed states of the three-level system is enunciated in Section V to understand the quintuplet spectrum of resonance fluorescence. In the final Section, we highlight the main results of the paper and discuss the outlook. ## 2 The Models The dissipative process in the \(\Lambda,V\) and \(\Xi\) type three-level configurations can be described by the Liouville equation with a Lindblad term (\(A=\Lambda,V,\Xi\)), \[\frac{d\rho^{A}}{dt}=-i\left[H^{A},\rho^{A}\right]+\mathfrak{L}_{D}^{A}, \tag{1}\] where the density matrix is given by \[\rho^{A}=\left(\begin{array}{ccc}\rho_{33}&\rho_{32}&\rho_{31}\\ \rho_{23}&\rho_{22}&\rho_{21}\\ \rho_{13}&\rho_{12}&\rho_{11}\end{array}\right) \tag{2}\] written in the atomic basis states \(|1\rangle=(0,0,1)^{T}\), \(|2\rangle=(0,1,0)^{T}\) and \(|3\rangle=(1,0,0)^{T}\), with \(\mathfrak{L}_{D}^{A}\) as the Lindblad term.
Using Gellmann matrices of the \(SU(3)\) representation, the Hamiltonian and Lindblad terms of these configurations are given by [34, 35], \[H^{\Lambda}=\left(\begin{array}{ccc}\frac{1}{3}\big{(}\Delta_{13}^{\Lambda}+ \Delta_{23}^{\Lambda}\big{)}&g_{23}&g_{13}\\ g_{23}&\frac{1}{3}\big{(}\Delta_{13}^{\Lambda}-2\Delta_{23}^{\Lambda}\big{)}&0 \\ g_{13}&0&-\frac{1}{3}\big{(}2\Delta_{13}^{\Lambda}-\Delta_{23}^{\Lambda}\big{)},\end{array}\right), \tag{3a}\] \[\mathfrak{L}_{D}^{\Lambda} =\Gamma_{31}^{\Lambda}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}V_{- }V_{+}\rho^{\Lambda}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+})\] \[+\Gamma_{32}^{\Lambda}(T_{+}\rho^{\Lambda}T_{-}-\frac{1}{2}T_{-}T _{+}\rho^{\Lambda}-\frac{1}{2}\rho^{\Lambda}T_{-}T_{+}), \tag{3b}\] for the \(\Lambda\) system with the detuning offsets, \(\Delta_{13}^{\Lambda}=2\omega_{31}+\omega_{32}-\Omega_{13}\) and \(\Delta_{23}^{\Lambda}=\omega_{31}+2\omega_{32}-\Omega_{23}\), \[H^{V}=\left(\begin{array}{ccc}\frac{1}{3}\big{(}2\Delta_{13}^{V}-\Delta_{12} ^{V}\big{)}&0&g_{13}\\ 0&\frac{1}{3}\big{(}2\Delta_{12}^{V}-\Delta_{13}^{V}\big{)}&g_{12}\\ g_{13}&g_{12}&-\frac{1}{3}\big{(}\Delta_{12}^{V}+\Delta_{13}^{V}\big{)},\end{array}\right) \tag{3a}\] \[\mathfrak{L}_{D}^{V} =\Gamma_{31}^{V}(V_{-}\rho^{V}V_{+}-\frac{1}{2}V_{+}V_{-}\rho^{V} -\frac{1}{2}\rho^{V}V_{+}V_{-})\] \[+\Gamma_{21}^{V}(U_{-}\rho^{V}U_{+}-\frac{1}{2}U_{+}U_{-}\rho^{V} -\frac{1}{2}\rho^{V}U_{+}U_{-}), \tag{3b}\] for the \(V\) system with \(\Delta_{12}^{V}=\omega_{31}+2\omega_{21}-\Omega_{12}\), \(\Delta_{13}^{V}=2\omega_{31}+\omega_{21}-\Omega_{13}\) and \[H^{\Xi}=\left(\begin{array}{ccc}\frac{1}{3}\big{(}\Delta_{12}^{\Xi}+2\Delta_ {23}^{\Xi}\big{)}&g_{23}&0\\ g_{23}&\frac{1}{3}\big{(}\Delta_{12}^{\Xi}-\Delta_{23}^{\Xi}\big{)}&g_{12}\\ 0&g_{12}&-\frac{1}{3}\big{(}2\Delta_{12}^{\Xi}+\Delta_{23}^{\Xi}\big{)}\end{array} \right), \tag{3a}\] \[\mathfrak{L}_{D}^{\Xi} =\Gamma_{32}^{\Xi}(T_{-}\rho^{\Xi}T_{+}-\frac{1}{2}T_{+}T_{-} \rho^{\Xi}-\frac{1}{2}\rho^{\Xi}T_{+}T_{-})\] \[+\Gamma_{21}^{\Xi}(U_{-}\rho^{\Xi}U_{+}-\frac{1}{2}U_{+}U_{-}\rho^ {\Xi}-\frac{1}{2}\rho^{\Xi}U_{+}U_{-}), \tag{3b}\] for the \(\Xi\) system with detuning \(\Delta_{23}^{\Xi}=2\omega_{21}+\omega_{32}-\Omega_{23}\) and \(\Delta_{12}^{\Xi}=2\omega_{32}-2\omega_{21}-\Omega_{12}\). Here, \(g_{ij}\) (\(i,j=1,2,3\)) is the system-field coupling parameter with the laser field with frequency \(\Omega_{ij}\), \(\Gamma_{ji}\) be the decay constant for the transition pathway \(j\to i\) (\(j>i\)) and \(\omega_{ij}=\omega_{i}-\omega_{j}\) be the difference between transition frequencies of \(|i\rangle\)-th and \(|j\rangle\)-th levels with \(\hbar\omega_{1}\) (\(=E_{1}\)), \(\hbar\omega_{2}\) (\(=E_{2}\)) and \(\hbar\omega_{3}\) (\(=E_{3}\)) be the absolute energies of three levels shown in Fig.1. 
The SU(3) shift vectors appearing in Equations (3_a_-5_b_) are given by [36], \[\begin{split}& T_{\pm}=\frac{1}{2}(\lambda_{1}\pm i\lambda_{2}), \quad V_{\pm}=\frac{1}{2}(\lambda_{4}\pm i\lambda_{5}),\quad U_{\pm}=\frac{1}{ 2}(\lambda_{6}\pm i\lambda_{7})\\ & T_{3}\,=\lambda_{3},\quad V_{3}=\frac{1}{2}(\sqrt{8}\lambda_{8 }+\lambda_{3}),\quad U_{3}=\frac{1}{2}(\sqrt{8}\lambda_{8}-\lambda_{3}).\end{split} \tag{6}\] where the Gell-Mann matrices are given by, \[\lambda_{0}= \left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\quad\lambda_{1}=\left(\begin{array}{ccc}0&1&0\\ 1&0&0\\ 0&0&0\end{array}\right),\quad\lambda_{2}=\left(\begin{array}{ccc}0&-i&0\\ i&0&0\\ 0&0&0\end{array}\right),\] \[\lambda_{3}= \left(\begin{array}{ccc}1&0&0\\ 0&-1&0\\ 0&0&0\end{array}\right),\quad\lambda_{4}=\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ 1&0&0\end{array}\right),\quad\lambda_{5}=\left(\begin{array}{ccc}0&0&-i\\ 0&0&0\\ i&0&0\end{array}\right), \tag{7}\] \[\lambda_{6}= \left(\begin{array}{ccc}0&0&0\\ 0&0&1\\ 0&1&0\end{array}\right),\quad\lambda_{7}=\left(\begin{array}{ccc}0&0&0\\ 0&0&-i\\ 0&i&0\end{array}\right),\quad\lambda_{8}=\frac{1}{\sqrt{3}}\left(\begin{array} []{ccc}1&0&0\\ 0&1&0\\ 0&0&-2\end{array}\right),\] which are normalized as \(\lambda_{l}\lambda_{m}=\delta_{lm}+d_{lmn}\lambda_{n}+f_{lmp}\lambda_{p}\) with \(d_{lmn}\) and \(f_{lmp}\)\((l,m,n,p=1,2,\ldots,8)\) being the completely symmetric and completely antisymmetric structure constants. It is worth mentioning here that aforesaid time-independent Hamiltonian appearing in Equations (3\(a\),4\(a\),5_a_) can be obtained by using the transformation [34, 35], \[H^{A}=U^{A\dagger}(t)H^{A}(t)U^{A}(t)-i\Bigg{[}\frac{dU^{A}(t)}{dt}\Bigg{]}U^{A \dagger}(t), \tag{8}\] where the unitary operators are given by, \[U^{\Lambda}(t)=\exp\big{[}-\frac{i}{3}\big{(}(2\Omega_{13}-\Omega_{23})V_{3}+( 2\Omega_{23}-\Omega_{13})T_{3}\big{)}\big{]}t,\] (9_a_) \[U^{V}(t)=\exp\big{[}-\frac{i}{3}\big{(}(2\Omega_{12}-\Omega_{13})U_{3}+( \Omega_{13}-2\Omega_{23})V_{3}\big{)}\big{]}t,\] (9_b_) \[U^{\Xi}(t)=\exp\big{[}-\frac{i}{3}\big{(}(2\Omega_{12}+\Omega_{23})U_{3}+( \Omega_{12}+2\Omega_{23})T_{3}\big{)}\big{]}t,\] (9_c_) with the time-dependent Hamiltonians of three configurations, \[H^{\Lambda}(t)=\omega_{31}V_{3}+\omega_{32}T_{3}+g_{13}V_{+}\exp(-i\Omega_{13 }t)+g_{23}T_{+}\exp(-i\Omega_{23}t)+h.c.,\] (10_a_) \[H^{V}(t)=\omega_{31}V_{3}+\omega_{21}U_{3}+g_{13}V_{+}\exp(-i \Omega_{13}t)+g_{12}U_{+}\exp(-i\Omega_{12}t)+h.c.,\] (10_b_) \[H^{\Xi}(t)=\omega_{32}T_{3}+\omega_{21}U_{3}+g_{12}U_{+}\exp(-i \Omega_{12}t)+g_{23}T_{+}\exp(-i\Omega_{23}t)+h.c..\] (10_c_) Having knowledge about the model Hamiltonian of all three configurations and the Lindblad term which characterizes the dissipation in such system, we proceed to derive the Optical Bloch Equations. ## 3 Bloch Equation for Three-level Configuration The Bloch vector \({\bf S}_{\mathbb{P}_{i}}^{A}(t)\) of a generic three-level configuration is defined as, \[{\bf S}_{\mathbb{P}_{i}}^{A}(t)=Tr[\rho^{A}(t)\mathbb{P}_{i}], \tag{11}\] where \(\mathbb{P}_{i}\) (\(T_{+,-,3},V_{+,-,3},U_{+,-,3}\)) is the \(SU(3)\) shift operators. 
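These operators are straightforward to assemble numerically. The following Python/NumPy sketch, which is purely illustrative and not part of the original derivation (all variable names are ours), constructs the Gell-Mann matrices of Equation (7) and the raising and lowering combinations of Equation (6) in the atomic basis \(|1\rangle=(0,0,1)^{T}\), \(|2\rangle=(0,1,0)^{T}\), \(|3\rangle=(1,0,0)^{T}\), and verifies their action as ladder operators between the levels.

```python
import numpy as np

# Gell-Mann matrices (Equation 7); rows/columns are ordered as (|3>, |2>, |1>).
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex)

# SU(3) shift (ladder) operators, Equation (6).
T_p, T_m = (l1 + 1j * l2) / 2, (l1 - 1j * l2) / 2   # couples |2> and |3>
V_p, V_m = (l4 + 1j * l5) / 2, (l4 - 1j * l5) / 2   # couples |1> and |3>
U_p, U_m = (l6 + 1j * l7) / 2, (l6 - 1j * l7) / 2   # couples |1> and |2>

# Sanity check: T_+ = |3><2|, V_+ = |3><1|, U_+ = |2><1| in this basis.
ket3, ket2, ket1 = np.eye(3)[0], np.eye(3)[1], np.eye(3)[2]
assert np.allclose(T_p, np.outer(ket3, ket2))
assert np.allclose(V_p, np.outer(ket3, ket1))
assert np.allclose(U_p, np.outer(ket2, ket1))
```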
Using the algebra of the shift vectors it is easy to see that, \[\frac{d{\bf S}_{\mathbb{P}_{i}}^{A}(t)}{dt}=Tr\big{[}\frac{d\rho^ {A}(t)}{dt}\mathbb{P}_{i}\big{]}, \tag{12}\] \[=-i\big{\langle}\big{[}H^{A},\rho^{A}\big{]}\mathbb{P}_{i}\big{\rangle} +\big{\langle}{\mathfrak{L}}_{D}^{A}\mathbb{P}_{i}\big{\rangle},\] where Equation (1) is used along with \(\big{\langle}\hat{\mathcal{O}}\big{\rangle}^{A}={\rm Tr}\Big{[}\hat{\mathcal{ O}}\rho^{A}\Big{]}\). Using the Lindblad terms in Equations (3b, 4b, 5b), Equation (12) for three configurations are given by, \[\frac{d{\bf S}_{\mathbb{P}_{i}}^{A}(t)}{dt}=-i\,{\rm Tr}\left[(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})\mathbb{P}_{i}\right]+\Gamma _{31}Tr\big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}- \frac{1}{2}V_{-}V_{+}\rho^{\Lambda})\mathbb{P}_{i}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac{ 1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})\mathbb{P}_ {i}\big{]}, \tag{13a}\] \[\frac{d{\bf S}_{\mathbb{P}_{i}}^{V}(t)}{dt}=-i\,{\rm Tr}\left[(H^ {V}\rho^{V}-\rho^{V}H^{V})\mathbb{P}_{i}\right]+\Gamma_{31}Tr\big{[}(V_{-} \rho^{\Lambda}V_{+}-\frac{1}{2}\rho^{\Lambda}V_{+}V_{-}-\frac{1}{2}V_{+}V_{-} \rho^{\Lambda})\mathbb{P}_{i}\big{]}\] \[\qquad\qquad+\Gamma_{21}Tr\big{[}(U_{-}\rho^{V}U_{+}-\frac{1}{2} \rho^{V}U_{+}U_{-}-\frac{1}{2}U_{+}U_{-}\rho^{V})\mathbb{P}_{i}\big{]},\] (13b) \[\frac{d{\bf S}_{\mathbb{P}_{i}}^{\Xi}(t)}{dt}=-i\,{\rm Tr}\left[(H^ {\Xi}\rho^{\Xi}-\rho^{\Xi}H^{\Xi})\mathbb{P}_{i}\right]+\Gamma_{21}Tr\big{[}(U_ {-}\rho^{\Xi}U_{+}-\frac{1}{2}\rho^{\Xi}U_{+}U_{-}-\frac{1}{2}U_{+}U_{-}\rho^ {\Xi})\mathbb{P}_{i}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{-}\rho^{\Xi}T_{+}-\frac{1}{2 }\rho^{\Xi}T_{+}T_{-}-\frac{1}{2}T_{+}T_{-}\rho^{\Xi})\mathbb{P}_{i}\big{]}, \tag{13c}\] Thus we note that only a pair of SU(3) projection vectors are involved for a specific configuration. 
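To make Equation (13a) concrete, one can evaluate the right-hand side of the master equation for the \(\Lambda\) configuration numerically and project it onto the shift operators via Equation (12); this is exactly the operation that generates the component equations written out below. A minimal, self-contained Python sketch follows; the coupling and decay values are placeholders, and the diagonal operators are taken in the form that reproduces the population reconstruction of Equation (15) below.

```python
import numpy as np

# Basis |1> = (0,0,1)^T, |2> = (0,1,0)^T, |3> = (1,0,0)^T; shift operators
# T_+ = |3><2|, V_+ = |3><1|, U_+ = |2><1| (cf. the sanity check above).
ket3, ket2, ket1 = np.eye(3, dtype=complex)
T_p, V_p, U_p = np.outer(ket3, ket2), np.outer(ket3, ket1), np.outer(ket2, ket1)
T_m, V_m, U_m = T_p.conj().T, V_p.conj().T, U_p.conj().T
# Diagonal operators chosen so that Equation (15) recovers the populations.
T3 = np.diag([1, -1, 0]).astype(complex)
V3 = np.diag([1, 0, -1]).astype(complex)
U3 = np.diag([0, 1, -1]).astype(complex)

def lindblad_rhs_lambda(rho, H, Gam31, Gam32):
    """Right-hand side of Equation (1) with the Lambda dissipator of (3b)."""
    def D(L, r):  # Lindblad dissipator for jump operator L
        return L @ r @ L.conj().T - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L)
    return -1j * (H @ rho - rho @ H) + Gam31 * D(V_p, rho) + Gam32 * D(T_p, rho)

# Hamiltonian (3a) at zero detunings; g13, g23 are placeholder couplings.
g13, g23 = 2.0, 2.0
H_lam = g23 * (T_p + T_m) + g13 * (V_p + V_m)

# Equation (12): dS_P/dt = Tr[(d rho/dt) P], evaluated here for a mixed state.
ops = {"T+": T_p, "T-": T_m, "T3": T3, "V+": V_p, "V-": V_m, "V3": V3,
       "U+": U_p, "U-": U_m, "U3": U3}
rho = np.eye(3, dtype=complex) / 3
drho = lindblad_rhs_lambda(rho, H_lam, Gam31=1.0, Gam32=1.0)
dS = {name: np.trace(drho @ P) for name, P in ops.items()}
```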
To derive the OBE of the \(\Lambda\) configuration, we first substitute the shift vector \(\mathbb{P}_{i}=V_{\pm},V_{3},T_{\pm},T_{3},U_{\pm},U_{3}\) in Equation (13a) and obtain, \[\frac{dS_{T_{+}}^{\Lambda}(t)}{dt}=-i\,{\rm Tr}\left[(H^{\Lambda} \rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})T_{+}\right]+\Gamma_{31}Tr\big{[}(V_{ +}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1}{2}V_{-}V_{+ }\rho^{\Lambda})T_{+}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})T_{-}\big{]}, \tag{13d}\] \[\frac{dS_{T_{-}}^{\Lambda}(t)}{dt}=-i\,{\rm Tr}\left[(H^{\Lambda} \rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})T_{-}\right]+\Gamma_{31}Tr\big{[}(V_{ +}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1}{2}V_{-}V_{+ }\rho^{\Lambda})T_{-}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})T_{-}\big{]},\] (13e) \[\frac{dS_{T_{3}}^{\Lambda}(t)}{dt}=-i\,{\rm Tr}\left[(H^{\Lambda} \rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})T_{3}\right]+\Gamma_{31}Tr\big{[}(V_{ +}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1}{2}V_{-}V_{+ }\rho^{\Lambda})T_{3}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}T_{-}T_{+}\rho^{\Lambda}-\frac{1}{2}\rho^{\Lambda}T_{-}T_{+})T_{3}\big{]}, \tag{13f}\] \[\frac{dS^{\Lambda}_{V_{+}}(t)}{dt}=-i\,{\rm Tr}\,\big{[}(H^{\Lambda} \rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})V_{+}\big{]}+\Gamma_{31}Tr\big{[}(V_{+} \rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1}{2}V_{-}V_{+} \rho^{\Lambda})V_{+}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}T_{+}\rho^{\Lambda}T_{-}-\frac{1 }{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})V_{+}\big{]},\] (14_d) \[\frac{dS^{\Lambda}_{V_{-}}(t)}{dt}=-i\,{\rm Tr}\,\big{[}(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})V_{-}\big{]}+\Gamma_{31}Tr \big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1}{ 2}V_{-}V_{+}\rho^{\Lambda})V_{-}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})V_{-}\big{]},\] (14_e) \[\frac{dS^{\Lambda}_{V_{3}}(t)}{dt}=-i\,{\rm Tr}\,\big{[}(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})V_{3}\big{]}+\Gamma_{31}Tr \big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1} {2}V_{-}V_{+}\rho^{\Lambda})V_{3}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})V_{3}\big{]},\] (14_f) \[\frac{dS^{\Lambda}_{U_{+}}(t)}{dt}=-i\,{\rm Tr}\,\big{[}(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})U_{+}\big{]}+\Gamma_{31}Tr \big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1} {2}V_{-}V_{+}\rho^{\Lambda})U_{+}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{(}T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})U_{+}\big{]},\] (14_g) \[\frac{dS^{\Lambda}_{U_{-}}(t)}{dt}=-i\,{\rm Tr}\,\big{[}(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})U_{-}\big{]}+\Gamma_{31}Tr \big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{-}V_{+}-\frac{1} {2}V_{-}V_{+}\rho^{\Lambda})U_{-}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{-}T_{+}-\frac{1}{2}T_{-}T_{+}\rho^{\Lambda})U_{-}\big{]},\] (14_h) \[\frac{dS^{\Lambda}_{U_{3}}(t)}{dt}=-i\,{\rm 
Tr}\,\big{[}(H^{ \Lambda}\rho^{\Lambda}-\rho^{\Lambda}H^{\Lambda})U_{3}\big{]}+\Gamma_{31}Tr \big{[}(V_{+}\rho^{\Lambda}V_{-}-\frac{1}{2}\rho^{\Lambda}V_{+}V_{-}-\frac{1} {2}V_{+}V_{-}\rho^{\Lambda})U_{3}\big{]}\] \[\qquad\qquad+\Gamma_{32}Tr\big{[}(T_{+}\rho^{\Lambda}T_{-}-\frac {1}{2}\rho^{\Lambda}T_{+}T_{-}-\frac{1}{2}T_{+}T_{-}\rho^{\Lambda})U_{3}\big{]},\] (14_i) Equation (11) supplemented by the normalization condition \({\rm Tr}\,\rho^{\Lambda}(t)=1\) gives the elements of the density matrix in terms of the Bloch vectors, \[\rho^{\Lambda}_{33}=\frac{1}{3}(1+S^{\Lambda}_{T_{3}}+S^{\Lambda}_{V_{3}}), \quad\rho^{\Lambda}_{32}=S_{T^{\Lambda}_{-}},\quad\rho^{\Lambda}_{31}=S^{ \Lambda}_{V_{-}},\] \[\rho^{\Lambda}_{23}=S^{\Lambda}_{T_{+}},\quad\rho^{\Lambda}_{22}= \frac{1}{3}(1-S^{\Lambda}_{T_{3}}+S^{\Lambda}_{U_{3}}),\quad\rho_{21}=S^{ \Lambda}_{U_{-}}, \tag{15}\] \[\rho^{\Lambda}_{13}=S^{\Lambda}_{V_{+}},\quad\rho^{\Lambda}_{12}=S ^{\Lambda}_{U_{+}},\quad\rho^{\Lambda}_{11}=\frac{1}{3}(1-S^{\Lambda}_{U_{3}}- S^{\Lambda}_{V_{3}}).\] Finally substituting the elements of the density matrix from Equation (15) into (14_a_-14_i_), we obtain the desired OBE of the \(\Lambda\) configuration, \[\frac{dS^{\bf A}_{\mathbb{P}_{1}}(t)}{dt}=M^{\Lambda}{\bf S}^{\bf A}_{\mathbb{ P}_{1}}+{\bf B}^{\bf A}, \tag{16}\] where the Bloch matrix \(M^{\Lambda}\) and the inhomogeneous term \({\bf B}^{\bf A}\) are given in Appendix. Proceeding similar way, the OBE for the \(V\) and \(\Xi\) type configurations are obtained as shown in Appendix. ## 4 Power Spectrum The evaluation of the power spectrum requires the information about the two-time correlation function of the fluctuation about the steady state. This is usually done by using the quantum regression theorem. In the long time limit (\(t\to\infty\)) when the system attains the steady state we have, \[\left.\frac{d\langle\mathbf{S}_{\mathbb{P}_{1}}^{\mathbf{A}}(\mathbf{t})\rangle }{dt}\right|_{t\to\infty}=0=M^{\Lambda}\langle\mathbf{S}_{\mathbb{P}_{1}}^{ \mathbf{A}}(\infty)\rangle_{s}+\mathbf{B}^{\mathbf{A}}. \tag{17}\] Now introducing the fluctuation of the atomic variable around the steady state, namely, \(\langle\delta\mathbf{S}_{\mathbb{P}_{1}}^{\mathbf{A}}(t)\rangle=\mathbf{S}_{ \mathbb{P}_{1}}^{\mathbf{A}}(t)-\langle\mathbf{S}_{\mathbb{P}_{1}}^{\mathbf{A} }(\infty)\rangle_{s}\equiv\langle\langle\mathbb{P}_{i}(t)\rangle\rangle\), the in-homogeneous term \(\mathbf{B}^{\mathbf{A}}\) can be eliminated, \[\frac{d\langle\langle\mathbb{P}_{i}(t)\rangle\rangle}{dt}=M^{A}\langle\langle \mathbb{P}_{i}(t)\rangle\rangle. \tag{18}\] According to the quantum regression theorem [37], the two-time correlation function \(\mathbf{K}_{\mathbb{P}_{1}\mathbb{P}_{j}}^{\mathbf{A}}(\tau)\) is given by \[\frac{d\mathbf{K}_{\mathbb{P}_{i}\mathbb{P}_{j}}^{\mathbf{A}}(\tau)}{d\tau}=M ^{A}\mathbf{K}_{\mathbb{P}_{i}\mathbb{P}_{j}}^{\mathbf{A}}(\tau), \tag{19}\] where the incoherent part of the correlation function is given by, \[\mathbf{K}_{\mathbb{P}_{1}\mathbb{P}_{j}}^{\Lambda}(\tau)=\lim_{t\to\infty} \langle\delta\mathbf{S}_{\mathbb{P}_{1}}^{\mathbf{A}}(t+\tau)\delta\mathbf{S} _{\mathbb{P}_{j}}^{\mathbf{A}}(\mathbf{t})\rangle\equiv\langle\langle \mathbb{P}_{i}(\tau)\mathbb{P}_{j}(0)\rangle\rangle. \tag{20}\] with the two-time correlation function depends upon the difference of time \(\tau\). 
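Numerically, Equations (17)-(19) reduce to linear algebra on the \(9\times 9\) Bloch matrix: the steady state follows from solving \(M^{A}\mathbf{S}_{s}=-\mathbf{B}^{A}\), and the quantum regression theorem propagates the correlation vector with the same matrix, \(\mathbf{K}(\tau)=e^{M^{A}\tau}\mathbf{K}(0)\). A schematic Python implementation is sketched below; it assumes that the Bloch matrix `M` and the inhomogeneous vector `B` have been assembled from the Appendix and that an initial correlation vector `K0` is built from the steady-state values as described next.

```python
import numpy as np
from scipy.linalg import expm

def steady_state(M, B):
    """Equation (17): 0 = M S_s + B  ->  S_s = -M^{-1} B."""
    return np.linalg.solve(M, -B)

def correlation(M, K0, taus):
    """Quantum regression theorem, Equation (19): K(tau) = expm(M tau) K(0)."""
    return np.stack([expm(M * t) @ K0 for t in taus])

# Usage sketch (M, B, K0 are assumed to be given as complex numpy arrays):
# S_s  = steady_state(M, B)
# taus = np.linspace(0.0, 50.0, 2000)      # in units of 1/Gamma
# K    = correlation(M, K0, taus)          # shape (len(taus), 9)
```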
In the case of the \(\Lambda\) configuration (\(A=\Lambda\)), in order to calculate the power spectrum for the \(\langle\langle V_{+}(\tau)V_{-}(0)\rangle\rangle\) in transition pathway \(3\to 1\), we need to solve the equation, \[\frac{d\mathbf{K}_{V_{+}\mathbb{P}_{j}}^{\Lambda}(\tau)}{d\tau}=M^{\Lambda} \mathbf{K}_{V_{+}\mathbb{P}_{j}}^{\Lambda}(\tau),\] (21a) where the correlation vector \[\mathbf{K}_{V_{+}\mathbb{P}_{j}}^{\Lambda}(\tau)=\big{(}\langle\langle V_{+}( \tau)T_{+}(0)\rangle\rangle,\langle\langle V_{+}(\tau)T_{-}(0)\rangle\rangle, \langle\langle V_{+}(\tau)T_{3}(0)\rangle\rangle, \tag{21b}\] \[\langle\langle V_{+}(\tau)V_{+}(0)\rangle\rangle,\langle\langle V_ {+}(\tau)V_{-}(0)\rangle\rangle,\langle\langle V_{+}(\tau)V_{3}(0)\rangle \rangle\big{)}^{T},\] To evaluate the correlation function, we first obtain the initial conditions using GellMann algebra, \[K_{V_{+}T_{+}}^{A}(0)=0,K_{V_{+}T_{-}}^{A}(0)=0,K_{V_{+}T_{3}}^ {A}(0)=0,K_{V_{+}V_{+}}^{A}(0)=0,\] \[K_{V_{+}V_{-}}^{\Lambda}(0)=\frac{1}{3}(1+\langle\langle T_{3} \rangle\rangle_{s}+\langle\langle V_{3}\rangle\rangle_{s}),\quad K_{V_{+}V_{3 }}^{\Lambda}(0)=-\langle\langle V_{+}\rangle\rangle_{s}, \tag{21c}\] \[K_{V_{+}U_{+}}^{\Lambda}(0)=0,\quad K_{V_{+}U_{-}}^{\Lambda}(0)= \langle\langle T_{+}\rangle\rangle_{s},\quad K_{V_{+}U_{3}}^{\Lambda}(0)=- \langle\langle V_{+}\rangle\rangle_{s}\] where \(\langle\langle\mathbb{P}_{i}\rangle\rangle_{s}\) is obtained from Equation (18) under the steady state condition. Proceeding in a similar way, to find the correlation vector \(\mathbf{K}_{T_{+}\mathbb{P}_{j}}^{\Lambda}(\tau)\) and hence \(\langle\langle T_{+}(\tau)T_{-}(0)\rangle\rangle\), we solve, \[\frac{d\mathbf{K}_{T_{+}\mathbb{P}_{j}}^{\Lambda}(\tau)}{d\tau}=M^{\Lambda} \mathbf{K}_{T_{+}\mathbb{P}_{j}}^{\Lambda}(\tau), \tag{22a}\] where the correlation vector is given by \[\mathbf{K}^{\Lambda}_{T_{+}\mathbb{P}_{j}}(\tau) = \big{(}\langle\langle T_{+}(\tau)T_{+}(0)\rangle,\langle\langle T_{+ }(\tau)T_{-}(0)\rangle\rangle,\langle\langle T_{+}(\tau)T_{3}(0)\rangle\rangle,\] \[\langle\langle T_{+}(\tau)V_{+}(0)\rangle,\langle\langle T_{+}( \tau)V_{-}(0)\rangle\rangle,\langle\langle T_{+}(\tau)V_{3}(0)\rangle\rangle,\] \[\langle\langle T_{+}(\tau)U_{+}(0)\rangle\rangle,\langle\langle T _{+}(\tau)U_{-}(0)\rangle\rangle,\langle\langle T_{+}(\tau)U_{3}(0)\rangle \rangle\big{)}^{T},\] with the initial conditions, \[K^{A}_{T_{+}T_{+}}(0)=0,K^{A}_{T_{+}T_{-}}(0)=\frac{1}{3}(1+ \langle\langle T_{3}\rangle\rangle_{s}+\langle\langle V_{3}\rangle\rangle_{s}),\] \[K^{A}_{T_{+}T_{3}}(0)=-\langle\langle T_{+}\rangle\rangle_{s},K ^{A}_{T_{+}V_{+}}(0)=0,K^{\Lambda}_{T_{+}V_{-}}(0)=0,K^{\Lambda}_{T_{+}V_{3}} (0)=0, \tag{22c}\] \[K^{\Lambda}_{T_{+}U_{+}}(0)=\langle\langle V_{+}\rangle\rangle_{s },\quad K^{\Lambda}_{T_{+}U_{-}}(0)=0,\quad K^{\Lambda}_{T_{+}U_{3}}(0)= \langle\langle T_{+}\rangle\rangle_{s}\] Finally having knowledge about the correlation function, we proceed to evaluate the power spectrum of the three-level configuration given by [27, 38], \[S^{A}_{ij}(\omega)=Re\bigg{\{}\int_{0}^{\infty}d\tau\mathbf{K}^{A}_{P_{i}P_{j }}(\tau)e^{-i(\omega-\Omega_{jk})\tau}\bigg{\}}. \tag{22d}\] where \(\omega\) be the frequency of the fluorescent light emanating from the three-level system corresponding to the transition \(|i\rangle\leftrightarrow|j\rangle\). 
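With \(\mathbf{K}_{V_{+}\mathbb{P}_{j}}^{\Lambda}(\tau)\) in hand, the incoherent spectrum for the \(|3\rangle\rightarrow|1\rangle\) pathway is the real part of a one-sided Fourier transform of the \(\langle\langle V_{+}(\tau)V_{-}(0)\rangle\rangle\) component. Before turning to the Laplace-transform evaluation used below, we note that this transform can also be approximated directly on a finite \(\tau\) grid; the sketch continues the previous snippet, and the grid sizes as well as the position of the \(V_{-}\) component in the correlation vector are our own choices.

```python
import numpy as np

def incoherent_spectrum(K_component, taus, omegas, Omega_carrier=0.0):
    """Re int_0^inf K(tau) exp(-i(omega-Omega) tau) dtau, via a trapezoidal rule."""
    spec = np.empty(len(omegas))
    for n, w in enumerate(omegas):
        integrand = K_component * np.exp(-1j * (w - Omega_carrier) * taus)
        spec[n] = np.trapz(integrand, taus).real
    return spec

# Usage sketch, continuing from the previous snippet:
# idx_Vm = 4                               # assumed position of <<V+(tau)V-(0)>> in K
# omegas = np.linspace(-10.0, 10.0, 1001)  # detuning from Omega_13, in Gamma units
# S13 = incoherent_spectrum(K[:, idx_Vm], taus, omegas)
```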
To compute the incoherent spectrum, we solve the evolution equations of the correlation function \(\mathbf{K}^{A}_{P_{i}P_{j}}(\tau)\) using the Laplace transformation with the initial conditions discussed above. Taking the two detuning offsets at resonance, \(\Delta^{A}_{ij}\approx 0\) and \(\Delta^{A}_{jk}\approx 0\), the incoherent spectrum is given by the algebraic expression [38, 27], \[S^{A}_{ij}(\omega)\approx Re\bigg{\{}\mathbf{K}^{A}_{P_{i}P_{j}}(s)\bigg{|}_{s=-i(\omega-\Omega^{A}_{ij})}^{\Delta_{ij}=0,\Delta_{jk}=0}\bigg{\}}. \tag{22e}\] with the Laplace parameter given by \(s\approx-i(\omega-\Omega^{A}_{ij})\). For example, in the \(\Lambda\) configuration, we have two distinct transition pathways \(|1\rangle\leftrightarrow|3\rangle\) and \(|2\rangle\leftrightarrow|3\rangle\), which correspond to the correlation functions \(\langle\langle V_{+}(\tau)V_{-}(0)\rangle\rangle\) and \(\langle\langle T_{+}(\tau)T_{-}(0)\rangle\rangle\). Then, for the pathway \(|1\rangle\leftrightarrow|3\rangle\), the power spectrum is given by, \[S^{\Lambda}_{13}(\omega)\approx Re\bigg{\{}\mathbf{K}^{\Lambda}_{V_{+}V_{-}}(s)\bigg{|}_{s=-i(\omega-\Omega^{\Lambda}_{13})}^{\Delta_{13}=0,\Delta_{23}=0}\bigg{\}}, \tag{22f}\] and for the transition \(|2\rangle\leftrightarrow|3\rangle\) we obtain, \[S^{\Lambda}_{23}(\omega)\approx Re\bigg{\{}\mathbf{K}^{\Lambda}_{T_{+}T_{-}}(s)\bigg{|}_{s=-i(\omega-\Omega^{\Lambda}_{23})}^{\Delta_{13}=0,\Delta_{23}=0}\bigg{\}}, \tag{22g}\] respectively. The evaluation of the power spectrum for the \(V\) and \(\Xi\) configurations is similar, where the pairs of correlation functions \(\{\langle\langle V_{+}(\tau)V_{-}(0)\rangle\rangle\,,\langle\langle U_{+}(\tau)U_{-}(0)\rangle\rangle\}\) and \(\{\langle\langle U_{+}(\tau)U_{-}(0)\rangle\rangle\,,\langle\langle T_{+}(\tau)T_{-}(0)\rangle\rangle\}\) are relevant. ## 5 Results and Discussions We now turn our attention to the resonance fluorescence profiles of the different configurations. To get the desired power spectrum of the \(\Lambda\) configuration for the two distinct transition pathways, we have to solve Equations (21a) and (22a), subject to the steady state conditions given by Equations (21c) and (22c). It is convenient to work with the dimensionless parameters \(\tilde{\Delta}_{ij}=\Delta_{ij}/\Gamma\), \(\tilde{\Gamma}_{ij}=\Gamma_{ij}/\Gamma\), \(\tilde{g}_{ij}=g_{ij}/\Gamma\), where the scaling factor \(\Gamma\) is different for different configurations (see Figs. 2-7). As mentioned earlier, we solve for the correlation function using the Laplace transformation method while taking the resonant detunings to be \(\Delta_{13}^{\Lambda}\approx 0\) and \(\Delta_{23}^{\Lambda}\approx 0\), and then transform back to the frequency domain with the Laplace variables \(s=-i(\omega-\Omega_{13})\) or \(s=-i(\omega-\Omega_{23})\)[27, 38]. The power spectra of the \(\Lambda\) configuration are plotted in Figs. 2 and 3 for various combinations of input parameters. It is noteworthy that in the strong coupling regime, with \(g_{23}>1\) and \(g_{13}>1\), we observe a distinctive quintuplet spectrum, as depicted in Fig.2(a,b) and Fig.3(a,b) for the transition pathways \(|3\rangle\rightarrow|1\rangle\) and \(|3\rangle\rightarrow|2\rangle\), respectively. In contrast, when the laser coupling in one transition is weak and that in the other transition is strong, we observe a Mollow-like triplet in the RF of the latter transition, as shown in Figs. 2(d) and 3(c), while a doublet appears in the RF profiles depicted in Figs. 2(c) and 3(d).
The appearance of a Mollow-like triplet structure clearly indicates that the system effectively reduces to a strongly driven two-level atom when the laser coupling to the other transition pathway, for which the RF is not observed, goes to zero. The emergence of a doublet structure is the RF counterpart of the familiar Autler-Townes splitting. A strong laser coupling in the \(|3\rangle\rightarrow|2\rangle\) (\(|3\rangle\rightarrow|1\rangle\)) transition creates two excited dressed states, the signature of which is then manifested as a doublet in the RF of the weakly coupled \(|3\rangle\rightarrow|1\rangle\) (\(|3\rangle\rightarrow|2\rangle\)) transition. In a similar way, we can proceed to discuss the spectral features of the \(V\)-type configuration shown in Figs. 4 and 5, which exhibit distinct shapes of the fluorescent spectra. The contrast between these spectral shapes and those of the \(\Lambda\) configuration is clearly evident. The RF profiles of the \(\Xi\)-type configuration are displayed in Figs. 6 and 7. Once again, with strong couplings for both transition pathways we observe the quintuplet resonance profile. However, in the regime of weak coupling for one transition and strong coupling for the other, we observe a Mollow-like triplet or a doublet profile only for the transition \(|2\rangle\leftrightarrow|1\rangle\). In contrast, for the \(|3\rangle\leftrightarrow|2\rangle\) transition, we witness five-peak spectra with different shapes. At this point, it is important to emphasize that the magnitudes of some specific profiles in the \(V\) and \(\Xi\) configurations are quite small compared to the other profiles, as revealed in Fig. 4(a,c) and 5(b,d) for the \(V\) configuration and Fig. 6(c) and 7(c,d) for the \(\Xi\) configuration. In essence, the unique spectral signatures of the resonance fluorescence serve as another key identifying feature to differentiate each three-level configuration from the other two. Among the various profiles, the appearance of the quintuplet resonance profile for the different types of three-level configurations is in agreement with the results of previous theoretical investigations [21, 22, 23, 27]; however, the emergence of the Mollow-like triplet and the Autler-Townes-like doublet in different parameter regimes is noteworthy. Finally, for completeness, we discuss the dressed states of the three-level configuration, which are an indispensable part of understanding the origin of the quintuplet fluorescence spectrum and other related phenomena. It is well known that in the presence of a strong laser field, the dressed state \(|D(i,m,n)\rangle\) is a coherent superposition of three atom-field bare states, namely, \[\{|-,m,n\rangle\,,\quad|0,m-1,n+1\rangle\,,\quad|+,m-1,n\rangle\}, \tag{26}\] \begin{table} \begin{tabular}{l l l} \hline \hline Sl. No.
& Transition in dressed states & Order of Peak \\ \hline Transition I & \(|D(9,m-1,n)\rangle\Longrightarrow|D(1,m,n)\rangle\) & 2nd oder \\ \hline Transition II & \(|D(7,m-1,n)\rangle\Longrightarrow|D(2,m,n)\rangle\) & 1st order \\ & \(|D(8,m,n)\rangle\Longrightarrow|D(3,m,n)\rangle\) & \\ \hline Transition III & \(|D(7,m-1,n)\rangle\Longrightarrow|D(1,m,n)\rangle\) & (Central Peak) \\ & \(|D(8,m-1,n)\rangle\Longrightarrow|D(2,m,n)\rangle\) & Zero-th order \\ & \(|D(9,m-1,n)\rangle\Longrightarrow|D(3,m,n)\rangle\) & \\ \hline Transition IV & \(|D(9,m-1,n)\rangle\Longrightarrow|D(2,m,n)\rangle\) & 1st order \\ & \(|D(8,m-1,n)\rangle\Longrightarrow|D(1,m,n)\rangle\) & \\ \hline Transition V & \(|D(7,m-1,n)\rangle\Longrightarrow|D(3,m,n)\rangle\) & 2nd order \\ \hline \hline \end{tabular} \end{table} Table 1: Allowed transitions in pathway \(|3\rangle\rightarrow|1\rangle\) Figure 8: Three energy manifolds of the dressed states of the \(\Lambda\) configuration with all allowed transitions. For transition pathway \(|3\rangle\rightarrow|1\rangle\), (i.e., from energy manifold \(\mathcal{E}(m-1,n)\) to \(\mathcal{E}(m,n)\)), the nonvanishing transition amplitude is proportional to the coupling parameter \(g_{13}\), while for the route \(|2\rangle\rightarrow|1\rangle\), (i.e., from the energy manifold \(\mathcal{E}(m-1,n)\) to \(\mathcal{E}(m-1,n+1)\)), the amplitude is proportional to \(g_{23}\). where \(\left\{\left|-\right\rangle,\left|0\right\rangle,\left|+\right\rangle\right\}\) represent the atomic states with \(m\), \(n\) be the photon numbers of the bi-chromatic fields. The first triplet of the dressed states for the \(\Lambda\) configuration, which constitutes the energy manifold \(\mathcal{E}(m,n)\), is given by [35], \[\begin{bmatrix}\left|D(1,m,n)\right\rangle\\ \left|D(2,m,n)\right\rangle\\ \left|D(3,m,n)\right\rangle\end{bmatrix}=\mathcal{R}_{m,n}\begin{bmatrix} \left|+,m-1,n\right\rangle\\ \left|0,m-1,n+1\right\rangle\\ \left|-,m,n\right\rangle\end{bmatrix}. \tag{27}\] where \(\mathcal{R}_{m,n}(\theta,\phi,\psi)\) be the Euler matrix with mixing angles \(\left\{\theta,\phi,\psi\right\}\) which mixes the bare states. This triplet provides the basis of constructing the tower of remaining dressed states with judicious choice of the photon numbers of the bare states. Fig.8 shows the first three manifolds which precisely preserves the topology of the \(\Lambda\) configuration having two well-defined transition pathways, namely, \(\left|3\right\rangle\rightarrow\left|1\right\rangle\) (solid lines) and \(\left|3\right\rangle\rightarrow\left|2\right\rangle\) (dotted lines). The dipole transition amplitudes between \(j\) to \(i\)-th dressed states of either of these pathways are given by the dipole matrix element, \[d_{ij}=\left\langle D(j,m,n)\right|\hat{\mathcal{O}}\left|D(i,m,n)\right\rangle, \tag{28}\] where \(\hat{\mathcal{O}}\) be the interaction operator described by the Hamiltonian (3a) proportional to the dipole coupling strength \(g_{13}\) or \(g_{23}\) depending upon the route. The exact evaluation of the transition amplitudes requires the knowledge of the Euler angles for different configurations which is beyond the scope of the paper. To understand the quintuplet structure of the \(\Lambda\) configuration of different order, in Table-I we have displayed all possible spontaneous transitions from the dressed states of manifold \(\mathcal{E}(m-1,n)\) to those in \(\mathcal{E}(m,n)\) at zero detuning, namely \(\tilde{\Delta}_{23}=0\) and \(\tilde{\Delta}_{13}=0\). 
It is worth noting here that the zeroth-order central peak corresponds to three transitions, while the pairs of first- and second-order sidebands, with diminishing intensity, correspond to two and one transitions, respectively. Similarly, the spontaneous transitions from the energy manifold \(\mathcal{E}(m-1,n)\) to \(\mathcal{E}(m-1,n+1)\) have the same structure (table not shown). The treatment for the \(V\) and \(\Xi\) configurations of the three-level system is similar. ## 6 Conclusion The quantum interference in the \(\Lambda\), \(V\) and \(\Xi\) - type three-level configurations in the presence of an intense bi-chromatic resonant laser field exhibits several non-trivial features of this strong-coupling phenomenon. In this paper we have developed a comprehensive method to study the resonance fluorescence spectrum of the different three-level configurations. In contrast to other available methods [21, 23, 27], our approach is rooted in the SU(3)-based group theoretical technique, which offers a succinct derivation of the optical Bloch equation in a concise format. When combined with the quantum regression theorem, this derivation enables the calculation of the fluorescence profiles of all configurations across different parameter regimes. Finally, to explain the origin of the quintuplet spectra, we have touched upon the phenomenological description of the dressed states of the \(\Lambda\) system. The emergence of heralded photons stemming from the transitions between the dressed states of the various three-level configurations is another important aspect which requires further exploration of the structural properties of the dressed states. In view of the significant developments in atom-field interaction, further study of the dressed states of the three-level system may unravel uncharted phenomena of quantum information processing.
## Appendix A Bloch matrices of Lambda, Vee and Cascade configurations In Equation (16), the Bloch matrix \(M^{\Lambda}\) of the \(\Lambda\) configuration is found to be, \(M^{\Lambda}=\) \[\begin{pmatrix}D_{11}^{\Lambda}&0&-\frac{2ig_{23}}{3}&0&0&-\frac{ig_{23}}{3}&0& ig_{13}&\frac{ig_{23}}{3}\\ 0&D_{22}^{\Lambda}&\frac{2ig_{23}}{3}&0&0&\frac{ig_{23}}{3}&-ig_{13}&0&-\frac{ ig_{23}}{3}\\ -2ig_{23}&2ig_{23}&D_{33}^{\Lambda}&-ig_{13}&ig_{13}&-\frac{\Gamma_{31}}{3}&0&0& \frac{1}{3}\left(2\Gamma_{32}-\Gamma_{31}\right)\\ 0&0&-\frac{ig_{13}}{3}&D_{44}^{\Lambda}&0&-\frac{2ig_{13}}{3}&ig_{23}&0&-\frac{ ig_{13}}{3}\\ 0&0&\frac{ig_{13}}{3}&0&D_{55}^{\Lambda}&\frac{2ig_{13}}{3}&0&-ig_{23}&\frac{ ig_{13}}{3}\\ -ig_{23}&ig_{23}&-\frac{\Gamma_{32}}{3}&-2ig_{13}&2ig_{13}&D_{66}^{\Lambda}&0& 0&\frac{1}{3}\left(\Gamma_{32}-2\Gamma_{31}\right)\\ 0&-ig_{13}&0&ig_{23}&0&0&D_{77}^{\Lambda}&0&0\\ ig_{13}&0&0&0&-ig_{23}&0&0&D_{88}^{\Lambda}&0\\ ig_{23}&-ig_{23}&\frac{\Gamma_{32}}{3}&-ig_{13}&ig_{13}&-\frac{\Gamma_{31}}{3} &0&0&D_{99}^{\Lambda}\end{pmatrix} \tag{17}\] where the diagonal terms are given by, \[D_{11}^{\Lambda}= \ -\frac{\Gamma_{32}}{2}+i\Delta_{23},\quad D_{22}^{\Lambda}=- \frac{\Gamma_{32}}{2}-i\Delta_{23},\quad D_{33}^{\Lambda}=-\frac{2\Gamma_{32} }{3},\] \[D_{44}^{\Lambda}= \ -\frac{\Gamma_{31}}{2}+i\Delta_{13},\quad D_{55}^{\Lambda}=- \frac{\Gamma_{31}}{2}-i\Delta_{13},\quad D_{66}^{\Lambda}=-\frac{2\Gamma_{31} }{3}\] \[D_{77}^{\Lambda}= \ \frac{1}{2}\left(-\Gamma_{31}-\Gamma_{32}+2i\left(\Delta_{13}- \Delta_{23}\right)\right), \tag{18}\] \[D_{88}^{\Lambda}= \ \frac{1}{2}\left(-\Gamma_{31}-\Gamma_{32}-2i\left(\Delta_{13}- \Delta_{23}\right)\right),\quad D_{99}^{\Lambda}=\frac{1}{3}\left(-\Gamma_{31} -\Gamma_{32}\right)\] with the inhomogeneous term, \[\mathbf{B}^{\Lambda}=\big{[}0,0,\frac{1}{3}(\Gamma_{31}+2\Gamma_{32}),0,0, \frac{1}{3}(2\Gamma_{31}+\Gamma_{32}),0,0,\frac{1}{3}(\Gamma_{31}-\Gamma_{32} )\big{]}^{T}. 
\tag{19}\] For the \(V\) configuration, the components of the density matrix in terms of Bloch vectors are given by, \[\rho_{33}^{V}=\frac{1}{3}(1+S_{T_{3}}^{V}+S_{V_{3}}^{V}),\quad \rho_{32}^{V}=S_{T_{-}}^{V},\quad\rho_{31}^{V}=S_{V_{-}}^{V},\] \[\rho_{23}^{V}=S_{T_{+}}^{V},\quad\rho_{22}^{V}=\frac{1}{3}(1-S_{T _{3}}^{V}+S_{U_{3}}^{V}),\quad\rho_{21}=S_{U_{-}}^{V}, \tag{20}\] \[\rho_{13}^{V}=S_{V_{+}}^{V},\quad\rho_{12}^{V}=S_{U_{+}}^{V},\quad \rho_{11}^{V}=\frac{1}{3}(1-S_{U_{3}}^{V}-S_{V_{3}}^{V}).\] Substituting Equation (20) into (13_b_), the OBE of the \(V\) configuration is given by, \[\frac{d\mathbf{S}_{\mathbb{P}_{1}}^{\mathbf{V}}(t)}{dt}=M^{V}\mathbf{S}_{ \mathbb{P}_{1}}^{\mathbf{V}}+\mathbf{B}^{\mathbf{V}}, \tag{21}\] where the Bloch matrix \(M^{V}\) is given by, \[M^{V}=\] \[\left(\begin{array}{ccccccccc}D^{V}_{11}&0&0&-ig_{12}&0&0&0&ig_{13}&0\\ 0&D^{V}_{22}&0&0&ig_{12}&0&-ig_{13}&0&0\\ 0&0&D^{V}_{33}&-ig_{13}&ig_{13}&-\frac{\Gamma_{31}}{3}&ig_{12}&-ig_{12}&\frac{ \Gamma_{21}}{3}\\ -ig_{12}&0&-\frac{ig_{13}}{3}&D^{V}_{44}&0&-\frac{2ig_{13}}{3}&0&0&-\frac{ig_{1 3}}{3}\\ 0&ig_{12}&\frac{ig_{13}}{3}&0&D^{V}_{55}&\frac{2ig_{13}}{3}&0&0&\frac{ig_{13}}{ 3}\\ 0&0&\frac{1}{3}\left(\Gamma_{21}-2\Gamma_{31}\right)&-2ig_{13}&2ig_{13}&D^{V} _{66}&-ig_{12}&ig_{12}&-\frac{\Gamma_{21}}{3}\\ 0&-ig_{13}&\frac{ig_{12}}{3}&0&0&-\frac{ig_{12}}{3}&D^{V}_{77}&0&-\frac{2ig_{1 2}}{3}\\ ig_{13}&0&-\frac{ig_{12}}{3}&0&0&\frac{ig_{12}}{3}&0&D^{V}_{88}&\frac{2ig_{12 }}{3}\\ 0&0&\frac{1}{3}\left(2\Gamma_{21}-\Gamma_{31}\right)&-ig_{13}&ig_{13}&-\frac{ \Gamma_{31}}{3}&-2ig_{12}&2ig_{12}&D^{V}_{99}\end{array}\right) \tag{13}\] where the diagonal terms are given by \[D^{V}_{11} = \frac{1}{2}\left(-\Gamma_{21}-\Gamma_{31}-2i\left(\Delta_{12}- \Delta_{13}\right)\right),\] \[D^{V}_{22} = \frac{1}{2}\left(-\Gamma_{21}-\Gamma_{31}+2i\left(\Delta_{12}- \Delta_{13}\right)\right),\quad D_{33}=\frac{1}{3}\left(-\Gamma_{21}-\Gamma_{ 31}\right),\] \[D^{V}_{44} = -\frac{\Gamma_{31}}{2}+i\Delta_{13},\quad D^{V}_{55}=-\frac{ \Gamma_{31}}{2}-i\Delta_{13},\quad D^{V}_{66}=-\frac{2\Gamma_{31}}{3}, \tag{14}\] \[D^{V}_{77} = -\frac{\Gamma_{21}}{2}+i\Delta_{12},\quad D^{V}_{88}=-\frac{ \Gamma_{21}}{2}-i\Delta_{12},\quad D^{V}_{99}=-\frac{2\Gamma_{21}}{3}\] with the corresponding inhomogeneous term, \[B^{V}=\big{[}0,0,\frac{1}{3}(\Gamma_{21}-\Gamma_{31}),0,0,-\frac{1}{3}(\Gamma_ {21}+2\Gamma_{31}),0,0,-\frac{1}{3}(2\Gamma_{21}+\Gamma_{31})\big{]}^{T}, \tag{15}\] Finally for the \(\Xi\) configuration, the components of the density matrix in terms of Bloch vectors are given by, \[\rho^{\Xi}_{33}=\frac{1}{3}(1+S^{\Xi}_{T_{3}}+S^{\Xi}_{V_{3}}), \quad\rho^{\Xi}_{32}=S^{\Xi}_{T_{-}},\quad\rho^{\Xi}_{31}=S^{\Xi}_{V_{-}},\] \[\rho^{\Xi}_{23}=S^{\Xi}_{T_{+}},\quad\rho^{\Xi}_{22}=\frac{1}{3}( 1-S^{\Xi}_{T_{3}}+S^{\Xi}_{U_{3}}),\quad\rho^{\Xi}_{21}=S^{\Xi}_{U_{-}}, \tag{16}\] \[\rho^{\Xi}_{13}=S^{\Xi}_{V_{+}},\quad\rho^{\Xi}_{12}=S^{\Xi}_{U_ {+}},\quad\rho^{\Xi}_{11}=\frac{1}{3}(1-S^{\Xi}_{U_{3}}-S^{\Xi}_{V_{3}}).\] Substituting (16) into (13_c_) the OBE of the \(\Xi\) configuration is given by, \[\frac{d\mathbf{S}^{\Xi}_{\mathbb{P}_{1}}(t)}{dt}=M^{\Xi}\mathbf{S}^{\Xi}_{ \mathbb{P}_{1}}+\mathbf{B}^{\Xi}, \tag{17}\] with the Bloch matrix given by, \(M^{\Xi}=\) \[\begin{pmatrix}D_{11}^{\Xi}&0&-\frac{2ig_{23}}{3}&-ig_{12}&0&-\frac{ig_{23}}{3}&0 &0&\frac{ig_{23}}{3}\\ 0&D_{22}^{\Xi}&\frac{2ig_{23}}{3}&0&ig_{12}&\frac{ig_{23}}{3}&0&0&-\frac{ig_{23}} {3}\\ -2ig_{23}&2ig_{23}&D_{33}^{\Xi}&0&0&-\frac{2\Gamma_{32}}{3}&ig_{12}&-ig_{12}& 
\frac{\Gamma_{21}}{3}\\ -ig_{12}&0&0&D_{44}^{\Xi}&0&0&ig_{23}&0&0\\ 0&ig_{12}&0&0&D_{55}^{\Xi}&0&0&-ig_{23}&0\\ -ig_{23}&ig_{23}&\frac{1}{3}\left(\Gamma_{21}-\Gamma_{32}\right)&0&0&D_{66}^{ \Xi}&-ig_{12}&ig_{12}&-\frac{\Gamma_{21}}{3}\\ 0&0&\frac{ig_{12}}{3}&ig_{23}&0&-\frac{ig_{12}}{3}&D_{77}^{\Xi}&0&-\frac{2ig _{12}}{3}\\ 0&0&-\frac{ig_{12}}{3}&0&-ig_{23}&\frac{ig_{12}}{3}&0&D_{88}^{\Xi}&\frac{2ig _{12}}{3}\\ ig_{23}&-ig_{23}&\frac{1}{3}\left(2\Gamma_{21}+\Gamma_{32}\right)&0&0&\frac{ \Gamma_{32}}{3}&-2ig_{12}&2ig_{12}&D_{99}^{\Xi}\end{pmatrix} \tag{14}\] where the diagonal terms are given by \[D_{11}^{\Xi} = \frac{1}{2}\left(-\Gamma_{21}-\Gamma_{32}+2i\Delta_{23}\right), \quad D_{22}^{\Xi}=\frac{1}{2}\left(-\Gamma_{21}-\Gamma_{32}-2i\Delta_{23} \right),\] \[D_{33}^{\Xi} = \frac{1}{3}\left(-\Gamma_{21}-2\Gamma_{32}\right),\quad D_{44}^{ \Xi}=-\frac{\Gamma_{32}}{2}+i\left(\Delta_{12}+\Delta_{23}\right),\] \[D_{55}^{\Xi} = -\frac{\Gamma_{32}}{2}-i\left(\Delta_{12}+\Delta_{23}\right), \quad D_{66}^{\Xi}=-\frac{\Gamma_{32}}{3}, \tag{15}\] \[D_{77}^{\Xi} = -\frac{\Gamma 21}{2}+i\Delta_{12},\quad D_{88}^{\Xi}=-\frac{ \Gamma_{21}}{2}-i\Delta_{12},\quad D_{99}^{\Xi}=-\frac{2\Gamma_{21}}{3}\] with corresponding inhomogeneous term, \[\mathbf{B}^{\Xi}=\big{[}0,0,\frac{1}{3}(\Gamma_{21}-2\Gamma_{32}),0,0,-\frac{1 }{3}(\Gamma_{21}+\Gamma_{32}),0,0,\frac{1}{3}(\Gamma_{32}-2\Gamma_{21})\big{]} ^{T}. \tag{16}\]
2305.05621
Deep Learning-based Estimation for Multitarget Radar Detection
Target detection and recognition is a very challenging task in a wireless environment where a multitude of objects are located, whether to effectively determine their positions or to identify them and predict their moves. In this work, we propose a new method based on a convolutional neural network (CNN) to estimate the range and velocity of moving targets directly from the range-Doppler map of the detected signals. We compare the obtained results to the two dimensional (2D) periodogram, and to the similar state of the art methods, 2DResFreq and VGG-19 network and show that the estimation process performed with our model provides better estimation accuracy of range and velocity index in different signal to noise ratio (SNR) regimes along with a reduced prediction time. Afterwards, we assess the performance of our proposed algorithm using the peak signal to noise ratio (PSNR) which is a relevant metric to analyse the quality of an output image obtained from compression or noise reduction. Compared to the 2D-periodogram, 2DResFreq and VGG-19, we gain 33 dB, 21 dB and 10 dB, respectively, in terms of PSNR when SNR = 30 dB.
Mamady Delamou, Ahmad Bazzi, Marwa Chafii, El Mehdi Amhoud
2023-05-05T16:22:17Z
http://arxiv.org/abs/2305.05621v1
# Deep Learning-based Estimation for Multitarget Radar Detection ###### Abstract Target detection and recognition is a very challenging task in a wireless environment where a multitude of objects are located, whether to effectively determine their positions or to identify them and predict their moves. In this work, we propose a new method based on a convolutional neural network (CNN) to estimate the range and velocity of moving targets directly from the range-Doppler map of the detected signals. We compare the obtained results to the two dimensional (2D) periodogram, and to the similar state of the art methods, 2DResFreq and VGG-19 network and show that the estimation process performed with our model provides better estimation accuracy of range and velocity index in different signal to noise ratio (SNR) regimes along with a reduced prediction time. Afterwards, we assess the performance of our proposed algorithm using the peak signal to noise ratio (PSNR) which is a relevant metric to analyse the quality of an output image obtained from compression or noise reduction. Compared to the 2D-periodogram, 2DResFreq and VGG-19, we gain 33 dB, 21 dB and 10 dB, respectively, in terms of PSNR when SNR = 30 dB. Convolutional neural network, joint communication and sensing, monostatic radar ## I Introduction Network densification is one of the prominent building blocks for future wireless communication systems. In addition to high cell density, dense networks will integrate a large amount of connected drones and autonomous vehicles. Hence, setting up an effective object detection system becomes an important challenge for the wireless infrastructure. Object detection is a topic that is addressed from different angles as it becomes the focus of future technologies. It is investigated in joint communication and radar systems (JCRS) [1, 2, 3, 4] to improve radar detection in a merged communication and detection system. It is important to note that most parameter detection problems are complex because of their non-linearity [1]. Therefore, much effort is put into either proposing new algorithms or refining existing solutions, such as removing or reducing side lobes around the solutions, or even improving the robustness of the algorithms in low signal to noise ratio (SNR) regions. Among all target detection algorithms, the periodogram technique which is based on the discrete Fourier transform (DFT) was widely investigated [5]. Moreover, compressive sensing takes advantage of the sparsity property in some signals to reduce the number of samples needed for estimation [6, 7]. However, it is subject to degradation of image resolution. Furthermore, the matrix pencil [8] is an algebraic method that solves the parameter estimation problem by approximating a function by a sum of complex exponentials. In addition, the multiple signal classification (MUSIC) and the estimation of signal parameters by rotational invariance (ESPRIT) methods are two well-known parametric estimators [9]. They can reach super-resolutions, however, they require very large number of samples [10, 11]. Moreover, it is known that one of the main limitations of the two-dimensional (2D) matrix pencil and ESPRIT is the matching of the frequency pairs. Matrix pencil matching could be done as proposed in Step 3 of Subalgorithm 1 in [12]. Finally, MUSIC and ESPRIT have a high computational complexity, and as the SNR decreases, their performance degrades rapidly [13]. 
Furthermore, deep learning (DL)-based algorithms for solving complex problems in many areas, including radar signal processing, have gained considerable attention in recent years [14]. This is due to the excellent ability of DL models to extract miscellaneous features that classical techniques fail to capture. Recently, Pingping et al. put forward 2DResFreq in [13], which is based on DL and aims at extracting several sinusoids from a 2D signal. This is an extension of the work proposed in [15] from a one-dimensional (1D) signal to a 2D one. The work initiated in [16] is a convolutional neural network model called VGG-19, which can achieve very good accuracy on the ImageNet dataset. In [17], the proposed technique comprises a convolutional neural network (CNN) for target detection with a typical pulsed Doppler radar. The neural network generates range-Doppler data for only a single target with an isotropic antenna, which is not practical in a dense network. In addition, in [18], the authors came up with a machine learning model for processing echoed signals to determine whether a valid target exists. The model performs target detection based on random decision forests and recurrent neural networks, but does not take into account the range and velocity of those targets. In this work, we propose a new DL framework for learning the radar range-Doppler map. The latter is a 2D representation of the time delay and Doppler shift couple, which is translated into range-velocity information. Compared to the work presented in [13], instead of estimating the 2D frequencies, we learn the range-Doppler map, which mainly converts the problem into an image processing task. Therefore, the model matches each estimated channel to its corresponding range-Doppler map. Although many image classification and recognition tasks have benefited from CNNs, recent evidence reveals that network depth is of crucial importance [16]. The question to be answered is whether learning better networks is as easy as stacking more layers. As mentioned in [19], with the increase of network depth, accuracy gets saturated and then degrades rapidly. Such degradation is not caused by overfitting, and adding more layers to a suitable DL model leads to higher training error. As reported in [20] and thoroughly verified in [19], the degradation can be mitigated by introducing a deep residual learning framework with shortcut connections. With this in mind, our contributions are summarized as follows: (i) introducing a CNN tailored for radar range-Doppler estimation, which is then trained on synthetic data, and (ii) testing the CNN on newly generated signals and comparing it with other state-of-the-art methods, i.e., 2DResFreq and VGG-19. We report an improved range and velocity root mean square error (RMSE), a high noise reduction, along with a high peak signal to noise ratio (PSNR) and a lower prediction time. The remainder of the paper is organized as follows: In Section II, we introduce the system model and formulate the radar detection problem. In Section III, we detail the structure of our proposed CNN model. In Section IV, we present simulation results. Finally, in Section V, we conclude and set forth our perspectives. ## II System model and problem statement We consider a wireless communication system consisting of a communication transmitter co-located with a monostatic radar. The transmitted signal from the communication antenna is perfectly known to the radar, and is reflected by targets characterized by their range and velocity, as illustrated in Fig. 1.
We assume that the interference between the reflected signal (radar signal) and the communication signal is perfectly managed. The transmitter and receiver exchange orthogonal frequency division multiplexing (OFDM) frames. The total bandwidth \(B\) is divided into \(N\) small bands with central frequencies \(f_{0}\),\(f_{1}\)...\(f_{N-1}\) such that \(\Delta f=\frac{B}{N}\). The OFDM symbol duration \(T\) is given by \(T=\frac{1}{\Delta f}\), where \(\Delta f\) is the OFDM frequency spacing. We consider that an OFDM frame composed of \(M\) OFDM symbols, with \(\mathbf{S}\) representing the transmitted quadrature amplitude modulation (QAM) symbols matrix, \(\mathbf{X}\) is the OFDM frame and \(\mathbf{H}\) the channel matrix. The matrices \(\mathbf{S}\), and \(\mathbf{X}\) have the same dimension, i.e., \(N\) rows and \(M\) columns. Since a cyclic prefix (CP) is added between consecutive symbols within the frame, the number of QAM symbols in each OFDM symbol becomes \(N_{s}=N+N_{cp}\), where \(N_{cp}\) is the number of QAM symbols transmitted in CP duration. The total OFDM symbol transmission time \(T_{s}\) becomes \(T_{s}=T+T_{cp}\), with \(T_{cp}\) the CP duration. The conversion from digital to analog is accomplished within a dedicated digital-to-analogue converter (DAC) and the signal is up-converted using the carrier frequency \(f_{c}\). At the radar, the CP is removed, and then fast Fourier transform (FFT) is performed on the OFDM bandpass signals. Finally, after the spectral division, targets detection algorithm is applied. The received signal can be written as \(\mathbf{Y}=\mathbf{S}\cdot\mathbf{H}+\mathbf{Z}\), where \(\mathbf{Z}\) is the noise matrix. By considering the baseband signal \(x(t)\), the transmitted passband signal is \(x_{pb}(t)=x(t)e^{j2\pi f_{c}t}\). For a target \(p\) at distance \(d_{p}\) from the transmitter, and moving at velocity \(v_{p}\), the received passband signal at the radar is impacted by the following effects [5]: * The attenuation factor \(b_{p}\) which depends on the distance \(d_{p}\), the radar cross section (RCS) \(\sigma_{RCS}\), the carrier frequency \(f_{c}\) and the speed of light \(c\). By using Friis equation of transmission, we have \(b_{p}=\sqrt{\frac{c\sigma_{RCS}}{(4\pi)^{d_{p}}d_{p}^{2}f_{c}^{2}}}\). * The signal delay \(\tau_{p}\) caused by the round-trip, \(\tau_{p}=\frac{d_{p}}{c}\). * The Doppler-shift \(f_{D_{p}}\) caused by the velocity \(v_{p}\) of the target, \(f_{D_{p}}=\frac{2v_{p}}{c}f_{c}\). * The random rotation phase \(\varphi\) introduced when the signal hits the target. * The additive white Gaussian noise (AWGN) \(z(t)\) such that \(z(t)\sim\mathscr{N}(\mu,\,\sigma^{2})\). By denoting \(N_{t}\), the total number of moving targets. The estimated channel \(\hat{\mathbf{H}}\) has entries given by [5, 21, 22]: \[\begin{split}\hat{h}_{k,l}=\frac{y_{k,l}}{s_{k,l}}=\sum_{p=0}^{N _{t}-1}b_{p}e^{j2\pi\frac{lN_{s}f_{D_{p}}}{N\Delta f_{p}}}e^{-j2\pi k\Delta f \tau_{p}}e^{j\Phi}+\tilde{z}_{k,l},\\ \text{with }0\leqslant k\leqslant N-1,0\leqslant l \leqslant M-1,\end{split} \tag{1}\] where \(s_{k,l}\), \(y_{k,l}\), and \(\hat{h}_{k,l}\) are the (\(k\),\(l\))th entry of \(\boldsymbol{S}\), \(\boldsymbol{Y}\), \(\hat{\mathbf{H}}\), respectively. \(\Phi\), \(\tilde{z}_{k,l}\) represent the phase added after reflection and the (\(k\),\(l\))th entry of the noise matrix \(\tilde{\mathbf{Z}}\), obtained after the spectrum division, respectively. 
Let us write \(f_{p,1}\) and \(f_{p,2}\) as \[f_{p,1}=\Delta f\times\tau_{p},\text{ and }f_{p,2}=T_{s}\times f_{D_{p}}, \tag{2}\] with \(f_{D_{p}}=\frac{2v_{p}}{c}f_{c}\), \(\tau_{p}=\frac{2d_{p}}{c}\), and \(T_{s}=\frac{N_{s}}{N\Delta f}\).

Fig. 1: A mono-static co-located integrated radar and communication system, where the signal used for communication is then re-used for radar processing.

From (2), (1) can be written as \[\hat{h}_{k,l}=\sum_{p=0}^{N_{t}-1}b_{p}\exp\left(-j2\pi f_{p,1}k\right)\exp\left(j2\pi f_{p,2}l\right)e^{j\Phi}+\tilde{z}_{k,l}, \tag{3}\] \[\text{with }0\leqslant k\leqslant N-1,0\leqslant l\leqslant M-1.\] The target detection problem consists of estimating \(f_{D_{p}}\), \(\tau_{p}\) and \(b_{p}\), from which we retrieve the range, velocity and reflectance of the targets. Hence, we turn the problem into estimating \(f_{p,1}\), \(f_{p,2}\) and \(b_{p}\). This is equivalent to estimating the pairs of indices (\(\hat{k}_{p}\), \(\hat{l}_{p}\)) corresponding to the indices of (\(f_{p,1}\), \(f_{p,2}\)) in \(\hat{\mathbf{H}}\). \(\hat{k}_{p}\) and \(\hat{l}_{p}\) are referred to as the range and velocity indices, respectively. Once we get the list of indices (\(\hat{k}_{p}\), \(\hat{l}_{p}\)), the corresponding (\(f_{p,1}\), \(f_{p,2}\)) are deduced. Finally, ranges and velocities can be retrieved from (2).

## III Deep learning-based multitarget detection

### _DL architecture_

In this section, we present our DL model used to estimate the targets' range and velocity indices. The optimization problem introduced consists of estimating the indices of the dominant frequencies contained in \(\hat{\mathbf{H}}\), which can be found in the radar range-Doppler map. Some numerical approaches can be introduced to solve it. Instead, we introduce DL because of its ability to extract miscellaneous features that are hidden from classical techniques. The DL model first reduces the noise by filtering the noisy signal, and then learns the main features contained in the data, i.e., the peaks in this case. The input layer takes \(\left(\mathbf{I},\mathbf{Q}\right)\), where \(\hat{\mathbf{H}}=\mathbf{I}+j\mathbf{Q}\). The overall network is composed of convolution layers, batch normalization layers, rectified linear units (ReLU), dropout layers, and a dense layer. As shown in Fig. 2, the DL network is based on deep residual learning. In fact, as mentioned in [19], as the depth of a network increases, the accuracy reaches saturation and, at this point, adding more layers increases the training error. This behavior can be mitigated using a residual network with shortcut connections [20]. Based on this alternative, we achieve a deep learning model without saturation. However, adding a very large number of layers increases the complexity. Hence, to avoid heavy models, we are also limited by the training and prediction time. The CNN we propose is composed of a matched filter whose output maximizes the ratio of output peak power to mean noise power. Moreover, to achieve good radar resolution, we must obtain better frequency resolution, which refers to the minimum frequency difference below which two frequencies cannot be distinguished. The residual shortcut connections are used throughout the frequency resolution module to achieve a much deeper convergent model, which improves the frequency resolution and consequently improves the radar resolution (Fig. 2).
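To illustrate the residual shortcut connections mentioned above, the following minimal PyTorch-style sketch shows one convolutional residual block; the channel width and the dummy input are illustrative assumptions and do not reproduce the exact architecture of Fig. 2.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU block with an identity shortcut (illustrative width)."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut connection mitigates the degradation problem

# An initial convolution (not shown) would map the 2-channel (I, Q) input to
# `channels` feature maps of size N x M; a random tensor stands in for it here.
block = ResidualBlock(channels=32)
y = block(torch.randn(1, 32, 64, 8))   # (batch, channels, N, M)
```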
### _Model label generation_

For the model to be trained, the ground-truth (GT) label is crucial: it is the true range-Doppler map with which the model matches the extracted features during supervised learning. At the training stage, it is used to associate \(\hat{\mathbf{H}}\) with its range-Doppler map. Each target \(p\) is characterized by a pair of frequencies (\(f_{p,1}\),\(f_{p,2}\)) and the amplitude \(b_{p}\) in accordance with (3). A GT range-Doppler map \(g_{t}\) of dimension (\(N\), \(M\)) contains zeros at all matrix points except at entries where targets are located (ideal map). Let us consider three sets \(F_{1}\), \(F_{2}\) and \(B\) such that \(f_{r,1}\in F_{1}\), \(f_{s,2}\in F_{2}\) and \(b\in B\) for any \(r\in\{0;1;2;...;N_{F_{1}}-1\}\) and for any \(s\in\{0;1;2;...;N_{F_{2}}-1\}\). \(N_{F_{1}}\) and \(N_{F_{2}}\) are the cardinalities of \(F_{1}\) and \(F_{2}\), respectively. The GT range-Doppler map is constructed as follows:

* For each target \(p\), randomly select \(f_{p,1}\in F_{1}\) and \(f_{p,2}\in F_{2}\), with \(i_{p}\) and \(j_{p}\) the indices of \(f_{p,1}\) and \(f_{p,2}\) in \(F_{1}\) and \(F_{2}\), respectively.
* The indices of \(f_{p,1}\) and \(f_{p,2}\) in the GT map are \(k_{p}=\frac{i_{p}\times N}{N_{F_{1}}}\) and \(l_{p}=\frac{j_{p}\times M}{N_{F_{2}}}\), respectively.
* The GT range-Doppler map is defined as \[g_{t}(k_{p},l_{p})=\begin{cases}\beta\ln(\gamma b_{p}+1),&0\leqslant p\leqslant N_{t}-1\\ 0,&\text{otherwise}\end{cases},\] (4) where \(\beta\) and \(\gamma\) are two constants, and \(\beta\ln(\gamma b_{p}+1)\) encodes the logarithmic amplitude information of \(b_{p}\).

The loss function based on the squared error is given by: \[\mathscr{L}(g_{t},\hat{g})\ =\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}\left(\hat{g}(k,l)-g_{t}\left(k,l\right)\right)^{2}, \tag{5}\] where the frequency representation \(\hat{g}\) denotes the output of the CNN. The objective is to minimize \(\mathscr{L}(g_{t},\hat{g})\) over iterations.

## IV Simulation results

In this section, we assess the performance of our CNN in terms of the RMSE of the range and velocity index estimation, the PSNR of the estimated range-Doppler map, and the model training and prediction time. All simulations were run on a computer equipped with an Intel Xeon CPU 2.20 GHz, 13 GB RAM, a Tesla K80 accelerator, and 12 GB GDDR5 VRAM. The learning rate is initialized to \(8\times 10^{-5}\), the batch size is set to 64, the dropout factor to 0.5, and \(\beta=\gamma=100\). The number of targets to be randomly predicted is \(N_{t}\) = 5. For each given target \(p\), the amplitude \(b_{p}\) is the absolute value of \(0.1+r_{g}\), and the phases are chosen to be the same, with \(r_{g}\) being sampled from a standard Gaussian distribution. The frequency coordinates of the \(p\)th target, \(f_{p,1}\) and \(f_{p,2}\), are both generated within the range \([-0.5,0.5]\). We have fixed the dimensions of the radar signal to \(N=64\) and \(M=8\). To avoid targets being too close to each other, the minimum separation between the coordinates of any two targets on the \(f_{1}\) axis is \(\frac{1}{3N}\), whereas the minimum separation between the coordinates of any two components on the \(f_{2}\) axis is \(\frac{1}{3M}\)[13]. First, we generate 3,000 noise-free signals and their GT following the previously mentioned configuration. For each noise-free signal, the corresponding noisy signals are generated with SNR within \([-15,30]\) dB, all matching the same equivalent GT.
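As an illustration of how such a training pair could be assembled, the sketch below builds the GT map of (4) for one set of targets and evaluates the loss (5); mapping the frequencies directly onto the \((N,M)\) grid is a simplification of the index sets \(F_{1}\), \(F_{2}\) described above, and the helper names are our own.

```python
import numpy as np

def ground_truth_map(f1_list, f2_list, b_list, N=64, M=8, beta=100.0, gamma=100.0):
    """Ideal range-Doppler map of Eq. (4): zeros everywhere except at target cells."""
    g_t = np.zeros((N, M))
    for f1, f2, b in zip(f1_list, f2_list, b_list):
        k = int(np.round((f1 + 0.5) * N)) % N   # row index for f1 in [-0.5, 0.5]
        l = int(np.round((f2 + 0.5) * M)) % M   # column index for f2 in [-0.5, 0.5]
        g_t[k, l] = beta * np.log(gamma * b + 1.0)
    return g_t

def squared_error_loss(g_hat, g_t):
    """Training loss of Eq. (5) between the CNN output and the GT map."""
    return np.sum((g_hat - g_t) ** 2)
```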
We end up with a dataset of 30,000 entries, well distributed over the whole SNR range, as shown in Fig. 3(a). Afterwards, we introduce the PSNR, which expresses the ratio between the maximum value of a signal and the power of the distorting noise that affects the quality of its representation. In the following implementation, we are dealing with standard 2D arrays. The mathematical representation of the PSNR for the estimated label \(\hat{g}\) is given by: \[PSNR=20\log_{10}\left(\frac{\max\{\hat{g}\}}{\sqrt{MSE}}\right), \tag{6}\] where the MSE is expressed as \[MSE=\frac{1}{NM}\sum_{k=0}^{N-1}\sum_{l=0}^{M-1}\|\hat{g}(k,l)-g_{t}(k,l)\|^{2}. \tag{7}\] Here, \(\max\{\hat{g}\}\) is the maximum pixel value of the image. When the pixels are represented using \(A\) bits per sample, \(\max\{\hat{g}\}\) is \(2^{A}-1\). For this implementation, labels are normalized between 0 and 1 during the PSNR computation process.

Fig. 3: Data structure, convergence and sample prediction.

Fig. 4: Comparison of the estimation of models.

Fig. 3(b) depicts the loss of the training and validation sets for our model, 2DResFreq and VGG-19. From the figure, we notice that our model starts converging after almost 30 epochs, whereas 2DResFreq and VGG-19 are still learning. This rapid convergence is due to batch normalization and ReLU units. Fig. 3(c) shows a random sample GT map, the corresponding predicted maps using our model, VGG-19 and 2DResFreq, and the one estimated using the 2D periodogram. As can be remarked in the figure, the periodogram suffers from side lobes, which are removed in the DL models. Figs. 4(a) and 4(b) present the RMSE of the range and velocity index estimation as a function of the SNR. Since each target \(p\) in the estimated map is characterized by its frequencies \(f_{p,1}\) and \(f_{p,2}\), with \(k_{p}\), \(l_{p}\) their respective indices, in a parameterized OFDM system where \(\Delta f\), \(f_{c}\) and \(T_{s}\) are defined and fixed, the ranges and the velocities can be directly computed using \(k_{p}\) and \(l_{p}\) as described in (2). Then, the RMSE of the estimated ranges and velocities can be calculated. Instead, we directly computed the RMSE of the indices. The proposed CNN outperforms the other approaches not only in range estimation but also in velocity estimation. For example, at SNR = 30 dB, in range index estimation, we have a log(2.8) dB, log(3.9) dB and log(5) dB gain with reference to VGG-19, 2DResFreq and the 2D periodogram, respectively. Similarly, in velocity index estimation, we have a log(0.45) dB, log(1.2) dB and log(1.6) dB gain with reference to VGG-19, 2DResFreq and the 2D periodogram, respectively. In Fig. 4, we plot the PSNR of the output of the proposed CNN and compare it with VGG-19, 2DResFreq and the 2D periodogram. Regarding all the predicted maps, the 2D periodogram output is the most corrupted by noise compared to the DL models, whose layers reduce the noise effect on the input signal. In fact, compared to VGG-19, 2DResFreq and the 2D periodogram, we gain 10 dB, 21 dB and 33 dB, respectively. Furthermore, we compute the training and prediction times of the three models over the training and testing datasets, respectively. We report the results in Table I. As clearly shown in the table, our model is slightly slower than VGG-19 during the training step, which can be performed offline in real applications.
Nonetheless, during the prediction step, which is the most critical for radar applications, our DL model has the fastest prediction time.

## V Conclusion

In this work, we proposed to estimate the range and velocity of moving targets by using a CNN that predicts the range-Doppler map directly from the channel estimates. The simulation results show that our model performs better in terms of the estimation error compared to VGG-19, 2DResFreq and the 2D periodogram. Moreover, our proposed model outputs the range and velocity estimates within a short prediction time and has a high ability to reduce the noise effect on the range-Doppler map, which leads to better detection accuracy. In our future work, we aim to extend this model to dynamic OFDM waveforms and longer frames.
2304.02267
Hairy black holes in AdS with Robin boundary conditions
We study hairy black holes in Einstein-Maxwell-complex scalar theory in four-dimensional asymptotically global anti-de Sitter (AdS) spacetime when the Robin boundary conditions are imposed on the scalar field. This setup is dual to the double trace deformation of strongly interacting field theory on $R \times S^2$ by charged scalar operators. We identify the instability of the Reissner-Nordstr\"{o}m-AdS (RNAdS) black holes under the Robin boundary conditions and construct backreacted geometries branching at the onset of the instability. Also considering associated horizonless geometries called boson stars, we obtain phase diagrams with fairly rich structure in the grand canonical ensemble depending on the boundary condition parameter or the deformation parameter, where phase transition occurs between thermal AdS, RNAdS, charged boson stars, and hairy black holes.
Tomohiro Harada, Takaaki Ishii, Takuya Katagiri, Norihiro Tanahashi
2023-04-05T07:31:52Z
http://arxiv.org/abs/2304.02267v2
# Hairy black holes in AdS with Robin boundary conditions ###### Abstract We study hairy black holes in Einstein-Maxwell-complex scalar theory in four-dimensional asymptotically global anti-de Sitter (AdS) spacetime when the Robin boundary conditions are imposed on the scalar field. This setup is dual to the double trace deformation of strongly interacting field theory on \(R\times S^{2}\) by charged scalar operators. We identify the instability of the Reissner-Nordstrom-AdS (RNAdS) black holes under the Robin boundary conditions and construct backreacted geometries branching at the onset of the instability. Also considering associated horizonless geometries called boson stars, we obtain phase diagrams with fairly rich structure in the grand canonical ensemble depending on the boundary condition parameter or the deformation parameter, where phase transition occurs between thermal AdS, RNAdS, charged boson stars, and hairy black holes. ## 1 Introduction Asymptotically anti-de Sitter (AdS) spacetime offers diverse gravitational dynamics. In contrast to asymptotically flat spacetime, black hole geometry can be considered in the canonical ensemble, where asymptotically global AdS experiences the first order phase transition between horizonless and black hole spacetimes [1; 2]. Through the AdS/CFT duality [3; 4; 5], it is interpreted as the confinement/deconfinement phase transition in strongly coupled Yang-Mills theory. When the gravitational theory has \(U(1)\) gauge field and charged scalar field, the spontaneous breaking of the gauge symmetry is discussed as the appearance of the superfluid/superconducting phase [6; 7; 8]. Aforementioned phenomena are often considered with the Dirichlet boundary conditions imposed on the asymptotic behavior of the scalar field at the AdS boundary. However, general conditions known as the Robin boundary conditions (also called mixed boundary conditions) are allowed [9; 10; 11; 12; 13] if the field in AdS has a mass close to the Breitenlohner-Freedman bound [14; 15]. When the parameter for the Robin boundary conditions exceeds a critical value and the deviation from the Dirichlet boundary condition becomes sufficiently large, the AdS spacetime becomes unstable [11]. The Robin (or mixed) boundary conditions are related to multitrace deformation in the dual field theory in the AdS/CFT interpretation [16; 17; 18]. Not only for scalar field considered in these literature, but also the Robin boundary conditions can be imposed for vector field and discussed in the context of introducing dynamical gauge field on the AdS boundary [11; 19; 20]. Robin boundary conditions have also been considered for metric field so as to promote the boundary metric dynamical [21]. In [22], two of the authors studied the linear mode stability of the four-dimensional Reissner-Nordstrom AdS (RNAdS) spacetime with global AdS asymptotics for neutral and charged complex scalar field perturbations with Robin boundary conditions.1 The neutral field shows an instability for the Robin boundary conditions with parameters greater than a critical value. The charged scalar field suffers another type of instability due to the electromagnetic interaction with the black hole, which is known as superradiance [25; 26; 27; 28; 29].2 With the imposition of the Robin boundary conditions, superradiance and the boundary contribution interplay with each other, potentially enhancing the instability caused by the superradiance depending on the parameters of the scalar field and the background spacetime. 
It was argued in [22] that the instability would change the RNAdS to charged hairy black hole solutions with a nontrivial scalar field satisfying the Robin boundary conditions, which are a candidate for the final fate of the instability. First studied for neutral scalar, the presence of hairy solutions with the Robin boundary conditions has been known; see [31; 32; 33] for early works. Motivated by [22], we study charged hairy solutions in four dimensional global AdS spacetime in detail. Footnote 1: There is a recent work on the quasinormal mode spectrum of a scalar field with the Robin boundary conditions in Schwarzschild AdS\({}_{4}\) spacetime [23]. See also superradiance in BTZ black holes with the Robin boundary conditions [24]. Footnote 2: Instability of RNAdS can be associated with the violation of near horizon AdS\({}_{2}\) BF bound, but it is a necessary condition. For charged scalar, superradiance occurs regardless [30], so here we simply describe the cause of this charged instability as superradiance. In this paper, we study hairy black holes that branch at the onset of instability of the charged scalar field with the Robin boundary conditions on the four-dimensional RNAdS, and obtain results that agree with the expectation of [22] explained above. Following [7; 8], hairy black holes have been widely studied in Einstein-Maxwell-complex scalar theory in asymptotically AdS spacetime, in both Poincare and global AdS spacetimes and in various dimensions. In studies of this sort, the Dirichlet (and Neumann) boundary conditions are often considered. For example, the phase diagram in asymptotically global AdS\({}_{4}\) in the grand canonical ensemble was explored in [34].3 In this paper, we conduct a comprehensive study on the phase structures realized under the Robin boundary conditions in the grand canonical ensemble. Within the four dimensional global AdS spacetime, charged scalar solitons (boson stars) and hairy black holes in setups including the same model as ours have been considered in [41].4 Our work may be viewed as a generalization of this work, clarifying the full phase structure of such solutions under the Robin boundary conditions. Footnote 3: hairy black holes have been also considered in global AdS\({}_{5}\)[35; 36; 37; 38]. See also [39; 40]. Footnote 4: See also prior works in three dimensions [42; 43]. See also a recent study of boson stars of mixed boundary conditions deformation [44; 45] motivated by the analysis on the large charge limit in CFT [46]. This paper is organized as follows. In section 2, we prepare the setup for constructing boson stars and hairy black holes with the Robin boundary conditions. In particular, we study the onset of instability of the four dimensional RNAdS spacetime with respect to the charged scalar field perturbations with the Robin boundary conditions. In section 3, we show results of the phase diagram for our setup under the Robin boundary conditions. Section 4 concludes the paper. In appendix A, we summarize holographic renormalization for the Robin boundary conditions. In appendix B, we discuss the first law of thermodynamics. In appendix C, we comment on entropies in microcanonical ensemble. ## 2 Setup ### Reissner-Nordstrom AdS black hole We consider Einstein-Maxwell-complex scalar theory in four-dimensional asymptotically global AdS spacetime. 
The action is \[S=\frac{1}{8\pi G_{N}}\int\mathrm{d}^{4}x\sqrt{-g}\left(\frac{1}{2}\left(R-2 \Lambda\right)-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|D\phi|^{2}-m^{2}|\phi|^{2} \right), \tag{1}\] where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\), and \(D_{\mu}\phi=\partial_{\mu}\phi-iqA_{\mu}\phi\). The gauge coupling constant is written by \(q\). We use units in which \(\Lambda=-3\) so that the AdS radius can be set to unity. The mass of the scalar is related to the conformal dimension of the scalar operator in the dual field theory as \(m^{2}=\Delta(\Delta-3)\). We set \(m^{2}=-2\) in this paper. Then, this equation is solved by \(\Delta=1,2\). The equations of motion are \[G_{\mu\nu}+\Lambda g_{\mu\nu}=T_{\mu\nu},\quad\nabla_{\mu}F^{\mu\nu}=J^{\nu}, \quad\left(D_{\mu}D^{\mu}-m^{2}\right)\phi=0, \tag{2}\] where \[T_{\mu\nu} =F_{\mu\lambda}F_{\nu}{}^{\lambda}+(D_{\mu}\phi)^{*}D_{\nu}\phi+ (D_{\nu}\phi)^{*}D_{\mu}\phi+g_{\mu\nu}\mathcal{L}, \tag{3}\] \[\mathcal{L} =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|D\phi|^{2}-m^{2}|\phi|^{2},\] \[J^{\mu} =2q^{2}|\phi|^{2}A^{\mu}+iq(\phi^{*}\partial^{\mu}\phi-\phi \partial^{\mu}\phi^{*}).\] We study spherically symmetric static solutions in the spherical AdS boundary. The ansatz can be given by \[\mathrm{d}s^{2} =-\left(1+r^{2}\right)f(r)e^{-\chi(r)}\mathrm{d}t^{2}+\frac{ \mathrm{d}r^{2}}{\left(1+r^{2}\right)f(r)}+r^{2}\mathrm{d}\Omega_{2}^{2}, \tag{4}\] \[A =A_{t}(r)\mathrm{d}t,\qquad\phi=\phi(r). \tag{5}\] The conformal boundary of the AdS is \(R\times S^{2}\) and located at \(r=\infty\). When \(f(r)=1\) and \(\chi(r)=0\) (as well as \(A=\phi=0\)), the empty AdS is obtained. For horizonless geometries, \(r=0\) is the center of the AdS. The RNAdS black hole is given by \[\begin{split}\left(1+r^{2}\right)f(r)&=1+r^{2}- \left(1+r_{h}^{2}+\frac{Q^{2}}{2r_{h}^{2}}\right)\frac{r_{h}}{r}+\frac{Q^{2}} {2r^{2}},\\ A_{t}(r)&=\mu-\frac{Q}{r},\quad\chi(r)=\phi(r)=0, \end{split} \tag{6}\] where \(r_{h}\) denotes the location of the outermost horizon, satisfying \(f(r_{h})=0\), and \(Q\) is the charge of the black hole per solid angle. The total charge is given by \(\mathcal{Q}=4\pi Q\). We choose the gauge as \(A_{t}(r_{h})=0\), and then we obtain \(\mu=Q/r_{h}\), where \(\mu\) is identified as the chemical potential of the gauge field. For the diagonal metric (4), the Hawking temperature and the Bekenstein-Hawking entropy are given by \[T_{\rm H} =\frac{1}{4\pi}(1+r_{h}^{2})f^{\prime}(r_{h})e^{-\chi(r_{h})/2}, \tag{7}\] \[S_{\rm BH} =\frac{8\pi^{2}r_{h}^{2}}{8\pi G_{N}}. \tag{8}\] For the RNAdS, the temperature is \[T_{\rm H}=\frac{2(1+3r_{h}^{2})-\mu^{2}}{8\pi r_{h}}. \tag{9}\] If \(\mu^{2}<2\), the temperature has the minimum \(T_{\rm H}=T_{0}\), when \[r_{h}=\frac{\sqrt{2-\mu^{2}}}{\sqrt{6}}\equiv r_{0},\quad T_{0}=\frac{\sqrt{3 (2-\mu^{2})}}{2\sqrt{2}\,\pi}=\frac{3r_{0}}{2\pi}. \tag{10}\] Black holes with \(r_{h}>r_{0}\) are called large black holes, while those with \(r_{h}<r_{0}\) are small. In the grand canonical ensemble, the first order transition known as the Hawking-Page transition occurs between the RNAdS and AdS when [1; 2; 47] \[r_{h}=\frac{\sqrt{2-\mu^{2}}}{\sqrt{2}}\equiv r_{\rm HP},\quad T_{\rm HP}= \frac{\sqrt{2-\mu^{2}}}{\sqrt{2}\,\pi}=\frac{r_{\rm HP}}{\pi}. \tag{11}\] The horizon radius, or temperature, of this transition can be determined by comparing grand potentials between Euclidean RNAdS (102) and thermal AdS geometries. The solution with the lower grand potential is identified to be realized physically. 
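For quick orientation, the closed-form relations (9)-(11) are easy to evaluate; the following short helper (our own, purely illustrative) returns the temperature and the transition landmarks for \(\mu^{2}<2\).

```python
import numpy as np

def rnads_temperature(r_h, mu):
    """Hawking temperature of the RNAdS black hole, Eq. (9)."""
    return (2.0 * (1.0 + 3.0 * r_h**2) - mu**2) / (8.0 * np.pi * r_h)

def rnads_landmarks(mu):
    """Minimal-temperature and Hawking-Page radii/temperatures, Eqs. (10)-(11), for mu^2 < 2."""
    r_0 = np.sqrt((2.0 - mu**2) / 6.0)    # below r_0: small black holes
    T_0 = 3.0 * r_0 / (2.0 * np.pi)       # minimum temperature
    r_HP = np.sqrt((2.0 - mu**2) / 2.0)   # Hawking-Page radius
    T_HP = r_HP / np.pi                   # Hawking-Page temperature
    return r_0, T_0, r_HP, T_HP
```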
The RNAdS is favored over the thermal AdS in \(r>r_{\rm HP}\), and vice versa. Note that \(r_{\rm HP}>r_{0}\). In the grand canonical ensemble, the phase in \(T>T_{\rm H}\) is the _RNAdS black hole phase_. The phase in \(T<T_{\rm H}\) corresponds to horizonless AdS geometry, which we refer to as the _thermal AdS phase_. If \(\mu^{2}>2\), the temperature (9) becomes zero when \[\mu=\sqrt{2(1+3r_{h}^{2})}\equiv\mu_{\rm ext}. \tag{12}\] This is when the RNAdS black hole becomes extremal. For fixed \(r_{h}\), the range of \(\mu\) is bounded from above as \(\mu\leq\mu_{\rm ext}\). Note that both \(T_{\rm HP}\) and \(T_{0}\) become zero at the borderline value \(\mu^{2}=2\). Therefore, for \(\mu^{2}>2\), the Hawking-Page transition does not appear in the phase diagram, and the zero temperature geometry is the extremal RNAdS. To solve the equations of motion, it is convenient to use the \(z\)-coordinate defined by \(z\equiv 1/r\). In this coordinate, the AdS boundary is located at \(z=0\). By the coordinate change, the metric (4) can be rewritten as \[{\rm d}s^{2}=\frac{1}{z^{2}}\left[-\left(1+z^{2}\right)f(z)e^{-\chi(z)}{\rm d} t^{2}+\frac{{\rm d}z^{2}}{\left(1+z^{2}\right)f(z)}+{\rm d}\Omega_{2}^{2} \right]. \tag{13}\] The RNAdS black hole solution (6) becomes \[\left(1+z^{2}\right)f(z)=1+z^{2}-\left(1+z_{h}^{2}+\frac{Q^{2}z_{h}^{4}}{2}\right) \frac{z^{3}}{z_{h}^{3}}+\frac{Q^{2}z^{4}}{2},\quad A_{t}=\mu\left(1-\frac{z}{z_{ h}}\right), \tag{14}\] where \(z_{h}\equiv 1/r_{h}\). ### Instability of RNAdS We consider spherically symmetric scalar field perturbations of the RNAdS, \(\phi(z)e^{-i\omega t}\). The Klein-Gordon equation takes the form \[\phi^{\prime\prime}+\left(\frac{F^{\prime}}{F}-\frac{2}{z}\right)\phi^{\prime }-\left(\frac{m^{2}}{z^{2}F}-\frac{(\omega+qA_{t})^{2}}{F^{2}}\right)\phi=0, \tag{15}\] where \({}^{\prime}\equiv\partial_{z}\), \(F=(1+z^{2})f\), \(m^{2}=-2\), and \(f,A_{t}\) are given by the RNAdS background (14). Because of the presence of the horizon, the frequency \(\omega\) is complex in general. The imaginary part of the frequency is negative \(\operatorname{Im}\omega<0\) if the perturbation is stable, and positive \(\operatorname{Im}\omega>0\) if instability is induced in the RNAdS background. The border \(\operatorname{Im}\omega=0\) is the onset of instability. In the gauge we use, \(A_{t}(z_{h})=0\), both the real and imaginary parts of \(\omega\) become zero simultaneously at the onset of instability, \(\operatorname{Re}\omega=\operatorname{Im}\omega=0\).5 This means that, to search the onset of instability of \(\phi\), it is sufficient to assume the static perturbation \(\phi(z)\) and find nontrivial normal modes. Footnote 5: Another gauge is often used that the gauge field vanishes asymptotically while it is nonzero on the horizon, \(A_{t}\to 0\) (\(z\to 0\)) and \(A_{t}(z_{h})\neq 0\). In that gauge, the perturbation \(\phi=e^{-i\omega t}\phi(z)\) has a nonzero real part \(\operatorname{Re}\omega\neq 0\) at the onset of instability \(\operatorname{Im}\omega=0\)[22]. However, this frequency-dependence in the real part can be absorbed by the gauge choice. In this paper, we use a gauge where \(\operatorname{Re}\omega=\operatorname{Im}\omega=0\) at the onset of instability. 
At the onset of instability \(\omega=0\), (15) is reduced to a static perturbation equation, \[\phi^{\prime\prime}+\left(\frac{F^{\prime}}{F}-\frac{2}{z}\right)\phi^{\prime} -\left(\frac{m^{2}}{z^{2}F}-\frac{q^{2}A_{t}^{2}}{F^{2}}\right)\phi=0, \tag{16}\] which depends on three parameters \((r_{h},\mu,q)\) for given \(m\). For the onset of instability, we search normal mode solutions to (16) when boundary conditions are imposed at \(z=0\) and \(z=z_{h}\). On the horizon \(z=z_{h}\), we impose regularity (which used to be the ingoing wave boundary condition if \(\omega\neq 0\), away from the onset of instability). We impose Robin boundary conditions at the AdS boundary \(z=0\). For \(m^{2}=-2\), the asymptotic behavior of \(\phi\) in \(z\to 0\) takes the form \[\phi=\phi_{1}z+\phi_{2}z^{2}+\cdots, \tag{17}\] where \(\phi_{1}\) and \(\phi_{2}\) are integration constants. Because the scalar mass is in the range \(-9/4\leq m^{2}\leq-5/4\), both asymptotic behaviors \(\phi\sim z\) and \(\phi\sim z^{2}\) are normalizable [48]. This means that both coefficients \(\phi_{1}\) and \(\phi_{2}\) can be nonzero for normalizable normal modes. The boundary conditions with \(\phi_{1}=0\) and \(\phi_{2}\neq 0\) are called Dirichlet, and those with \(\phi_{1}\neq 0\) and \(\phi_{2}=0\) are Neumann. The case with general values of \(\phi_{1}\neq 0\) and \(\phi_{2}\neq 0\) is called the Robin boundary conditions. The Robin boundary conditions can be specified by a parameter \(\zeta\) defined by \[\cot\zeta=\frac{\phi_{2}}{\phi_{1}}. \tag{18}\] We choose the domain of \(\zeta\) to be periodic in \(0\leq\zeta<\pi\). The points \(\zeta=0\) and \(\zeta=\pi/2\) correspond to the Dirichlet and Neumann boundary conditions, respectively. Under the Robin boundary conditions, we search the onset of instability for the scalar field perturbation in the four-dimensional parameter space \((\zeta,r_{h},\mu,q)\). Technically, for a set of three parameters \((r_{h},\mu,q)\), we integrate the perturbation equation (16) from the horizon to the AdS boundary and read off the asymptotic coefficients \(\phi_{1}\) and \(\phi_{2}\) in (17), from which \(\zeta\) can be obtained. This procedure gives a location of the onset of instability in the \((\zeta,r_{h},\mu,q)\) parameter space. Iterating this procedure while varying the values for the three parameters \((r_{h},\mu,q)\), we obtain a relation among the four parameters \((\zeta,r_{h},\mu,q)\). Thus, for instance, fixing \((r_{h},q)\), we obtain the onset of instability is given as a curve in \((\mu,\zeta)\) plane. In the horizonless limit \(r_{h}=0\), the perturbation equation (16) can be solved analytically. The background is the global AdS \(f=1\) with a constant gauge field \(A_{t}=\mu\). The perturbation equation (16) then becomes \[\phi^{\prime\prime}-\frac{2}{z(1+z^{2})}\phi^{\prime}-\left(\frac{m^{2}}{z^{2} (1+z^{2})}-\frac{\mu^{2}q^{2}}{(1+z^{2})^{2}}\right)\phi=0. \tag{19}\] When the horizon is absent, we impose \(\phi^{\prime}(z)|_{z=\infty}=0\) at the center of the AdS. With this boundary condition and \(m^{2}=-2\), (19) is solved by \[\phi(z)=\frac{z}{\mu q}\sin\left(\mu q\cot^{-1}z\right), \tag{20}\] which is normalized as \(\phi(z)|_{z=\infty}=1\). Expanding this around \(z=0\), we find [11; 22] \[\cot\zeta=-\frac{\mu q}{\tan(\pi\mu q/2)}. \tag{21}\] For \(r_{h}=0\), \(\mu\) and \(q\) always show up in a pair \(\mu q\). The set of the parameters \((\zeta,\mu,q)\) satisfying the above relation gives a normal mode in the global AdS. 
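The search just described can be made concrete with a short numerical sketch: integrate (16) from just outside the horizon, using the regular series there, down to small \(z\), read off \(\phi_{1}\) and \(\phi_{2}\) from (17), and convert them into \(\zeta\) via (18). The SciPy-based sketch below is our own illustrative implementation, not the code used for the figures; the offsets and tolerances are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def onset_zeta(r_h, mu, q, m2=-2.0, eps=1e-5, z_cut=1e-4):
    """Robin parameter zeta for which the static mode of (16) exists at (r_h, mu, q)."""
    z_h = 1.0 / r_h
    Q = mu * r_h                                   # from mu = Q / r_h

    def F(z):                                      # (1 + z^2) f(z), Eq. (14)
        return (1.0 + z**2
                - (1.0 + z_h**2 + 0.5 * Q**2 * z_h**4) * (z / z_h)**3
                + 0.5 * Q**2 * z**4)

    def dF(z, h=1e-7):                             # numerical derivative of F
        return (F(z + h) - F(z - h)) / (2.0 * h)

    def A_t(z):
        return mu * (1.0 - z / z_h)

    def rhs(z, y):                                 # y = (phi, phi')
        phi, dphi = y
        ddphi = (-(dF(z) / F(z) - 2.0 / z) * dphi
                 + (m2 / (z**2 * F(z)) - (q * A_t(z))**2 / F(z)**2) * phi)
        return [dphi, ddphi]

    # Regularity at z = z_h: F' phi' - (m2 / z_h^2) phi = 0, since A_t(z_h) = 0.
    phi_h = 1.0
    dphi_h = m2 * phi_h / (z_h**2 * dF(z_h))
    z0 = z_h * (1.0 - eps)
    y0 = [phi_h + dphi_h * (z0 - z_h), dphi_h]
    sol = solve_ivp(rhs, (z0, z_cut), y0, rtol=1e-10, atol=1e-12)

    zb, (phi_b, dphi_b) = sol.t[-1], sol.y[:, -1]
    phi1 = (2.0 * phi_b - dphi_b * zb) / zb        # phi ~ phi1 z + phi2 z^2 near z = 0
    phi2 = (dphi_b * zb - phi_b) / zb**2
    return np.arctan2(phi1, phi2) % np.pi          # cot(zeta) = phi2 / phi1, zeta in [0, pi)
```

Scanning such a routine over \((r_{h},\mu,q)\) traces out onset curves of the kind discussed below.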
While the global AdS is stable against linear perturbations, nontrivial scalar solutions branch from the AdS at the normal modes. For this reason, with a slight abuse of terminology, we also refer to the location of the AdS normal modes as the onset of instability. In figure 1, we show (a) the location of the AdS charged scalar field normal modes (\(r_{h}=0\)) and (b) the onset of instability of the RNAdS for \(r_{h}=0.1,0.5,1\) at \(q=1\). In figure 1, the value of \(\mu\) is bounded from above by extremality as \(\mu\leq\mu_{\rm ext}\) (12), which is marked by the vertical red dashed line for each \(r_{h}\). In the same figure, the RNAdS is unstable to the charged scalar field perturbation above each curve, which can be found by studying full quasinormal modes by including nonzero frequencies \(\omega\) (see also [22]). Correspondingly, also in figure 1, the scalar field will be nonzero in the region upper from the curve. In figure 1, we emphasize that the normal modes can be characterized by the number of nodes in the radial direction, which increases as the curve reaches \(\zeta=0\). The solution without a node is called the fundamental mode, and the solution with nodes are called overtones. Because overtones cost more energy than the fundamental mode, later in the paper, we consider only the backreacted solutions as a fully nonlinear extension of the fundamental mode. In figure 1(b), the data for \(r_{h}=0.1\) shows that, when the coupling \(q\) is small, the onset of instability terminates at the extremality before reaching the Dirichlet boundary conditions (\(\zeta=0\)). For the Dirichlet boundary conditions to be unstable, a larger \(r_{h}\) is necessary. In figure 2, the onset of instability in the Schwarzschild AdS limit (\(\mu=0\)) is shown. Figure 1: (a) The charged scalar field normal modes of global AdS with constant \(A_{t}=\mu\). (b) The onset of instability of the RNAdS for \(r_{h}=0.1,0.5,1\) at \(q=1\). Figure 2: The onset of instability of the Schwarzschild AdS (\(\mu=0\)). The blue and orange parts are respectively in the large and small black hole branches of the Schwarzschild AdS. Combining with the analysis of quasinormal modes [22], we find that, in (a), the Schwarzschild AdS is unstable above the curve, and correspondingly in (b), it is unstable to the right of each of the blue and orange curves. The value in the horizonless limit (\(r_{h}=0\)) is analytically given by \[\zeta_{c}=\pi-\tan^{-1}(\pi/2)\simeq 0.6805\pi. \tag{22}\] In figure 2(a), the curve has the minimum at \(r_{h}\simeq 0.4807(<r_{0})\) with \(\zeta_{\rm min}\simeq 0.6728\pi\) and approaches \(\zeta\rightarrow\pi\) as \(r_{h}\rightarrow\infty\). There are hence no overtones for the Schwarzschild AdS. In figures 3,4,5, we show the onset of instability of the RNAdS for the fundamental modes with different \(\zeta\) at \(q=1,\sqrt{2},2\), respectively. The same onset results are shown in the \((\mu,r_{h})\) and \((\mu,T_{\rm H})\) planes. We do so because we will discuss the phase structure in the \((\mu,T_{\rm H})\) plane of the phase diagram later in the paper, and it will be instructive to have the location of the instability both in the \((\mu,r_{h})\) and \((\mu,T_{\rm H})\) planes. In each figure, we show the locations of the onset of instability for 8 parameter values \(\zeta/\pi=0,0.1,\ldots,0.7\). (Among the 8 color lines, the lightest color is \(\zeta=0\) and the darkest \(\zeta/\pi=0.7\).) The red dashed line denotes the extremal RNAdS, below which no regular RNAdS exist. 
The arc by the thin black line (\(r_{h}=r_{\rm HP}\) and \(T_{\rm H}=T_{\rm HP}\)) is the Hawking-Page transition of the RNAdS (11). Inside the arc, the thermal AdS is thermodynamically favored over the RNAdS. In figures (a), the gray dashed line (\(r_{h}=r_{0}\)) separates the small and large black holes (10), and small black holes are inside the arc. In figures (b), the gray dashed line (\(T_{\rm H}=T_{0}\)) denotes the minimal temperature \(T_{0}\), which is realized when \(r_{h}=r_{0}\). Figure 3: The onset of instability of the RNAdS for \(q=1\). Color lines show the locations of the onset of instability for \(\zeta/\pi=0,0.1,\ldots,0.7\) from bottom to top (lighter to darker). For each value of \(\zeta\), instability occurs in the region below the corresponding curve. Panel (a): phase diagram on the \((\mu,r_{H})\) plane. The red dashed line corresponds to the extremal solutions, below which no black hole solutions exist. The curves for \(r_{h}=r_{\rm HP}\) (black solid) and \(r_{h}=r_{0}\) (gray dashed) denote the Hawking-Page transition and the transition between the small/large black holes. Panel (b): phase diagram on the \((\mu,T_{\rm H})\) plane. The curve for \(T_{\rm H}=T_{0}\) (gray dashed) denote the minimum horizon temperature, which corresponds to \(r_{H}=r_{0}\) and no black hole solutions with \(T<T_{\rm H}\) exist. Note that \(r_{h}=0\) on the \((\mu,r_{h})\) plane is mapped to \(T_{\rm H}\rightarrow\infty\) on the \((\mu,T_{\rm H})\) plane. The red dashed line at \(T_{\rm H}=0\) corresponds to the extremal solutions. In figures (a), the RNAdS is unstable below each onset curve. In the grand canonical ensemble, we are interested in the onset of instability outside the arc given by \(r_{h}=r_{\rm HP}\) or \(T_{\rm H}=T_{\rm HP}\). Instability can be understood in terms of superradiance [22].6 With the imposition of the Robin boundary condition, superradiance and the boundary contribution interplay with each other, potentially enhancing instability caused by superradiance depending on the parameters of the scalar field and the background spacetime. It is demonstrated in figures (a) that, for fixed \(q\) and \(r_{h}\), the value of \(\zeta\) at the onset increases as \(\mu\) is decreased. That is, the parameter range of \(\mu\) for instability is wider as \(\zeta\) is increased (see also figure 8 Figure 4: Same as figure 3 but for \(q=\sqrt{2}\). Figure 5: Same as figure 3 but for \(q=2\). in [22]). In figure 3, we can see that the extremal RNAdS are stable if \(\zeta\) is small and \(\mu\) is not sufficiently large, while in figures 4 and 5, the extremal RNAdS are unstable to all \(\zeta\). The critical value for the instability of the Dirichlet boundary condition \(\zeta=0\) is \(q=\sqrt{2}\), that is, on the phase diagrams on the \((\mu,r_{h})\) plane (see Panel (a) of figures 3, 4, 5), the onset curve for \(\zeta=0\) ends on the (red dashed) curve of the extremal black hole solutions when \(q<\sqrt{2}\), while it ends on the \(r_{h}=0\) axis when \(q>\sqrt{2}\).7 On the phase diagrams on the \((\mu,T_{\rm H})\) plane, the extremal black hole solutions correspond to the \(\mu\geq\sqrt{2}\) part of the \(T_{\rm H}=0\) axis, and only a part of it is covered by the instability region for \(\zeta=0\) when \(q<\sqrt{2}\), while it is wholly covered by the instability region when \(q>\sqrt{2}\). Footnote 7: See [34] for an earlier discussion on the phase diagram for the Dirichlet boundary condition. 
Note that the Hawking-Page transition occurs at \(\mu=1\) in this reference due to a normalization different from ours. See also [29], which studied a massless scalar, but the observations given there can be easily generalized to massive cases.

### Hairy black holes

Knowing the onset of instability for the charged scalar field perturbation of the RNAdS, we will construct backreacted hairy black hole solutions branching at the onset of instability. With the ansatz (13), the equations of motion (2) are reduced to coupled ODEs for \(f(z),\chi(z),\phi(z),A_{t}(z)\) as \[F^{\prime}-\left(\frac{3}{z}+z\phi^{\prime 2}\right)F-e^{\chi}\left(\frac{z^{3}A_{t}^{\prime 2}}{2}+\frac{zq^{2}A_{t}^{2}\phi^{2}}{F}\right)+z+\frac{3}{z}+\frac{2\phi^{2}}{z} =0, \tag{23}\] \[\chi^{\prime}-2z\phi^{\prime 2}-\frac{2ze^{\chi}q^{2}A_{t}^{2}\phi^{2}}{F^{2}} =0,\] (24) \[\phi^{\prime\prime}+\left(\frac{F^{\prime}}{F}-\frac{\chi^{\prime}}{2}-\frac{2}{z}\right)\phi^{\prime}+\left(\frac{2}{z^{2}F}+\frac{e^{\chi}q^{2}A_{t}^{2}}{F^{2}}\right)\phi =0,\] (25) \[A_{t}^{\prime\prime}+\frac{\chi^{\prime}}{2}A_{t}^{\prime}-\frac{2q^{2}\phi^{2}}{z^{2}F}A_{t} =0, \tag{26}\] where \(F=(1+z^{2})f\). We need the asymptotic behavior of the field variables at \(z=0\) and \(z=z_{h}\) or \(z\rightarrow\infty\). At the AdS boundary \(z=0\), the asymptotic solutions are given by \[f(z) =1+\phi_{1}^{2}z^{2}+f_{3}z^{3}+\left(2\phi_{1}^{4}+2\phi_{2}^{2}+\frac{a_{1}^{2}e^{\chi_{0}}}{2}\right)z^{4}+\cdots, \tag{27}\] \[\chi(z) =\chi_{0}+\phi_{1}^{2}z^{2}+\frac{8}{3}\phi_{1}\phi_{2}z^{3}+\left(\frac{3}{2}\phi_{1}^{4}+2\phi_{2}^{2}-q^{2}a_{0}^{2}\phi_{1}^{2}e^{\chi_{0}}\right)z^{4}+\cdots,\] (28) \[\phi(z) =\phi_{1}z+\phi_{2}z^{2}+\frac{1}{2}\phi_{1}\left(\phi_{1}^{2}-q^{2}a_{0}^{2}e^{\chi_{0}}\right)z^{3}+\cdots,\] (29) \[A_{t}(z) =a_{0}+a_{1}z+q^{2}a_{0}\phi_{1}^{2}z^{2}+\frac{1}{6}\phi_{1}\left(4q^{2}a_{0}\phi_{2}+(2q^{2}-1)a_{1}\phi_{1}\right)z^{3}+\cdots, \tag{30}\] where \((f_{3},\chi_{0},\phi_{1},\phi_{2},a_{0},a_{1})\) are six integration coefficients not determined in the asymptotic analysis. We read them off from the asymptotic form of the numerical solutions. With this asymptotic behavior, the metric (13) in \(z\to 0\) naively becomes \[{\rm d}s^{2}|_{z\to 0}=\frac{1}{z^{2}}\left(-e^{-\chi_{0}}{\rm d}t^{2}+{\rm d}z^{2}+{\rm d}\Omega_{2}^{2}\right). \tag{31}\] This can be rescaled to \(\chi_{0}=0\) by the scaling symmetry (redefinition of \(t\)) as we will see shortly. In the presence of the black hole horizon, the regular asymptotic solutions near the horizon \(z=z_{h}=1/r_{h}\) are given by \[f(z) =-\frac{3+z_{h}^{2}+2\phi_{h}^{2}-e^{\chi_{h}}A_{h}^{2}z_{h}^{3}}{z_{h}(1+z_{h}^{2})}(z-z_{h})+\cdots, \tag{32}\] \[\chi(z) =\chi_{h}+\cdots,\] (33) \[\phi(z) =\phi_{h}+\cdots,\] (34) \[A_{t}(z) =A_{h}(z-z_{h})+\cdots, \tag{35}\] where \((\chi_{h},\phi_{h},A_{h})\) are integration constants, and the higher order coefficients are determined fully in terms of them. Two of these degrees of freedom are related to physical parameters, while the remaining one can be fixed by the scaling symmetry discussed below. In the absence of the horizon, the solutions (32)-(35) are replaced with the following series in \(z\to\infty\), \[f(z) =1+O(z^{-2}), \tag{36}\] \[\chi(z) =\chi_{h}+O(z^{-2}),\] (37) \[\phi(z) =\phi_{h}+O(z^{-2}),\] (38) \[A_{t}(z) =A_{h}+O(z^{-2}). \tag{39}\] There are again three integration constants.
Our ansatz, (4) and (5), has the following scaling symmetry: \[t\to e^{-c/2}t,\quad\chi\to\chi-c,\quad A_{t}\to e^{c/2}A_{t}, \tag{40}\] where \(c\) is an arbitrary constant. By this scaling, solutions with \(\chi_{0}\neq 0\) can be rescaled to those with canonical boundary metric satisfying \(\chi_{0}=0\). This means that, in numerical calculations, we can set the normalization of \(\chi\) to an arbitrary value convenient for us without loss of generality. We fix \(\chi_{h}=0\) when we compute and then rescale numerical results by (40) to satisfy \(\chi|_{z=0}=0\). From numerical results, we construct thermodynamic quantities. Carrying out the holographic renormalization as will be described in appendix A, we obtain the expressions of the thermodynamic quantities in terms of the asymptotic coefficients given in (27)-(30). For the Robin boundary conditions, the scalar field is dual to the dimension 1 operator \(\mathcal{O}_{1}\). After rescaling to \(\chi_{0}=0\), the expression of the total energy, charge, and scalar expectation value for the Robin boundary conditions are obtained in (102) and (106) as (the subscript \(R\) is removed here) \[\mathcal{E} =4\pi(-f_{3}+3\phi_{1}^{2}\cot\zeta)=4\pi(-f_{3}+3\phi_{1}\phi_{2 }), \tag{41}\] \[\mathcal{Q} =-4\pi a_{1},\qquad\langle\mathcal{O}_{1}\rangle=4\pi\sqrt{2}\, \phi_{1}.\] We also have the temperature \(T_{\rm H}\) through (7) and entropy \({\cal S}_{\rm BH}\equiv 8\pi G_{N}S_{\rm BH}=8\pi^{2}r_{h}^{2}\) through (8). We consider the grand canonical ensemble to discuss the phase structure. The grand potential is given by \[\Omega={\cal E}-T_{\rm H}{\cal S}_{\rm BH}-\mu{\cal Q}, \tag{42}\] where \(\mu=a_{0}\). The grand potential \(\Omega\) can be evaluated in two different expressions. One is by the combination of thermodynamic quantities as in the RHS of (42), and the other is directly by a bulk integral (100). These give the same physical quantity. In practice, the latter is less convenient and costly because of the necessity of numerically cancelling the divergent terms in the integrand. Hence, we use \(\Omega\) given by (42) when we evaluate the phase structure. Numerical solutions to (23)-(26) satisfying the Robin boundary conditions can be obtained simply by integrating the equations of motion. Specifying \((\phi_{h},A_{h},q,r_{h})\), we integrate (23)-(26) from the horizon \(z=z_{h}\) (or AdS center \(z=\infty\)) to the boundary \(z=0\) and read off \((f_{3},\chi_{0},\phi_{1},\phi_{2},a_{0},a_{1})\) in the asymptotic boundary behavior (27)-(30). After the rescaling to set \(\chi_{0}\to 0\), we calculate the thermodynamic quantities and \(\zeta\) (18). By these quantities, the grand canonical phase diagram is given as a four-dimensional space \((\mu,T_{\rm H},\zeta,q)\). When we present our results, we use data slices in the four dimensional parameter space. To check numerical results, we can evaluate first-law-like relations generalizing the first law of thermodynamics/black hole mechanics to the case with a nontrivial scalar field. The expressions are discussed in appendix B. For our solutions in the presence of the scalar field satisfying the Robin boundary conditions, we can use (104), \[{\rm d}{\cal E}=T_{\rm H}{\rm d}{\cal S}_{\rm BH}+\mu{\rm d}{\cal Q}+\frac{1}{ 8\pi}\langle{\cal O}_{1}\rangle^{2}{\rm d}(\cot\zeta). \tag{43}\] Note that this contains an atypical variation with respect to \(\cot\zeta=\phi_{1}/\phi_{2}\), which is not a thermodynamic quantity but is a parameter in the model. 
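For bookkeeping, the quantities entering (41)-(43) can be assembled from the read-off coefficients as in the following short sketch (our own illustrative helper; the coefficients are assumed to come from a numerical solution already rescaled to \(\chi_{0}=0\)):

```python
import numpy as np

def thermo_quantities(f3, phi1, phi2, a0, a1, r_h, T_H):
    """Energy, charge, condensate, entropy and grand potential, Eqs. (41)-(42)."""
    E = 4.0 * np.pi * (-f3 + 3.0 * phi1 * phi2)     # total energy
    Q = -4.0 * np.pi * a1                           # total charge
    O1 = 4.0 * np.pi * np.sqrt(2.0) * phi1          # <O_1>
    S = 8.0 * np.pi**2 * r_h**2                     # rescaled entropy, Eq. (8)
    mu = a0                                         # chemical potential
    Omega = E - T_H * S - mu * Q                    # grand potential, Eq. (42)
    return {"E": E, "Q": Q, "O1": O1, "S": S, "mu": mu, "Omega": Omega}

# The phase diagram follows by comparing Omega among thermal AdS, RNAdS,
# Robin boson stars and hairy Robin black holes at fixed (T_H, mu, zeta, q).
```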
However, if we compare between numerical solutions where both \(\phi_{1}\) and \(\phi_{2}\) vary while their ratio is not fixed, the first-law-like equation (43) is useful. We find that the above relation is satisfied within numerical errors.

## 3 Results

### Neutral boson stars and black holes

First, we consider neutral solutions.8 Here, we focus on the phase transition in the canonical ensemble.9 Before discussing the black holes, let us recall the basic features of the horizonless solutions (see also [50]). In figure 6, we show the energy and expectation value of neutral horizonless solutions branching at the appearance of the zero normal mode of AdS. Because the scalar field is subject to the Robin boundary conditions, we call the horizonless solutions _Robin boson stars_. These are a one-parameter family of solutions parametrized by \(\zeta\). Scalar hair grows in \(\zeta>\zeta_{c}\simeq 0.6805\pi\), where the phase transition is of second order. The quantities in the figure approach \(\langle\mathcal{O}_{1}\rangle\to+\infty\) and \(\mathcal{E}\to-\infty\) in \(\zeta\to\pi\). In the following, we will consider two kinds of generalization: black holes by introducing temperature \(T_{\rm H}\), and gauge field by adding \((\mu,q)\). Without the gauge field, the phase structure is specified by two parameters \((T_{\rm H},\zeta)\). In this situation, the free energy we compare for determining the phase structure is nothing but the grand potential (42) with \(\mu=0\), \(\Omega|_{\mu=0}=\left(\mathcal{E}-T_{\rm H}\mathcal{S}_{\rm BH}\right)_{\mu=0}\). We compare free energies among thermal AdS, Schwarzschild AdS, Robin boson stars, and black holes with neutral scalar hair, which we call _Robin black holes_. The free energy for the thermal AdS is zero, and that for the Schwarzschild AdS is given by (100) with \(\mu=0\). In figure 7(a), we show an example of the comparison of free energies among neutral solutions. For \(r_{h}\lesssim 1\), the two solutions experience the first order phase transition. In the figure, the lines of the \(r_{h}=0.9\) Robin black holes (blue) and boson stars (black dashed) cross around \(\zeta/\pi\sim 0.8\). For \(r_{h}\gtrsim 1\), the free energy of the Robin black holes is always lower than that of the Robin boson stars. The free energy for \(r_{h}=1.1\) (orange) is shown in the figure. The phase diagram for the neutral solutions is summarized in figure 7(b). The vertical green line at \(\zeta=\zeta_{c}\) is the second order phase transition from thermal AdS to Robin boson stars. The blue line in \(\zeta\geq\zeta_{t}\) at the border of the Schwarzschild AdS and hairy Robin black holes is the second order phase transition for growing scalar hair, where \[\zeta_{t}\simeq 0.6847\pi. \tag{10}\] Because the source of the scalar field is assumed to be zero, the scalar becomes nonzero spontaneously when the temperature is decreased [51; 52].

Figure 6: Neutral Robin boson stars.

As \(\zeta\) increases, the critical temperature for this scalar hair formation rises, and in the limit \(\zeta\to\pi\) (\(\cot\zeta\to-\infty\)), the Robin black holes dominate at any high temperatures. The red line in \(\zeta\geq\zeta_{t}\) marks the first order Hawking-Page transition between Robin black holes and Robin boson stars.
The short orange segment in \(\zeta_{c}\leq\zeta\leq\zeta_{t}\) (see the inset) is the first order phase transition between Schwarzschild AdS and Robin boson stars; for \(\zeta\) in this region, Robin black holes have the higher free energy than these two, and hence the first order phase transition is between the Schwarzschild AdS and Robin boson stars. The three lines (red, orange, blue) merge at \(\zeta=\zeta_{t}\) and \[T_{\rm H}\simeq 0.3184, \tag{3.2}\] which corresponds to the triple point at which the Schwarzschild AdS black hole, Robin black hole, and the Robin boson star have the same free energy. The temperature (3.2) at the triple point is slightly higher than the transition temperature \(T_{\rm HP}\) for the Schwarzschild AdS and thermal AdS phases (2.11), \(T_{\rm HP}|_{\mu=0}=1/\pi\simeq 0.3183\). We find that the Hawking-Page transition temperature depends on \(\zeta\) very mildly. We were not able to pin down the line of the Hawking-Page transition up to \(\zeta\to\pi\) because of numerical limitations. But, as long as we could confirm, the transition temperature (red line) behaves as \[T_{\rm H}\simeq 0.03(\zeta-\zeta_{t})/\pi+0.3184, \tag{3.3}\] which is close to \(T_{\rm HP}|_{\mu=0}\simeq 0.3183\) and is mostly insensitive to \(\zeta\). Thus, for the Hawking-Page transition temperature of neutral geometries, the effect of the Robin boundary conditions on the free energy is minor. This behavior suggests that the free energies of the Robin boson star and the Robin black hole changes by almost the same amount when \(\zeta\) changes. Figure 7: (a) Comparison of free energies between Robin black holes with \(r_{h}=0.9\) (top, blue) and \(1.1\) (bottom, orange) and boson stars (black dashed). (b) The phase diagram for neutral solutions. ### Charged boson stars To proceed with the reduced number of parameters, we discuss charged but horizonless solutions with the Robin boundary conditions, which we call _charged Robin boson stars_. Features of these solutions have been explored in [41] in the same setup as ours, but here we discuss the solutions in the phase space parametrized by \((\mu,\zeta,q)\). In figure 8, the expectation value \(\langle\mathcal{O}_{1}\rangle\) is compared for three cases with \(\zeta>\zeta_{c}\), \(\zeta=\zeta_{c}\), and \(\zeta<\zeta_{c}\) for \(q=1,2\). Recall that AdS at \(\mu=0\) is unstable for \(\zeta\geq\zeta_{c}\simeq 0.6805\pi\) for forming neutral boson stars. This implies that, for \(\zeta>\zeta_{c}\), charged Robin boson stars are connected to neutral Robin boson stars (with \(\mu=0\)) by turning on finite \(\mu\). Meanwhile, for \(\zeta<\zeta_{c}\), they branch at the appearance of the zero normal mode of AdS with finite \(\mu\). For example, when \(\zeta/\pi=0.6\), the value of \(\mu\) at the branching point of the condensed solution in figures 8(a) and 8(b) (i.e. the limit of \(\langle O_{1}\rangle\to 0\)) corresponds to \(\mu\) in the \(r_{h}=0\) limit in figures 3(a) and 5(a), respectively. The boundary between these two families of the solutions is \(\zeta=\zeta_{c}\). In addition, in figure 8(a), these charged Robin boson stars have the maximal \(\mu\) above which solutions do not exist. In the inset, the data region near the maximal \(\mu\) for \(\zeta/\pi=0.6\) is enlarged. While it might be visually unclear even in the inset, the region near the largest \(\mu\) has a spiral structure, corresponding to the attractor solutions discussed in [41]. In figure 8(b), the expectation value can be arbitrarily large. 
This corresponds to solutions allowing the planar limit discussed in [41]. The boundary between these two distinct behaviors depends both on \(\zeta\) and \(q\). The tendency is that the spiral structure disappears (moves to infinity on the \((\mu,\langle O_{1}\rangle)\) plane) as \(q\) and \(\zeta\) are increased. Not only \(\langle O_{1}\rangle\) but also the energy \(E\) shows a qualitatively similar behavior. This tendency can be qualitatively understood as an outcome of the balance between the gravitational attraction, scalar field pressure and the electric repulsion. When \(q\) is small, the electric repulsion is weak and then there is a critical mass (and \(\langle O_{1}\rangle\)) for a boson star beyond which the boson star cannot exist. When \(q\) is large, the electric repulsion becomes strong enough to sustain the boson star against the gravitational collapse, and correspondingly the mass and \(\langle O_{1}\rangle\) can become arbitrarily large. The grand potential of charged Robin boson stars always satisfy \(\Omega<0\), where thermal AdS has \(\Omega=0\). Therefore, when the charged Robin boson stars exist, they are always preferred over the thermal AdS. This feature is the same as that in the neutral case, in which the boson stars have the smaller free energy than the thermal AdS (see section 3.1 and figure 7(a)). ### Charged black holes Finally, we consider black holes with nontrivial charged scalar field with the Robin boundary conditions. We call these _hairy Robin black holes_. The phase space depends on the all four parameters \((\mu,T_{\rm H},\zeta,q)\). In figure 9, phase diagrams for \(q=1\) are shown for different \(\zeta\). In each figure, the blue line on the border between the RNAdS and hairy Robin black holes denotes the second-order phase transition below which the scalar hair forms. The red line is the first order Hawking-Page transition between hairy Robin black holes and charged Robin boson stars. The orange segment denotes the first order phase transition between the RNAdS and charged Robin boson stars. The black dashed line is plotted for reference of the Hawking-Page transition between the thermal AdS and RNAdS (11), although it is not physically dominant because it is superseded by the charged Robin boson star phase. Starting from a large value of \(\zeta\), we browse notable features in the phase structure by decreasing \(\zeta\). * \(\zeta>\zeta_{t}\simeq 0.6847\pi\): In figure 9(a) (see Eq. (10) for the definition of \(\zeta_{t}\)), neutral solutions (\(\mu=0\)) can have nontrivial scalar hair. Thermal AdS does not appear because its free energy is always higher than Robin boson stars when the latter exist as solutions. Hence, the phase diagram contains three phases: zero scalar RNAdS, hairy Robin black holes, and charged Robin boson stars. By decreasing the temperature, the RNAdS spontaneously grows the scalar hair, and then the hairy Robin black hole transitions to the charged Robin boson stars. This feature is common to all \(\mu\). * \(\zeta_{t}>\zeta>\zeta_{c}\simeq 0.6805\pi\): In figure 9(b), the phase structure for this parameter region is shown for \(\zeta/\pi=0.682\). When \(\zeta\) is decreased to \(\zeta_{t}\), the two phase transition lines (blue and red) first meet at \(\mu=0\). As shown in figure 7(b), \(\zeta=\zeta_{t}\) is bigger than \(\zeta=\zeta_{c}\) where the thermal AdS phase shows up. This means that, in \(\zeta<\zeta_{t}\), the phase transition from the RNAdS to charged Robin boson stars (orange line) appears. 
* \(\zeta_{c}>\zeta\gtrsim 0.24\pi\): In \(\zeta<\zeta_{c}\), the thermal AdS phase can be present as \(\mu\) is increased from \(0\) until the charged Robin boson stars branch from thermal AdS as discussed in figure 8. The phase diagram in this parameter region is shown in figure 9(c). The vertical green line is the second order phase transition between thermal AdS and charged Robin boson stars. * In figures 9(a)-9(c), the Hawking-Page transition (red line) will approach \(T_{\rm H}\to 0\) as \(\mu\) is increased. We were not able to compute until this limit due to tough numerics, but we can see that the transition line will go down towards \(T_{\rm H}\to 0\) for a wide Figure 9: Phase diagram for \(q=1\) and \(\zeta/\pi=0.7,0.682,0.6,0.239,0.2\). The dashed and solid black curves denote the Hawking-Page transition temperature \(T_{\rm HP}\) (Eq. 10) between the RN AdS black holes and the thermal AdS. When the grand potential of these two solutions are bigger than that of the charged Robin boson star, the corresponding part of this curve is irrelevant and does not represent a physical phase boundary, but we added it with a dashed line for reader’s convenience. parameter range (in \(\zeta/\pi\gtrsim 0.24\)). We also expect that the Hawking-Page transition should go to \(T_{\rm HP}\to 0\) before boson star solutions disappear at the upper limit in \(\mu\) for Robin boson stars with small \(q\) (discussed in section 3.2). * \(\zeta\simeq 0.24\pi\): When \(\zeta\) is decreased further, the Hawking-Page transition between hairy Robin black holes and charged Robin boson stars reaches zero temperature and disappears. For \(q=1\), this occurs in a small parameter window of \(\zeta\) near \(\zeta/\pi\simeq 0.24\). Figure 9(d) is the phase diagram for \(\zeta/\pi=0.239\). This has four phases, but the charged Robin boson stars and hairy Robin black holes are separated by the RNAdS, and correspondingly there is a small gap of \(\mu\) where the extremal RNAdS survives in the phase diagram at zero temperature. The hairy Robin black holes branch from the extremal RNAdS. * \(\zeta\lesssim 0.24\pi\): The charged Robin boson star phase then disappears when \(\zeta\) is decreased further. In figure 9(e), the phase diagram at \(\zeta/\pi=0.2\) is shown. While charged Robin boson stars also exist as solutions in this parameter region, their grand potential is always bigger than that of hairy Robin black holes, and hence they do not show up in the grand canonical phase diagram. When the coupling \(q\) is increased, the \(\zeta\) dependence of the phase structure can be different. * For \(q=\sqrt{2}\), the phase structures of figures 9(a), 9(b), and 9(c) are observed for \(\zeta>0\), but those of figures 9(d) and 9(e) are absent because no stable extremal RNAdS exist even for the Dirichlet boundary condition \(\zeta=0\). Instead, at \(\zeta=0\), a phase structure not shown here appears (see figure 7(a) in [34]). It contains three phases, where thermal AdS and hairy Robin BH are separated by the RN AdS BH reaching sufficiently low temperature. There, the phase of the charged boson stars also disappears because the onset is exactly on the \(T_{\rm H}=0\) axis [34]. * For \(q>\sqrt{2}\), the scalar hair grows at finite temperatures before extremality is reached, because all the extremal solutions with \(T_{\rm H}=0\) are unstable toward scalar hair formation when \(q>\sqrt{2}\), as explained in section 2.2. Therefore, the phase structures depicted in figures 9(d) and 9(e) are absent in \(q>\sqrt{2}\). 
There is a qualitative difference between the phase structures for the Robin boundary conditions and those for the Dirichlet boundary condition (see [34]): some of the phase structures in figure 9 are absent in the same system under the Dirichlet boundary condition. The structures of figures 9(a) and 9(b) do not exist for the Dirichlet boundary condition, because the thermal AdS phase should appear in the small-\(\mu\) region when \(\zeta<\zeta_{c}\), and in particular in the Dirichlet case (\(\zeta=0\)). In addition, the presence of the neutral Robin black hole phase (with \(\mu=0\)) for \(\zeta>\zeta_{t}\) observed in figure 9(a) is another unique feature of the Robin boundary condition. The structure of figure 9(c) is observed for the Dirichlet boundary condition with a gauge coupling \(q>\sqrt{2}\) (in our normalization) [34]. For the Robin boundary condition, however, this phase structure can be seen even for small \(q\) if \(\zeta\) is sufficiently large. The structure of figure 9(d) is not seen for the Dirichlet boundary condition because the charged boson star phase disappears at the same time as \(T_{\rm HP}\to 0\) at \(q=\sqrt{2}\) (see [34]). The structure of figure 9(e) is typical in \(q<\sqrt{2}\).

## 4 Conclusion

We considered charged boson stars and black holes in four-dimensional Einstein-Maxwell-complex scalar theory with the Robin boundary conditions for the charged scalar field in asymptotically global AdS spacetime. This setup is dual to a double trace deformation of the three-dimensional dual field theory on \(R\times S^{2}\) with a dimension 1 charged scalar operator. The current setup has the four-dimensional parameter space \((T_{\rm H},\mu,q,\zeta)\), and the consideration of the Robin boundary conditions offers the most general solutions in the four-dimensional Einstein-Maxwell-complex scalar theory. The phase structure and phase transitions are studied in the grand canonical ensemble. There are four phases, characterized by the presence or absence of the black hole horizon and of nontrivial scalar hair. There is an interplay between two kinds of instability driving the formation of charged scalar hair: one caused by the Robin boundary conditions, and the other by the chemical potential or the black hole charge. These introduce a richer phase structure compared with the case of the Dirichlet boundary condition, as explained in section 3. In this paper we considered the Robin boundary conditions for the scalar field. This type of boundary condition can also be imposed on vector and metric fields [11; 19; 20; 21]. It will be interesting to consider phases of gravitational solutions where the Robin boundary conditions are imposed on these different kinds of fields. Rather recently, the Robin boundary conditions have been utilized in various contexts, including holography and supergravity (see e.g. [53; 54; 55]). Our study would provide useful information for clarifying various properties in these cases, such as the thermodynamic phase structures and the dynamical (in)stabilities.

The authors thank Li Li, Keiju Murata and Matthew Roberts for useful discussions, and also thank the JGRG webinar series where this work was initiated. The work of T.H. was supported in part by JSPS KAKENHI Grant Numbers 19H01895, 20H05853, and 19K03876. The work of T.I. was supported in part by JSPS KAKENHI Grant Number 19K03871. The work of T.K. was supported in part by JSPS KAKENHI Grant Number 17H06360. T.K. was also supported in part by VILLUM FONDEN (grant no. 
37766), by the Danish Research Foundation, and under the European Union's H2020 ERC Advanced Grant "Black holes: gravitational engines of discovery" grant agreement no. Gravitas-101052587. The work of N.T. was supported in part by JSPS KAKENHI Grant Numbers 18K03623 and 21H05189. Holographic renormalization We carry out holographic renormalization in the asymptotically global AdS spacetime with the Robin boundary conditions (also called the mixed boundary conditions) [56]. We follow the calculations in [57], the application of which to complex scalar theory in global AdS is straightforward. We use the \(r\)-coordinate in calculation. The asymptotic solutions near the AdS boundary (27)-(30) take the form \[f(r) =1+\frac{\phi_{1}^{2}}{r^{2}}+\frac{f_{3}}{r^{3}}+\left(2\phi_{1 }^{4}+2\phi_{2}^{2}+\frac{a_{1}^{2}e^{\chi_{0}}}{2}\right)\frac{1}{r^{4}}+ \cdots, \tag{115}\] \[\chi(r) =\chi_{0}+\frac{\phi_{1}^{2}}{r^{2}}+\frac{8}{3}\frac{\phi_{1} \phi_{2}}{r^{3}}+\left(\frac{3}{2}\phi_{1}^{4}+2\phi_{2}^{2}-q^{2}a_{0}^{2} \phi_{1}^{2}e^{\chi_{0}}\right)\frac{1}{r^{4}}+\cdots,\] (116) \[\phi(r) =\frac{\phi_{1}}{r}+\frac{\phi_{2}}{r^{2}}+\frac{\phi_{1}}{2} \left(\phi_{1}^{2}-q^{2}a_{0}^{2}e^{\chi_{0}}\right)\frac{1}{r^{3}}+\cdots,\] (117) \[A_{t}(r) =a_{0}+\frac{a_{1}}{r}+\frac{q^{2}a_{0}\phi_{1}^{2}}{r^{2}}+ \frac{1}{6}\phi_{1}\left(4q^{2}a_{0}\phi_{2}+(2q^{2}-1)a_{1}\phi_{1}\right) \frac{1}{r^{3}}+\cdots. \tag{118}\] In the following, we assume that the scaling (40) has been applied so that \(\chi_{0}=0\). The action is regularized by introducing a cutoff surface at \(r=r_{\Lambda}\). Let \(M\) denote the regularized spacetime manifold defined in \(r\leq r_{\Lambda}\) and \(\partial M\) the cutoff surface at \(r=r_{\Lambda}\). The bulk action (1) accompanied by the Gibbons-Hawking term can be regularized as \[S_{\rm reg}=\frac{1}{8\pi G_{N}}\int_{M}\mathrm{d}^{4}x\sqrt{-g}\left(\frac{1} {2}\left(R-2\Lambda\right)-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|D\phi|^{2}-m^{2}| \phi|^{2}\right)+S_{\rm GH}, \tag{119}\] where \[S_{\rm GH}=\frac{1}{8\pi G_{N}}\int_{\partial M}\mathrm{d}^{3}x\sqrt{-\gamma}\,K \tag{120}\] with \(K\equiv K_{ij}\gamma^{ij}\) being the trace of the extrinsic curvature \(K_{ij}\) with respect to the induced metric \(\gamma_{ij}\) on \(\partial M\) (\(i,j\) run over the three-dimensional coordinates on \(\partial M\)). The extrinsic curvature is given by \[K_{ij}=\frac{1}{2}\delta_{i}^{\mu}\delta_{j}^{\nu}\left(\nabla_{\mu}n_{\nu}+ \nabla_{\nu}n_{\mu}\right), \tag{121}\] where \(n^{\mu}\) is an outward unit normal \(g_{\mu\nu}n^{\mu}n^{\nu}=1\). However, the "bare" action (119) diverges when the cutoff is simply removed by taking the limit of \(r_{\Lambda}\to\infty\). This divergence can be cancelled by counterterms \(S_{\rm ct}\). Including \(S_{\rm ct}\) formally, we can define a subtracted action that is finite in the limit \(r_{\Lambda}\to\infty\) as \[S_{\rm sub}=S_{\rm reg}+S_{\rm ct}. \tag{122}\] Then, removing the cutoff gives a renormalized action, \[S_{\rm ren}=\lim_{r_{\Lambda}\to\infty}S_{\rm sub}. \tag{123}\] The form of \(S_{\rm ct}\) depends on the boundary conditions at the AdS boundary. We will discuss the cases of the Dirichlet theory, Neumann theory, and double trace deformation in turn. Dirichlet theoryWhen \(\zeta=0\), our Einstein-Maxwell complex scalar system is treated as the Dirichlet theory that has a dimension 2 operator \(O_{2}\) in the dual field theory on the AdS boundary. This is also called the standard quantization. 
The counterterms for this case can be given by [57; 58; 59] \[S_{\rm ct}=-\frac{1}{8\pi G_{N}}\int_{\partial M}{\rm d}^{3}x\sqrt{-\gamma} \left[2+\frac{R_{\gamma}}{2}+\phi^{2}\right], \tag{111}\] where \(R_{\gamma}\) is the Ricci scalar for \(\gamma_{ij}\), and we ignored derivative terms of the scalar field that do not contribute in our spherically symmetric static solutions. With these counterterms, let \(S_{\rm ren}^{D}\) denote the renormalized action for the Dirichlet theory. The expectation values of field theory operators can be obtained through variation as \[\delta S_{\rm ren}^{D}=\int{\rm d}^{3}x\sqrt{-h}\left(\frac{1}{2}\langle T^{ij }\rangle\delta h_{ij}+\langle J^{i}\rangle\delta\Psi_{i}+\langle O_{2}\rangle \delta\Phi_{D}\right), \tag{112}\] where \(\Phi_{D}=\sqrt{2}\phi_{1}\) is the source of the scalar operator, \(\Psi_{i}\) denotes that of the gauge field, and \(h_{ij}\) are the metric components of the boundary \(R\times S^{2}\). For the gauge field, we turn on the chemical potential \(\Psi_{t}=\mu=a_{0}\). The boundary stress energy tensor can be practically calculated as follows. From the subtracted action, the stress energy tensor on the cutoff surface can be obtained as \[(T_{\gamma})_{ij}=-\frac{2}{\sqrt{-\gamma}}\frac{\delta S_{\rm sub}^{D}}{ \delta\gamma^{ij}}=\frac{1}{8\pi G_{N}}\left(-K_{ij}+K\gamma_{ij}-2\gamma_{ij} +(G_{\gamma})_{ij}-\phi^{2}\gamma_{ij}\right), \tag{113}\] where \((G_{\gamma})_{ij}=(R_{\gamma})_{ij}-\frac{1}{2}R_{\gamma}\gamma_{ij}\) is the Einstein tensor for the induced metric. This scales as \((T_{\gamma})_{ij}\sim 1/r_{\Lambda}\) because \(\gamma^{ij}\sim 1/r_{\Lambda}^{2}\) and \(\sqrt{-\gamma}\sim r_{\Lambda}^{3}\). Hence, by switching from \(\gamma_{ij}\) to \(h_{ij}\), the expectation value of the boundary stress energy tensor (112) reads \[\langle T_{ij}\rangle=\lim_{r_{\Lambda}\to\infty}r_{\Lambda}(T_{\gamma})_{ij}. \tag{114}\] Explicitly, the components are given by \[8\pi G_{N}\langle T_{tt}\rangle =-f_{3}+2\phi_{1}\phi_{2}, \tag{115}\] \[8\pi G_{N}\langle T_{\theta\theta}\rangle =-\frac{f_{3}}{2}+2\phi_{1}\phi_{2},\] (116) \[8\pi G_{N}\langle T_{\psi\psi}\rangle =\sin^{2}\theta\left(-\frac{f_{3}}{2}+2\phi_{1}\phi_{2}\right), \tag{117}\] where \((\theta,\psi)\) denote the coordinates on \(S^{2}\) introduced as \({\rm d}\Omega_{2}^{2}={\rm d}\theta^{2}+\sin^{2}\!\theta{\rm d}\psi^{2}\). From the stress energy tensor, the total energy (also called the total mass) of the Dirichlet theory is expressed as \[\mathcal{E}_{D}\equiv 8\pi G_{N}\int{\rm d}\Omega_{2}\langle T_{tt}\rangle=4\pi(- f_{3}+2\phi_{1}\phi_{2}), \tag{118}\] where \(8\pi G_{N}\) is included in the definition of the LHS. The expectation values for the matter fields are \[8\pi G_{N}\langle J^{t}\rangle=-a_{1},\quad 8\pi G_{N}\langle O_{2}\rangle= \sqrt{2}\phi_{2}. \tag{119}\] These quantities are the densities per solid angle. The total charge is given by \[\mathcal{Q}\equiv 4\pi\cdot 8\pi G_{N}\langle J^{t}\rangle=-4\pi a_{1}. \tag{111}\] Similarly, the scalar expectation value integrated over the sphere is \[\langle\mathcal{O}_{2}\rangle\equiv 4\pi\cdot 8\pi G_{N}\langle O_{2}\rangle=4\pi \sqrt{2}\,\phi_{2}. \tag{112}\] The trace of the stress energy tensor satisfies \[\langle T^{i}{}_{i}\rangle=\frac{2\phi_{1}\phi_{2}}{8\pi G_{N}}=\Phi_{D} \langle O_{2}\rangle. \tag{113}\] If both \(\phi_{1}\) and \(\phi_{2}\) are nonzero, the theory that gives the variation (110) can be interpreted as Dirichlet theory in the presence of a nonzero source \(\Phi_{D}\). 
The nonzero trace (113) then indicates that the conformal symmetry is explicitly broken by the source. When the source is absent \(\Phi_{D}=0\), i.e. \(\zeta=0\), the expression of the energy (110) reduces to \(\mathcal{E}_{D}|_{\phi_{1}=0}=-4\pi f_{3}\). We can also calculate the finite Euclidean on-shell action when the counterterms are added.10 Using the equations of motion, we obtain (see section 3.4 in [8]) Footnote 10: We thank Li Li for discussions on this calculation. \[\frac{1}{2}\left(R-2\Lambda\right)+\mathcal{L}=-\frac{1}{2}\left(G^{t}{}_{t}+ G^{r}{}_{r}\right)=-\frac{1}{\sqrt{-g}}\left(\frac{(1+r^{2})f\sqrt{-g}}{r} \right)^{\prime}+\frac{1}{r^{2}}, \tag{114}\] where the last term, which is not a total derivative, is due to the spherical topology of the global AdS. By this relation, the bulk action (1) is simplified to \[S_{\rm bulk} =\frac{4\pi}{8\pi G_{N}}\int\mathrm{d}t\mathrm{d}r\left[\left(-r (1+r^{2})fe^{-\frac{\chi}{2}}\right)^{\prime}+e^{-\frac{\chi}{2}}\right]\] \[=\frac{4\pi}{8\pi G_{N}}\int\mathrm{d}t\left[-r_{\Lambda}^{3}- \left(1+\frac{\phi_{1}^{2}}{2}\right)r_{\Lambda}-f_{3}+\frac{4\phi_{1}\phi_{2 }}{3}+O\left(\frac{1}{r_{\Lambda}}\right)+\int_{r_{h}}^{r_{\Lambda}}\mathrm{ d}r\,e^{-\frac{\chi}{2}}\right], \tag{115}\] where (109)-(110) were used. This diverges for \(r_{\Lambda}\to\infty\). The divergence can be cancelled by adding the counterterms as well as the Gibbons-Hawking term, \[S_{\rm GH}+S_{\rm ct}=\frac{4\pi}{8\pi G_{N}}\int\mathrm{d}t\left[r_{\Lambda}^ {3}+\frac{\phi_{1}^{2}}{2}r_{\Lambda}+\frac{f_{3}}{2}+\frac{2\phi_{1}\phi_{2}} {3}+O\left(\frac{1}{r_{\Lambda}}\right)\right]. \tag{116}\] Combining (115) and (116), we obtain the finite Lorentzian renormalized on-shell action, \[S_{L}=S_{\rm ren}^{D}=\lim_{r_{\Lambda}\to\infty}\left(S_{\rm bulk}+S_{\rm GH }+S_{\rm ct}\right). \tag{117}\] The Euclidean on-shell action \(S_{E}\) can be obtained by replacing \(\int\mathrm{d}t\to-\int_{0}^{1/T_{\rm H}}d\tau\) where \(\tau\) denotes the Euclidean time. It is related to the grand potential for the Dirichlet theory as \(\Omega_{D}\equiv 8\pi G_{N}T_{\rm H}S_{E}\). The expression of the grand potential in terms of the bulk integral is hence given by \[\Omega_{D}=4\pi\int_{r_{h}}^{\infty}\mathrm{d}r(1-e^{-\frac{\chi}{2}})+4\pi \left(\frac{f_{3}}{2}-2\phi_{1}\phi_{2}+r_{h}\right), \tag{118}\] where we used \(r_{\Lambda}=\int_{r_{h}}^{r_{\Lambda}}\mathrm{d}r+r_{h}\) to rewrite the cutoff dependence for numerical evaluation of the \(r\)-integral. Neumann theoryFor \(\zeta\neq 0\), the bulk theory is considered to be dual to the boundary field theory with a dimension 1 scalar operator \(O_{1}\). This is known as the alternative quantization. The case of \(\zeta=\pi/2\) is the Neumann theory. It turns out that the source of the scalar operator is identified as \(\Phi_{N}=-\sqrt{2}\phi_{2}\), and the expectation value of the scalar operator is \(8\pi G_{N}\langle O_{1}\rangle=\sqrt{2}\phi_{1}\). The renormalized action is modified from the Dirichlet theory as follows. The Neumann theory is the Legendre transform of the Dirichlet theory [16], \[S_{\rm ren}^{N}=S_{\rm ren}^{D}+S_{\rm LT}, \tag{101}\] where \(N\) denotes the Neumann theory, and \[S_{\rm LT}=-\frac{2}{8\pi G_{N}}\int\mathrm{d}^{3}x\sqrt{-h}\,\phi_{1}\phi_{2}. 
\tag{102}\] The variation with respect to the scalar field gives \[\delta_{\phi}S_{\rm LT}=-\frac{2}{8\pi G_{N}}\int\mathrm{d}^{3}x \sqrt{-h}\left(\phi_{2}\delta\phi_{1}+\phi_{1}\delta\phi_{2}\right)=\int \mathrm{d}^{3}x\sqrt{-h}\left(-\langle O_{2}\rangle\delta\Phi_{D}+\langle O_{ 1}\rangle\delta\Phi_{N}\right). \tag{103}\] The variation of the renormalized Neumann action hence takes the form \[\delta S_{\rm ren}^{N}=\int\mathrm{d}^{3}x\sqrt{-h}\left(\frac{1}{2}\langle T ^{ij}\rangle\delta h_{ij}+\langle J^{i}\rangle\delta\Psi_{i}+\langle O_{1} \rangle\delta\Phi_{N}\right). \tag{104}\] In the above equation, \(\langle T^{ij}\rangle\delta h_{ij}\) contains the contribution from the variation of (102) by \(h_{ij}\), which shifts (100)-(103). The stress energy tensor for the Neumann theory is thus given by \[8\pi G_{N}\langle T_{tt}\rangle =-f_{3}+4\phi_{1}\phi_{2}, \tag{105}\] \[8\pi G_{N}\langle T_{\theta\theta}\rangle =-\frac{f_{3}}{2},\] (106) \[8\pi G_{N}\langle T_{\psi\psi}\rangle =-\sin^{2}\theta\,\frac{f_{3}}{2}. \tag{107}\] The trace of the stress energy tensor is \[\langle T^{i}{}_{i}\rangle=-\frac{4\phi_{1}\phi_{2}}{8\pi G_{N}}=2\Phi_{N} \langle O_{1}\rangle. \tag{108}\] Correspondingly, the total energy is \[\mathcal{E}_{N}=4\pi(-f_{3}+4\phi_{1}\phi_{2})=\mathcal{E}_{D}+\mathcal{E}_{ \rm LT}, \tag{109}\] and so is the grand potential, \(\Omega_{N}=\Omega_{D}+\Omega_{\rm LT}\), where \(\mathcal{E}_{\rm LT}=\Omega_{\rm LT}=8\pi\phi_{1}\phi_{2}\). When the source is absent \(\Phi_{N}=0\), i.e. \(\zeta=\pi/2\), the energy is given by \(\mathcal{E}_{N}|_{\phi_{2}=0}=-4f_{3}\). Double trace deformationFor \(\zeta\neq 0\) nor \(\pi/2\), the theory is interpreted as double trace deformation of the Neumann theory. For this, we need to include additional finite boundary terms in order for consistent variation with respect to the source in the deformed theory. We give the source in the form \[\Phi_{R}=-\sqrt{2}\left(\phi_{2}+\alpha\phi_{1}\right), \tag{101}\] where \(\alpha\) is a real parameter. The undeformed Neumann theory corresponds to \(\alpha=0\). For this source, we need an additional finite boundary term, \[S_{\rm Dtr}=-\frac{\alpha}{8\pi G_{N}}\int\mathrm{d}^{3}x\sqrt{-h}\,\phi_{1}^{ 2}. \tag{102}\] This term corresponds to the relevant double trace deformation of the dual field theory. The renormalized action is modified to \[S_{\rm ren}^{R}=S_{\rm ren}^{N}+S_{\rm Dtr}=S_{\rm ren}^{D}+S_{\rm LT}+S_{\rm Dtr}. \tag{103}\] The renormalized action equipped with the finite term \(S_{\rm Dtr}\) gives the correct variation with respect to the source \(\Phi_{R}\). The scalar field variation of (103) is \[\delta_{\phi}S_{\rm ren}^{R}=\frac{1}{8\pi G_{N}}\int\mathrm{d}^{3}x\sqrt{-h} \left(-2\phi_{1}\delta\phi_{2}-2\alpha\phi_{1}\delta\phi_{1}\right)=\int \mathrm{d}^{3}x\sqrt{-h}\langle O_{1}\rangle\delta\Phi_{R}. \tag{104}\] The full variation of \(S_{\rm ren}^{R}\) takes the form \[\delta S_{\rm ren}^{R}=\int\mathrm{d}^{3}x\sqrt{-h}\left(\frac{1}{2}\langle T^ {ij}\rangle\delta h_{ij}+\langle J^{i}\rangle\delta\Psi_{i}+\langle O_{1} \rangle\delta\Phi_{R}\right). \tag{105}\] The above stress energy tensor \(\langle T^{ij}\rangle\) contains finite contribution from the variation of \(S_{\rm Dtr}\) with respect to \(h_{ij}\), shifting the expressions of the Neumann theory (100)-(101). 
The expectation values in (105) are given by \[8\pi G_{N}\langle T_{tt}\rangle =-f_{3}+4\phi_{1}\phi_{2}+\alpha\phi_{1}^{2}, \tag{106}\] \[8\pi G_{N}\langle T_{\theta\theta}\rangle =-\frac{f_{3}}{2}-\alpha\phi_{1}^{2},\] (107) \[8\pi G_{N}\langle T_{\psi\psi}\rangle =\sin^{2}\theta\left(-\frac{f_{3}}{2}-\alpha\phi_{1}^{2}\right),\] (108) \[8\pi G_{N}\langle J^{t}\rangle =-a_{1},\] (109) \[8\pi G_{N}\langle O_{1}\rangle =\sqrt{2}\phi_{1}. \tag{110}\] In our setup, we consider the Robin boundary conditions (18) as the double trace deformation with vanishing source \(\Phi_{R}=0\). From (18), we choose \(\alpha=-\cot\zeta\), and the condition for the source is reduced to \[\Phi_{R}=-\sqrt{2}\left(\phi_{2}-\phi_{1}\cot\zeta\right)=0. \tag{111}\] Under this condition, the components of the stress energy tensor (A.41)-(A.43) become \[8\pi G_{N}\langle T_{tt}\rangle =-f_{3}+4\phi_{1}\phi_{2}-\phi_{1}^{2}\cot\zeta,\] (A.47) \[8\pi G_{N}\langle T_{\theta\theta}\rangle =-\frac{f_{3}}{2}+\phi_{1}^{2}\cot\zeta,\] (A.48) \[8\pi G_{N}\langle T_{\psi\psi}\rangle =\sin^{2}\theta\left(-\frac{f_{3}}{2}+\phi_{1}^{2}\cot\zeta\right).\] (A.49) When \(\zeta=\pi/2\), these expressions reduce to those for the Neumann boundary conditions (A.31)-(A.33). The total energy is given by \[\mathcal{E}_{R}\equiv 8\pi G_{N}\int\mathrm{d}\Omega_{2}\langle T_{tt}\rangle =4\pi(-f_{3}+4\phi_{1}\phi_{2}-\phi_{1}^{2}\cot\zeta).\] (A.50) This can be decomposed into individual contributions as \[\mathcal{E}_{R}=\mathcal{E}_{N}+\mathcal{E}_{\mathrm{Dtr}}=\mathcal{E}_{D}+ \mathcal{E}_{\mathrm{LT}}+\mathcal{E}_{\mathrm{Dtr}},\] (A.51) where \(\mathcal{E}_{\mathrm{Dtr}}=-4\pi\phi_{1}^{2}\cot\zeta\) is the contribution of \(S_{\mathrm{Dtr}}\). Among these, \(\mathcal{E}_{\mathrm{LT}}+\mathcal{E}_{\mathrm{Dtr}}\) is interpreted as the energy stored on the AdS boundary.11 Note that \(\mathcal{E}_{\mathrm{LT}}+\mathcal{E}_{\mathrm{Dtr}}=0\) when \(\zeta=\pi/2\), while it is not when \(\zeta\neq\pi/2\) (and \(\zeta\neq 0\), of course). The total charge and scalar expectation value are given by Footnote 11: See also [60] for the relationship between the Robin boundary condition and the modification of the potential energy near a boundary, which may suggest that the parameter \(\zeta\) for the Robin boundary condition controls the amount of the energy stored in the near-boundary region. \[\mathcal{Q}\equiv-4\pi a_{1},\quad\langle\mathcal{O}_{1}\rangle\equiv 4\pi \sqrt{2}\,\phi_{1}.\] (A.52) Using \(\cot\zeta=\phi_{2}/\phi_{1}\), we can rewrite (A.47)-(A.49) as \[8\pi G_{N}\langle T_{tt}\rangle =-f_{3}+3\phi_{1}\phi_{2},\] (A.53) \[8\pi G_{N}\langle T_{\theta\theta}\rangle =-\frac{f_{3}}{2}+\phi_{1}\phi_{2},\] (A.54) \[8\pi G_{N}\langle T_{\psi\psi}\rangle =\sin^{2}\theta\left(-\frac{f_{3}}{2}+\phi_{1}\phi_{2}\right).\] (A.55) The total energy is expressed as \[\mathcal{E}_{R}=4\pi(-f_{3}+3\phi_{1}\phi_{2}).\] (A.56) The trace of the energy momentum tensor can be written in the form \[8\pi G_{N}\langle T^{i}{}_{i}\rangle=-\phi_{1}\phi_{2}=-\cot\zeta\,\phi_{1}^{ 2}=-\frac{\cot\zeta}{2}(8\pi G_{N})^{2}\langle O_{1}\rangle^{2}.\] (A.57) This implies the spontaneous breaking of the conformal symmetry in the double trace deformed theory when the scalar operator acquires an expectation value. 
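As a quick numerical illustration of the relations above, the following is a minimal sketch (the function and variable names are illustrative rather than taken from this work, and the normalisation of this appendix is assumed, with densities carrying a factor of \(8\pi G_{N}\)). It evaluates the Robin-theory observables from the asymptotic coefficients and checks that the two forms of the energy, (A.50) and (A.56), and of the trace, (A.57), agree once the Robin condition \(\phi_{2}=\phi_{1}\cot\zeta\) is imposed.

```python
import numpy as np

def robin_observables(f3, phi1, a1, zeta):
    """Boundary observables for the sourceless Robin condition (phi2 = phi1 * cot(zeta))."""
    cot = 1.0 / np.tan(zeta)
    phi2 = phi1 * cot                                     # Robin condition, Phi_R = 0
    E_R      = 4*np.pi*(-f3 + 4*phi1*phi2 - cot*phi1**2)  # Eq. (A.50)
    E_R_alt  = 4*np.pi*(-f3 + 3*phi1*phi2)                # Eq. (A.56)
    Q        = -4*np.pi*a1                                # Eq. (A.52)
    O1       = 4*np.pi*np.sqrt(2)*phi1                    # Eq. (A.52)
    trace     = -phi1*phi2                                # 8*pi*G_N <T^i_i>, Eq. (A.57)
    trace_alt = -0.5*cot*(np.sqrt(2)*phi1)**2             # same trace, written via <O_1>
    return E_R, E_R_alt, Q, O1, trace, trace_alt

# arbitrary asymptotic data, purely for the consistency check
E_R, E_R_alt, Q, O1, tr, tr_alt = robin_observables(f3=-2.3, phi1=0.7, a1=-1.1, zeta=0.6*np.pi)
assert np.isclose(E_R, E_R_alt) and np.isclose(tr, tr_alt)
print(E_R, Q, O1)
```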
The grand potential of the double trace deformed theory is also shifted from the Dirichlet and Neumann theories by a finite term as \[\Omega_{R}=\Omega_{N}+\Omega_{\mathrm{Dtr}}=\Omega_{D}+\Omega_{\mathrm{LT}}+ \Omega_{\mathrm{Dtr}},\] (A.58) where \[\Omega_{\rm Dtr}=\mathcal{E}_{\rm Dtr}=-4\pi\cot\zeta\,\phi_{1}^{2}=-4\pi\phi_{1} \phi_{2}.\] (A.59) The expression of the grand potential in terms of the bulk integral is shifted from (A.26) as \[\Omega_{R}=4\pi\int_{r_{h}}^{\infty}{\rm d}r(1-e^{-\frac{\chi}{2}})+4\pi\left( \frac{f_{3}}{2}-\phi_{1}\phi_{2}+r_{h}\right).\] (A.60) RNAdSFor the RNAdS black holes (Eqs. (2.4)-(2.6)), we have (the label of \(D,N,R\) is removed because the scalar field is zero) \[\mathcal{E}=4\pi r_{h}\left(1+r_{h}^{2}+\frac{\mu^{2}}{2}\right).\] (A.61) The grand potential is \[\Omega=2\pi r_{h}\left(1-r_{h}^{2}-\frac{\mu^{2}}{2}\right).\] (A.62) In thermal AdS, \(r_{h}=0\), we obtain \(\mathcal{E}=\Omega=0\). The Hawking-Page transition between the RNAdS and thermal AdS phases (2.11) occurs when the black hole reaches \(\Omega=0\). The grand potential of the RNAdS (A.62) is \(\Omega>0\) for \(r_{h}<r_{\rm HP}\) and \(\Omega<0\) for \(r_{h}>r_{\rm HP}\) (2.11). For the RNAdS, we can analytically check that (2.42) is satisfied, where \(\mathcal{Q}=4\pi Q=4\pi\mu r_{h}\) for the RNAdS. ## Appendix B First law of thermodynamics To check numerical results, we wish to evaluate the first law of thermodynamics/black hole mechanics. If we regard solutions with nonzero \(\phi_{1}\) and \(\phi_{2}\) as the Dirichlet theory with explicit scalar source, the generalization of the first law of thermodynamics to the presence of nonzero scalar source and expectation values is given by12 Footnote 12: On general grounds, this first law in the presence of a nonzero scalar source follows from the fact that the grand potential is the generating function for responses of sources. In [61], this was discussed for the holographic superconductor model same as this paper except in the probe limit with the planar AdS boundary. Recently in [62], this scalar source contribution to the first law was derived by using Wald’s formalism [63; 64]. See also [65; 66; 67] for earlier discussions. \[{\rm d}\mathcal{E}_{D}=T_{\rm H}{\rm d}\mathcal{S}_{\rm BH}+\mu{\rm d} \mathcal{Q}-\langle\mathcal{O}_{2}\rangle{\rm d}\Phi_{D}.\] (B.1) By the Legendre transform (A.35), this can be rewritten for the Neumann theory as \[{\rm d}\mathcal{E}_{N}=T_{\rm H}{\rm d}\mathcal{S}_{\rm BH}+\mu{\rm d} \mathcal{Q}-\langle\mathcal{O}_{1}\rangle{\rm d}\Phi_{N}.\] (B.2) Adding the double trace deformation (A.37), we can rewrite this for the double trace deformed theory. In this step, we can treat also \(\alpha\) (defined by Eq. (A.36)) as an independent variable. By doing this, we can compare solutions with different values of \(\alpha\). We obtain \[{\rm d}\mathcal{E}_{R}=T_{\rm H}{\rm d}\mathcal{S}_{\rm BH}+\mu{\rm d} \mathcal{Q}-\langle\mathcal{O}_{1}\rangle{\rm d}\Phi_{R}-\frac{1}{8\pi}\langle \mathcal{O}_{1}\rangle^{2}{\rm d}\alpha,\] (B.3) where \({\rm d}\Phi_{R}=-\sqrt{2}({\rm d}\phi_{2}+\alpha{\rm d}\phi_{1}+\phi_{1}{\rm d}\alpha)\). The coefficient of the last term \(1/(8\pi)=1/(2\cdot 4\pi)\) is due to the normalization of \(\langle\mathcal{O}_{1}\rangle\) (100). When we impose the Robin boundary conditions, i.e. 
\(\alpha=-\cot\zeta\) and \(\Phi_{R}=0\) (114), this equation reduces to \[{\rm d}\mathcal{E}_{R}=T_{\rm H}{\rm d}\mathcal{S}_{\rm BH}+\mu{\rm d} \mathcal{Q}+\frac{1}{8\pi}\langle\mathcal{O}_{1}\rangle^{2}{\rm d}(\cot\zeta). \tag{115}\] If we consider the family of solutions for fixed \(\zeta\) (together with no source \(\Phi_{R}=0\)), the last term drops, and we obtain the first law of thermodynamics that contains the variation only of thermodynamic variables as \[{\rm d}\mathcal{E}_{R}=T_{\rm H}{\rm d}\mathcal{S}_{\rm BH}+\mu{\rm d} \mathcal{Q}. \tag{116}\] However, if the last term is taken into account, we can use (115) as a relation useful to compare solutions where \(\zeta\) varies in general. We can use any of the above equations to check numerical results because these are rewriting of the same relation. ## Appendix C Comparison of entropy in microcanonical ensemble In the main text, we have seen the phase structures in the grand canonical ensemble. We can also consider the microcanonical ensemble where the total energy (mass) \(\mathcal{E}\) and charge \(\mathcal{Q}\) are treated as independent variables. In this ensemble, we can argue the fate of an unstable RNAdS black hole by comparing the entropies between solutions with and without scalar at the same \((\mathcal{E},\mathcal{Q})\) (see also Dirichlet boundary condition [35; 36]). In figure 10, we show the entropies of the two kinds of the solutions in the \((\mathcal{Q},\mathcal{S}_{\rm BH})\) plane for \(\mathcal{E}=10\), \(\zeta/\pi=0.6\) and \(q=1\). The black curve is the entropy of the RNAdS with \(\mathcal{E}=10\). The extremal RNAdS is marked by the red dot, and the onset of instability for the branching of the hairy black holes is shown by the blue dot. When the RNAdS and hairy Robin black hole both exist at the same parameters \((\mathcal{E},\mathcal{Q},\zeta,q)\), the latter has the higher entropy than the former. We also examined other values of the parameters \((\mathcal{E},\mathcal{Q},\zeta,q)\) and Figure 10: Comparison of entropies in the microcanonical ensemble for \(\mathcal{E}=10\), \(\zeta/\pi=0.6\), and \(q=1\). The end point of the blue line is at \((\mathcal{S}_{\rm BH},\mathcal{Q})=(0,9.955)\), and it corresponds to a charged Robin boson star. found that hairy black holes have higher entropy than RNAdS when solutions overlap (see also the same comparison in the Dirichlet boundary condition [35; 36]). This implies that an unstable RNAdS can dynamically evolve into a hairy black hole in the microcanonical ensemble when it is perturbed and nonlinear time evolution is considered. In figure 10, the zero entropy limit of the hairy Robin black hole is the zero size limit \(r_{h}\to 0\) with diverging temperature \(T_{\rm H}\to\infty\). The profile of the field variables \((f,\chi,\phi,A_{t})\) approaches that of a charged Robin boson star.
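As a sanity check of the RNAdS expressions (A.61)-(A.62) and of the first law, the short sketch below verifies them numerically. It assumes the standard global RNAdS temperature \(T_{\rm H}=(1+3r_{h}^{2}-\mu^{2}/2)/(4\pi r_{h})\) and entropy \(\mathcal{S}_{\rm BH}=8\pi^{2}r_{h}^{2}\) in the rescaled units used here (these follow from the RNAdS solution (2.4)-(2.6), which is not reproduced in this appendix), and it reads (2.42) as the relation \(\Omega=\mathcal{E}-T_{\rm H}\mathcal{S}_{\rm BH}-\mu\mathcal{Q}\); both of these are assumptions on our part rather than quotations from the text above.

```python
import numpy as np

# Assumed forms (not quoted above): global RNAdS temperature and entropy in the
# 8*pi*G_N-rescaled units, with S_BH proportional to the horizon area.
T_H  = lambda rh, mu: (1 + 3*rh**2 - mu**2/2) / (4*np.pi*rh)
S_BH = lambda rh: 8*np.pi**2 * rh**2

E     = lambda rh, mu: 4*np.pi*rh*(1 + rh**2 + mu**2/2)   # Eq. (A.61)
Omega = lambda rh, mu: 2*np.pi*rh*(1 - rh**2 - mu**2/2)   # Eq. (A.62)
Q     = lambda rh, mu: 4*np.pi*mu*rh                      # RNAdS charge

mu, rh = 1.0, np.linspace(0.5, 2.0, 2001)

# grand-potential relation Omega = E - T_H*S_BH - mu*Q
assert np.allclose(Omega(rh, mu), E(rh, mu) - T_H(rh, mu)*S_BH(rh) - mu*Q(rh, mu))

# first law at fixed mu, dE = T_H dS_BH + mu dQ, checked by finite differences
dE  = np.gradient(E(rh, mu), rh, edge_order=2)
rhs = T_H(rh, mu)*np.gradient(S_BH(rh), rh, edge_order=2) + mu*np.gradient(Q(rh, mu), rh, edge_order=2)
print(np.max(np.abs(dE - rhs)))   # vanishes up to the finite-difference error
```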
2306.03066
Of Mice and Mates: Automated Classification and Modelling of Mouse Behaviour in Groups using a Single Model across Cages
Behavioural experiments often happen in specialised arenas, but this may confound the analysis. To address this issue, we provide tools to study mice in the home-cage environment, equipping biologists with the possibility to capture the temporal aspect of the individual's behaviour and model the interaction and interdependence between cage-mates with minimal human intervention. Our main contribution is the novel Group Behaviour Model (GBM) which summarises the joint behaviour of groups of mice across cages, using a permutation matrix to match the mouse identities in each cage to the model. In support of the above, we also (a) developed the Activity Labelling Module (ALM) to automatically classify mouse behaviour from video, and (b) released two datasets, ABODe for training behaviour classifiers and IMADGE for modelling behaviour.
Michael P. J. Camilleri, Rasneer S. Bains, Christopher K. I. Williams
2023-06-05T17:43:50Z
http://arxiv.org/abs/2306.03066v2
Of Mice and Mates: Automated Classification and Modelling of Mouse Behaviour in Groups using a Single Model across Cages ###### Abstract Behavioural experiments often happen in specialised arenas, but this may confound the analysis. To address this issue, we provide tools to study mice in the homecage environment, equipping biologists with the possibility to capture the temporal aspect of the individual's behaviour and model the interaction and interdependence between cage-mates with minimal human intervention. We develop the ALM to automatically classify mouse behaviour from video, and a novel GBM for summarising their joint behaviour across cages, using a permutation matrix to match the mouse identities in each cage to the model. We also release two datasets, ABODe for training behaviour classifiers and IMADGE for modelling behaviour. ## 1 Introduction Understanding behaviour is a key aspect of biology, psychology and social science, e.g. for studying the effects of treatments [1], the impact of social factors [2] or the link with genetics [3]. Biologists often turn to model organisms as stand-ins, of which mice are a popular example, on account of their similarity in genetics, anatomy and physiology [4]. Traditionally, biological studies on mice have taken place in carefully controlled experimental conditions [4], in which individuals are removed from their home-cage, introduced into a specific arena and their response to stimuli (e.g. other mice) investigated: see e.g. [5; 6; 7; 8; 9; 10; 11]. This is attractive because: (a) it presents a controlled stimuli-response scenario that can be readily quantified [12], and (b) it lends itself easier to automated means of behaviour quantification (e.g. through top-mounted cameras in a clutter-free environment [10; 5; 8; 9]). The downside of such'sterile' environments is that they fail to take into account all the nuances in their behaviour [13]. Such stimuli-response scenarios presume a simple forward process of perception-action which is an over-simplification of their agency [13]. Finally, mice are highly social creatures, and isolating them for specific experiments is stressful and may confound the analysis [14; 15]. In this work, we tackle the problem of studying mice in the home-cage, giving biologists tools to analyse the temporal aspect of an individual's behaviour and model the interaction between cage-mates -- while minimising disruption due to human intervention. Our contributions are: (a) a novel Global Behaviour Model (GBM) for detecting patterns of behaviour in a group setting across cages, (b) the Activity Labelling Module (ALM), an automated pipeline for inferring mouse behaviours in the home-cage from video, and (c) two datasets, ABODe for automated activity classification and IMADGE for analysis of mouse behaviours, both of which we make publicly available. In what follows, we introduce the reader to the relevant literature in Sec. 2, detail our methods in Sec. 3, and describe our datasets in Sec. 4 and document the experiments and results in Sec. 5. ## 2 Related Work ### Experimental Setups Animal behaviour has typically been studied over short periods in specially designated arenas (e.g. [5, 7, 6]) and under specific stimulus-response conditions [8]. This simplifies data collection, but may impact behaviour [16] and is not suited to the kind of long-term studies in which we are interested. Instead, newer research uses either an enriched cage [11, 17, 18, 19] or, as in our case, the home-cage itself [14, 20]. 
Obviously this generates greater challenges for the automation of the analysis, and indeed, none of the systems we surveyed perform _automated behaviour classification_ for _individual_ mice in a _group-housed_ setting. As relates number of observed individuals, single-mice experiments are often preferred as they are easier to phenotype and control [9, 11, 17, 19]. However, mice are highly social creatures and isolating them affects their behaviour [15], as does handling (often requiring lengthy adjustment periods). Obviously, when modelling social dynamics, the observations must enforce include multiple individuals. Despite this, there are no automated systems that consider the behaviour of each individual as we do. Most research is interested in the behaviour of the group as a whole [7, 21, 22, 23, 24], which circumvents the need to identify the individuals. Carola et al. [25] do model a group setting, but focus on the mother only and how it relates to its litter: similarly, the social interaction test [7, 26] looks at the social dynamics, but only from the point of view of a resident/intruder and in a controlled setting. ### Automated Behaviour Classification Classifying animal behaviour has lagged behind that of humans, with even recent work using manual labels [6, 25, 27]: even automated methods often require heavy data engineering [28, 7, 24, 11]. Animal behaviour inference tends to be harder because human actions are more recognisable [18], videos are usually less cluttered [29] and most challenges in the human domain focus on classifying short videos rather than long-running recordings as in animal observation [24]. Another factor is the limited number of publicly available animal observation datasets. The few that are accessible are not relevant to our setup: CRIM13 [23] involves only single mice, MARS [26] considers only short snippets, and others like RatSI [30] and MouseAcademy [8] use a top-mounted camera in an open field environment (compared to our side-view recordings in an enriched home cage). We hope, by releasing ABODE, to bridge this gap. ### Modelling Mouse Behaviour The most common form of behaviour analysis involves reporting summary statistics: e.g. of the activity levels [31], the total duration in each behaviour [20] or the number of bouts [26], effectively throwing away the temporal information. Even where temporal models are used as in [7], this is purely as an aid to the behaviour classification with statistics being still reported in terms of total duration in each state (behaviour). This approach provides an incomplete picture, and one that may miss subtle differences [32] between individuals/groups. Some research output does report ethograms of the activities/behaviours through time [3, 26, 33] (and Bains et al. [14] in particular model this through sinusoidal functions), but none of the works we surveyed consider the temporal co-occurrence of behaviours between individuals in the cage as we do. An interesting problem that emerges in biological communities is determining whether there is evidence of different behavioural characteristics among individuals/groups [27, 4, 25] or across experimental conditions [32, 14]. Within the statistics and machine learning communities, this is typically the domain of Anomaly detection for which Chandola et al. [34] provide an exhaustive review. This is at the core of most biological studies and takes the form of hypothesis testing for significance [25]. 
The limiting factor is often the nature of the observations employed, with most studies based on frequency (time spent or counts) of specific behaviours [15, 35]. The analysis in [25] uses a more holistic temporal viewpoint, albeit only on individual mice (our models consider multiple individuals). Wiltschko et al. [9] employ Hidden Markov Models (HMMs) to identify prototypical behaviour (which they compare across environmental and genetic conditions) but only consider pose features -- body shape and velocity -- and do so only for individual mice. To our knowledge, we are the first to use a global temporal model inferred across cages to flag 'abnormalities' in another demographic. ## 3 Methods ### Data Modalities Our data stems from a collaboration with the Mary Lyon Centre at MRC Harwell, Oxfordshire (MLC at MRC Harwell), and consists of continuous three-day video and position recordings (using the HCA system [14]) of group-housed mice of the same sex and strain: we focus on male mice of the C57BL/6NTac strain. The home-cage contains bedding, food and water, and is maintained at a regular 12-hour light/dark cycle (lights on at 07:00 and off at 19:00). The mice are housed in groups of three and recorded through an infra-red side-view camera. With no visual markings, the mice are only identifiable through a Radio-Frequency Identification (RFID) tag embedded in them and picked up by a \(3\times 6\) antenna-array below the cage. We curated the data to form two datasets (at the 3-month and 1-year age groups), described in Sec. 4. ### Classifying Behaviour: the ALM Analysing behaviour dynamics in social settings requires knowledge of the individual behaviour throughout the observation period. Our goal is thus to label the activity of each mouse or flag that it is Not Observable at discrete Behaviour Time Intervals (BITs) -- in our case every second. Given the scale of our data, manual labelling is not feasible: instead, our ALM (Fig. 0(a)), automatically determines whether each mouse is observable in the video, and if so, infers a probability distribution over which behaviour it is exhibiting. Using discrete time-points simplifies the problem by framing it as a purely classification task, and making it easier to model (Sec. 3.3). We explicitly use a hierarchical label space (observability v. behaviour, Fig. 0(a)), since (a) it allows us to break down the problem using an Observability Classifier (OC) followed by a Behaviour Classifier (BC) in cascade, and (b) because we prefer to handle Not Observable explicitly as missing data rather than having the BC infer unreliable classifications which can in turn bias the modelling. Determining Observability.For the OC we use as features: the position of the mouse (RFID), the fraction of frames (within the BTI) in which a Bounding Box (BBox) for the mouse appears, the average area of such BBoxes and finally, the first 30 Principal Component Analysis (PCA) components from the feature-vector obtained by applying the LFB model [37] to the video. These are fed to a logistic-regression classifier trained using the binary cross-entropy loss [38, 206] with \(h_{2}\) regularisation, weighted by inverse class frequency (to address class imbalance). We judiciously choose the operating point (see Sec. 5.1) to balance the errors the system makes. Probability over Behaviours.The BC operates only on samples deemed Observable by the OC, outputting a probability distribution over the seven behaviour labels (Sec. 4.1). 
For each BTI, the centre frame and six others on either side at a stride of eight are combined with the first detection of the mouse in the same period. These are fed to an LFB model [37] (with temperature scaling [39]) finetuned on our data to yield the classification: where there is no detection, a fixed probability distribution is used instead. Figure 1: Model architectures. (a) The ALM for classifying mouse behaviour (vi) from positions (i), BBoxes (ii) and video frames (iii). It is composed of an observability (iv) and behaviour (v) classifier operating in cascade. The Tracking and Identification Module (TIM) is due to [36]. (b) Graphical representation of our GBM. ‘\(\times\)’ refers to standard matrix multiplication. To reduce clutter, the model is not shown unrolled in time. ### Modelling Behaviour Dynamics In modelling behaviour, we seek to: (a) capture the temporal aspect of the individual's behaviour, and (b) model the interaction and interdependence between cage-mates. These goals can be met through fitting a HMM on a per-cage basis, in which the behaviour of each mouse is represented by factorised categorical emissions contingent on a latent'regime' (which couples them together). However, this generates a lot of models, making it hard to analyse and compare observations across cages. To address this, we seek to fit one GBM across cages. The key problem is that the assignment of mouse identities in a cage (denoted as R, G, B) is arbitrary. As an example, if R represents a dominant mouse in one cage, this role may be taken by e.g. mouse G in another cage. Forcing the same emission probabilities across mice avoids this problem, but is too restrictive of the dynamics that can be modelled. Instead, we introduce a permutation matrix to match the mice in any given cage to the GBM as shown in Fig. 0(b). As in a HMM, there is a latent state \(Z\) indexed by cage \(m\), recording-run \(n\) and time \(t\), forming a Markov chain (over \(t\)), which represents the state of the cage as a whole. This'regime', is parametrised by \(\pi\) in the first time-point (initial probability) as well as \(\Omega\) (transition probabilities), and models dependence both in time as well as between individuals. Conditioned on \(Z\), \(\widetilde{X}\) captures the behaviour of each mouse, through the emission probabilities \(\Psi\). Note that \(\widetilde{X}\) is indexed over the \(K\) mice by \(\tilde{k}\) which is a 'canonical' assignment. For each cage \(m\), the random variable \(Q^{[m]}\) governs which mouse, \(k\) (R/G/B) is assigned to which index, \(\tilde{k}\), in the canonical representation \(\widetilde{X}\), and is fixed for all samples \(n,t\) and behaviours \(x\). The sample space of \(Q\) consists of all possible permutation matrices of size \(K\times K\) i.e. matrices whose entries are 0/1 such that there is only one 'on' cell per row/column. \(Q\) can therefore take on one of \(K\)! distinct values (permutations). This permutation matrix setup has been used previously e.g. in the context of learning inverse graphics representations [40], however here, we are able to use exact inference due to the low dimensionality (in our case \(|Q|=3!=6\)). Note that fixing \(Q\) and \(X\) determines \(\widetilde{X}\) completely by simple linear algebra. 
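To make this reindexing concrete, here is a minimal numpy sketch (the array shapes, the particular permutation applied and the row/column convention for \(Q\) are illustrative only, not taken from the released code) of how a fixed permutation matrix maps the per-cage identities onto the canonical ordering via \(\widetilde{X}=Q^{\top}X\).

```python
import numpy as np
from itertools import permutations

K = 3                                           # mice per cage, so K! = 6 permutations
perm_matrices = [np.eye(K)[list(p)] for p in permutations(range(K))]

# one-hot behaviour observations X[k, x] for the K mice at a single (m, n, t);
# seven behaviour labels, so X is K x 7 (the values here are illustrative)
X = np.zeros((K, 7))
X[0, 1] = X[1, 4] = X[2, 1] = 1.0

Q = perm_matrices[3]                            # one fixed per-cage assignment
X_tilde = Q.T @ X                               # canonical ordering of the rows

# Q only decides which mouse fills which canonical slot; the rows themselves are unchanged
assert sorted(map(tuple, X_tilde)) == sorted(map(tuple, X))
```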
This allows us to write out the complete data likelihood as: \[P_{\Theta}\left(\mathcal{D}\right)=\prod_{m,n}P_{\xi}\left(Q^{[m]}\right)\left\{P_{\pi}\left(Z^{[m,n,1]}\right)\prod_{t=2}^{T^{n}}P_{\Omega}\left(Z^{[m,n,t]}|Z^{[m,n,t-1]}\right)\prod_{t=1}^{T^{n}}P_{\Psi}\left(X^{[m,n,t]}|Z^{[m,n,t]},Q^{[m]}\right)\right\}. \tag{1}\] The parameters \(\Theta=\{\pi,\Omega,\xi,\Psi\}\) of the model are inferred through the Expectation Maximisation (EM) algorithm [41] as shown in Algorithm 1 and detailed in the appendix. Specifically, we use the fact that the posterior over \(Q\) is highly peaked to replace the expectation over \(Q\) by its maximum, and iterate between fixing \(Q\) (per-cage) and optimising the remaining parameters using standard EM.

```
0: \(X\) \(\triangleright\) Observations for all cages
0: \(\hat{\xi},\hat{\Psi},\hat{\pi},\hat{\Omega}\) \(\triangleright\) Initial Parameter Estimates
1: repeat
2:   for all cages \(m\in M\) do
3:     \(\hat{q}^{[m]}\leftarrow\arg\max_{q^{\prime}\in Q^{[m]}}P\left(q^{\prime}|X\right)\) \(\triangleright\) Eq. (A.5)
4:     Compute \(\widetilde{X}^{[m]}\) given \(\hat{q}^{[m]}\) \(\triangleright\) Eq. (A.1)
5:   endfor
6:   E-Step: Compute Posterior Statistics for \(Z\) (\(\gamma\), \(\eta\)) \(\triangleright\) Eqs. (A.32 - A.37)
7:   M-Step: Update parameters \(\hat{\Psi},\hat{\pi}\) and \(\hat{\Omega}\) \(\triangleright\) Eqs. (A.44, A.49, A.54)
8:   Compute Log-Likelihood using new \(\hat{\xi},\hat{\Psi},\hat{\pi},\hat{\Omega}\) \(\triangleright\) Eq. (A.38)
9: until Change in Log-Likelihood \(<\) Tolerance
10: Re-Optimise Permutation \(\triangleright\) (2: to 5:)
```

**Algorithm 1** Modified EM for GBM. Equations are in the appendix.

## 4 Datasets

### 4.1 ABODe: A dataset for Behaviour Classification

We curated the Annotated Behaviour and Observability Dataset (ABODe) to train and evaluate behaviour classifiers. The dataset, available at [https://github.com/michael-camilleri/ABODE](https://github.com/michael-camilleri/ABODE), consists of 200 two-minute snippets, with 100 for Training, 40 for Validation and 60 for Testing. Each snippet consists of the video, per-mouse locations in the frame and per-second behaviour labels for each of the mice. The per-frame BBox for each mouse is obtained using a TIM [36], which persistently tracks their identity (encoded as R/G/B). The behaviour of each mouse is annotated by a trained phenotyper, and is either Not Observable or one of seven mutually exclusive labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion or Other. The labelling schema and annotation process are elaborated upon in our Appendix B.3.

### 4.2 IMADGE: A dataset for Behaviour Analysis

In support of the behaviour analysis of group-housed mice, we curated the Individual Mouse Activity Dataset for Group Environments (IMADGE), available at [https://github.com/michael-camilleri/IMADGE](https://github.com/michael-camilleri/IMADGE). We selected male mice from the C57BL/6NTac strain at the three month (_young_) and one year (_adult_) age groups to provide data from two demographics. Since the mice are most active at dawn/dusk, we only use recordings of the 2\(\mathit{lb}\)-hour period around lights-on and lights-off. The clean dataset contains 15 cages (90 recordings) from the adult and ten cages (61 recordings) from the young subset.
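As a complement to Algorithm 1 above (Sec. 3.3), the following is a minimal, self-contained sketch of the alternating optimisation: a hard per-cage choice of permutation, followed by one standard EM step on the canonically-ordered data. It is a simplification rather than the released code: it assumes a single recording run per cage, a uniform prior \(\xi\) over permutations, and omits the Dirichlet priors of the MAP updates as well as the final permutation re-optimisation.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
S, K, L = 4, 3, 7                       # latent regimes, mice per cage, behaviour labels
PERMS = [np.array(p) for p in permutations(range(K))]   # the K! = 6 canonical orderings

def emission_lik(obs, Psi):
    """obs: (T, K) integer labels; returns B[t, z] = prod_k Psi[k, z, obs[t, k]]."""
    B = np.ones((obs.shape[0], S))
    for k in range(K):
        B *= Psi[k][:, obs[:, k]].T
    return B

def forward_backward(obs, pi, Om, Psi):
    """Scaled forward-backward; returns log-likelihood, gamma (T, S) and eta (T-1, S, S)."""
    B, T = emission_lik(obs, Psi), obs.shape[0]
    alpha, beta, c = np.zeros((T, S)), np.ones((T, S)), np.zeros(T)
    alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ Om) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    for t in range(T-2, -1, -1):
        beta[t] = (Om @ (B[t+1] * beta[t+1])) / c[t+1]
    gamma = alpha * beta
    eta = alpha[:-1, :, None] * Om[None] * (B[1:] * beta[1:])[:, None, :] / c[1:, None, None]
    return np.log(c).sum(), gamma, eta

def fit_gbm(data, n_iter=50):
    """data: one (T_m, K) integer behaviour array per cage (a single run each, for brevity)."""
    pi, Om = np.full(S, 1/S), rng.dirichlet(np.ones(S), S)
    Psi, perm = rng.dirichlet(np.ones(L), (K, S)), [PERMS[0]] * len(data)
    for _ in range(n_iter):
        for m, obs in enumerate(data):  # hard per-cage permutation (uniform prior over Q)
            lls = [forward_backward(obs[:, p], pi, Om, Psi)[0] for p in PERMS]
            perm[m] = PERMS[int(np.argmax(lls))]
        # one standard EM step on the canonically-ordered data
        stats = [forward_backward(obs[:, perm[m]], pi, Om, Psi)[1:] for m, obs in enumerate(data)]
        pi = sum(g[0] for g, _ in stats); pi /= pi.sum()
        Om = sum(e.sum(0) for _, e in stats); Om /= Om.sum(1, keepdims=True)
        Psi = np.zeros((K, S, L))
        for (g, _), obs, p in zip(stats, data, perm):
            for k in range(K):
                Psi[k] += g.T @ np.eye(L)[obs[:, p[k]]]   # accumulate gamma per observed label
        Psi /= Psi.sum(-1, keepdims=True)
    return pi, Om, Psi, perm

# toy usage: three cages of random labels, 500 one-second intervals each
cages = [rng.integers(0, L, size=(500, K)) for _ in range(3)]
pi, Om, Psi, perm = fit_gbm(cages, n_iter=10)
```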
For each mouse, we provide the mouse position (as picked up by the RFID tag), the average BBox (from TIM [36]), a flag for observability and the probability scores over the behaviour labels (through the ALM, discussed below): all are at a granularity of one second.

## 5 Experiments

### 5.1 Fine-tuning the ALM

The ALM was fit and evaluated on the ABODe dataset.

Metrics. For both the observability and behaviour components of the ALM we report accuracy and F\({}_{1}\) score [42]. We use the macro-averaged F\({}_{1}\) to better account for the class imbalance. This is particularly severe for the observability classification, in which only about 7% of samples are Not Observable, but it is paramount to flag these correctly. This is because, given that the Observable samples will be used to infer behaviour (Sec. 3.2), which is in turn used to characterise the dynamics of the mice (Sec. 3.3), it is arguably more detrimental to give _Unreliable_ behaviour classifications (i.e. when the sample is Not Observable but the OC deems it to be Observable, which can throw the statistics awry) than to miss some Observable periods (which, though _Wasteful_ of data, can generally be smoothed out by the temporal model). This construct is formalised as: \begin{tabular}{c c|c c} & & \multicolumn{2}{c}{Predicted} \\ & & **Obs.** & **N/Obs.** \\ \cline{3-4} \multirow{2}{*}{GT} & **Obs.** & True Observable [TP] & _Wasteful_ [FN] \\ & **N/Obs.** & _Unreliable_ [FP] & True Not Observable [TN] \\ \end{tabular} where GT refers to the ground-truth (annotated) labels and the standard machine learning terms -- True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN) -- are in square brackets. In our evaluation, we report the number of _Unreliable_ and _Wasteful_ samples to take this imbalance into account. For the BC, we also report the normalised (per-sample) log-likelihood score, \(\widehat{\mathcal{L}}\), given that we use it as a probabilistic classifier.

Observability. The challenge in classifying observability was to handle the severe class imbalance, which implied judicious feature selection and classifier tuning. Although the observability sample count is high within ABODe, the skewed nature of the labels (with only 7% Not Observable) makes it prone to overfitting. Features were selected based on their correlation with the observability flag, and narrowed down to the subset already listed (Sec. 3.2). As for classifiers, we explored Logistic Regression (LgR), Naive Bayes (NB), Random Forests, Support-Vector Machines and feed-forward Neural Networks. Of these, the LgR and NB enveloped all others in the (validation set) ROC curve [42], and were taken forward as candidate methods. These were compared in terms of the number of _Unreliable_ and _Wasteful_ samples at two thresholds: one is at the point at which the number of _Wasteful_ samples is on par with the true number of Not Observable in the data (i.e. 8%), and the other at which the number of predicted Not Observable equals the statistic in the ground-truth data. These appear in Tab. 1: the LgR outperforms the NB in almost all cases, and hence we chose the LgR classifier operating at the _Wasteful_ = 8% point.

Behaviour. We explored two models, the STLT [43] and LFB [37], on the basis that they are the most applicable to the spatio-temporal action-localisation problem [44]. In the former case we adapted the architecture to extend the temporal reach to outwith the BTI, drawing on temporal context from surrounding video frames. 
We also explored adding in the detections for the cage-mates (with index switching augmentations to encode cage-mate identity symmetry) and the hopper, and feeding in sub-portions of the image relevant to the identified mouse. For the LFB we explored various image augmentation procedures, but not left-right flipping (since we lack symmetry in our fixed setup). For both models, we also investigated lighting enhancement techniques [45], and optimised over batch sizes, learning rates/schedules and frame reach/stride. Given the results on the validation set in Tab. 0(b), with an F\({}_{1}\) of 0.61 (compared to 0.36 for the STLT), the LFB model was chosen as the BC. In samples for which the mouse is not identified by the TIM, but the OC reports that it should be Observable, a categorical distribution was fit instead. End to End performance.In Table 2 we show the performance of the ALM on the held-out test-set: in both cases, we compare against the prior classifier. In terms of observability, the ALM achieves slightly less accuracy but a much higher F\({}_{1}\) score, as it seeks to balance the types of errors (cutting the _Unreliable_ by 34%). In terms of behaviour, when considering only Observable classifications, the system achieves 68% accuracy and 0.54 F\({}_{1}\) despite the high class imbalance. The main culprits for the low score are the grooming behaviours, which as shown in Fig. 2, are often confused for Immobile. ### Group Behaviour Analysis The IMADGE dataset is used for our behaviour analysis, focusing on the adult demographic and comparing with the young one later. Metrics.We compare models using the normalised log-likelihood \(\widehat{\mathcal{L}}\). When reporting relative changes in \(\widehat{\mathcal{L}}\), we use a baseline model to set an artificial 0 (otherwise the log-likelihood is not bounded from below). Let \(\mathcal{L}_{BL}\) represent the normalised log-likelihood of a baseline model -- the independent distribution per mouse -- and \(\widehat{\mathcal{L}}_{\Theta}\) respectively for the model of interest (parameterised by \(\Theta\)). We can then define the Relative Difference in Log-Likelihood (RDL) between two models parameterised by \(\Theta\) and \(\Theta^{*}\) as: \[\text{RDL}\left(\Theta;\Theta^{*}\right)=\left|\frac{\widehat{\mathcal{L}}_{ \Theta}-\widehat{\mathcal{L}}_{\Theta^{*}}}{\widehat{\mathcal{L}}_{\Theta}- \widehat{\mathcal{L}}_{BL}}\right|\times 100\%\quad. \tag{2}\] \begin{table} \end{table} Table 1: Model Fitting for the ALM. (a) Comparison of LgR and NB (on the validation set) at different operating points. Note that for context, there are 10,124 samples, of which 750 are Not Observable. (b) Evaluation of the Prior (baseline), STLT and LFB models on the Training and Validation sets in terms of Accuracy (\(A_{BC}\)), macro-F\({}_{1}\) and normalised log-likelihood (\(\widehat{\mathcal{L}}\)). \begin{table} \end{table} Table 2: Test performance of the ALM and prior model, in terms of observability and behaviour. Within the former, U and W refer to the counts of _Unreliable_ and _Wasteful_ respectively: for context, there are 20,581 samples. Figure 2: Behaviour confusion matrix. Size of \(Z\).The number of latent states \(|Z|\) in the GBM governs the expressivity of the model: too small and it is unable to capture all the dynamics, but too large and it becomes harder to interpret. To this end, we fit a per-cage model (i.e. 
without the \(Q\) construct) to the adult mice data for varying \(|Z|\in\{2,\dots,13\}\), and computed \(\widehat{\mathcal{L}}\) on held out data (we used six-fold cross validation). As shown in Fig. 3(a), the likelihood increased gradually, but slowed down beyond \(|Z|=7\): we thus use \(|Z|=7\) in our analysis.

Peaked Posterior over \(Q\). Our Algorithm 1 assumes that the posterior over \(Q\) is sufficiently peaked. To verify this, we computed the posterior for all permutations over all cages given each per-cage model. To two decimal places, the posterior is deterministic, as shown for model L in Fig. 3(b).

Figure 3: Tuning of the GBM. (a) Normalised log-likelihood (\(\widehat{\mathcal{L}}\)) for various dimensionalities of the latent state over all cages. (b) Posterior over \(Q\) for all cages (\(|Z|=7\), model trained on cage L).

Quality of Fit. We wished to investigate the penalty paid by using a global rather than per-cage model. To this end, we show in Tab. 3, together with the \(\widehat{\mathcal{L}}\) for the data from each cage, the RDL of the GBM compared with that of the per-cage model. The average RDL is 4.8%, which is a reasonable penalty to pay in exchange for a global model. The RDL is less than 5% in all but three cages, A, D and F: cage D in particular exhibited a tendency towards a 6-state regime (data not shown).

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & A & B & C & D & E & F & G & H & K & L & M & N \\ \hline \(\widehat{\mathcal{L}}_{\text{GBM}}\) & -1.10 & -1.17 & -1.16 & -1.43 & -1.25 & -1.36 & -1.36 & -1.20 & -1.13 & -1.22 & -1.13 & -1.29 \\ RDL & 5.32 & 4.56 & 2.29 & 11.14 & 4.94 & 7.00 & 4.63 & 3.17 & 2.81 & 2.13 & 4.61 & 4.75 \\ \hline \hline \end{tabular} \end{table} Table 3: Normalised log-likelihood (\(\widehat{\mathcal{L}}\)) of the GBM and RDL on each cage for \(|Z|=7\).

Latent Space Analysis. Figure 4 shows the parameters of the trained GBM. Most regimes have long dwell times. We note that regime F captures the Immobile behaviour for all mice, and is the most prevalent (0.26 steady state probability). The purity of this regime indicates that the mice are often Immobile at the same time, reinforcing the biological knowledge that they tend to huddle together for sleeping; it is interesting that this was picked up by the model without any a priori bias. Conversely, regime A is most closely associated with the Other label, although it is less pure. A point of interest is the set of regimes associated with the Feeding behaviour, which differ across mice -- B, E and G for mice 1, 2 and 3 respectively. This is surprising given that more than one mouse can feed at a time (the mouse behaviour researchers at MLC at MRC Harwell indicated that there is no need for competition for feeding resources). This is significant, given that it is a global phenomenon, as it could be indicative of a pecking order in the cage. Another aspect that emerges is the co-occurrence of Self-Grooming with Immobile or Other behaviours: note how in regime D (which has the highest probability of Self-Grooming) these are the most prevalent.

Abnormality Detection. We used the model trained on our 'normal' demographic to analyse data from 'other' cages: i.e. abnormality detection. This is useful e.g. to identify unhealthy mice, strain-related differences, or, as in our proof of concept, the evolution of behaviour through age. In Fig. 5 we show the trained GBM evaluated on data from both the adult (blue) and young (orange) demographics in IMADGE. Apart from two instances, the \(\widehat{\mathcal{L}}\) is consistently lower in the younger group compared to the adult demographic: moreover, for cages where we have data in both age groups, \(\widehat{\mathcal{L}}\) is always lower for the young mice. Indeed, a binary threshold achieves 90% accuracy when optimised, and a T-test on the two subsets indicates significant differences (\(p\)-value \(=1.1\times 10^{-4}\)).

Analysis of Young mice. Training the model from scratch on the young demographic brings up interesting differences in the patterns. Firstly, the \(|Z|=6\) model emerged as a clear peak this time, as shown in Fig. 6. Figure 7 shows the parameters for the GBM with \(|Z|=6\) after optimisation on the young subset. It is noteworthy that the Immobile state is less pronounced (in regime D), which is consistent with the younger mice being more active. Interestingly, while there is a regime associated with Feeding, it is the same for all mice and also much less pronounced: recall that for the adults, the probability of feeding was 0.7 in each of the Feeding regimes. This could indicate that the pecking order, at least at the level of feeding, develops with age.

## 6 Conclusion

In this paper, we have provided a set of tools for biologists to analyse the individual behaviours of group-housed mice over extended periods of time. Our main contribution was the novel GBM -- a HMM equipped with a permutation matrix for identity matching -- to analyse the joint behaviour dynamics across different cages. This evidenced interesting dominance relationships, and also flagged significant deviations in an alternative young age group. In support of the above, we released two datasets, ABODe for training behaviour classifiers and IMADGE for modelling group dynamics (upon which our modelling is based). ABODe was used to develop and evaluate our proposed ALM that automatically classifies seven behaviours despite clutter and occlusion.

Figure 4: GBM parameters on the Adult mice data for \(|Z|=7\). For \(\Omega\) (leftmost) we show the transition probabilities: underneath the \(Z^{[t+1]}\) labels, we also report the steady-state probabilities (first row) and the expected dwell times (in BTIs, second row). The other three panels show the emission probabilities \(\Psi_{k}\) for each mouse as Hinton plots. We omit zeros before the decimal point and suppress values close to 0 (at the chosen precision).

Figure 5: \(\widehat{\mathcal{L}}\) scores (\(x\)-axis) of the GBM on each cage (\(y\)-axis, left) in the adult/young age groups, together with the accuracy of a binary threshold on the \(\widehat{\mathcal{L}}\) (scale on the right).

Limitations and Future Work: Since our end-goal was to get a working pipeline to allow us to model the mouse behaviour, the tuning of the ALM leaves room for further exploration, especially as regards architectures for the BC. In future work we would like to analyse other mouse demographics. Much of the pipeline should work "out of the box", but to handle mice of different colours to those in the current dataset it may be necessary to annotate more data for the ALM (and possibly also for the mouse detector phase of the TIM as provided by Camilleri et al. [36]). 
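To make the \(\widehat{\mathcal{L}}\)-based comparison of Sec. 5.2 concrete, the following is a small sketch of the RDL metric of Eq. (2) and of the threshold/t-test analysis summarised in Fig. 5. The arrays hold illustrative numbers rather than the reported per-cage scores, and the use of Welch's t-test here is a choice made for the sketch.

```python
import numpy as np
from scipy import stats

def rdl(ll_model, ll_alt, ll_baseline):
    """Relative Difference in Log-Likelihood, Eq. (2), in percent."""
    return abs((ll_model - ll_alt) / (ll_model - ll_baseline)) * 100.0

def best_threshold_accuracy(ll_adult, ll_young):
    """Accuracy of the best single threshold separating adult from young cages on L-hat."""
    ll = np.concatenate([ll_adult, ll_young])
    labels = np.concatenate([np.ones(len(ll_adult)), np.zeros(len(ll_young))])
    return max(np.mean((ll >= th) == labels) for th in ll)

# illustrative per-cage normalised log-likelihoods (not the values reported above)
ll_adult = np.array([-1.10, -1.16, -1.22, -1.13, -1.25, -1.20])
ll_young = np.array([-1.52, -1.47, -1.61, -1.18, -1.56])

print(best_threshold_accuracy(ll_adult, ll_young))            # threshold accuracy
print(stats.ttest_ind(ll_adult, ll_young, equal_var=False))   # Welch's t-test
print(rdl(ll_model=-1.22, ll_alt=-1.19, ll_baseline=-1.50))   # example RDL, in %
```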
Ethical Approval:We emphasize that no new data were collected for this study, in line with the Reduction strategy of the 3Rs [46]. The original observations were carried out at MLC at MRC Harwell in accordance with the Animals (Scientific Procedures) Act 1986, UK, Amendment Regulations 2012 (SI 4 2012/3039). Acknowledgements:We thank the staff at the MLC at MRC Harwell for providing the raw video and position data, and for their help in interpreting it. We are also grateful to Andrew Zisserman, for his age advice on architectures for the behaviour classifier of the ALM. MPJC was supported by the EPSRC CDT in Data Science (EP/L016427/1). RSB was supported by funding to MLC from the MRC UK (grant A410). Figure 6: \(\widehat{\mathcal{L}}\) as a function of \(|Z|\in\{2,3,...,7\}\), with each cage as initialiser. The average (per \(|Z|\)) is shown as a blue cross. Figure 7: GBM parameters on the Young mice data for \(|Z|=6\). Arrangement is as in Fig. 4. ## Appendix A Derivations for the Global Behaviour Model We defined our GBM graphically in Fig. 2 and through Eq. (1) in the main text. Herein we derive the update equations for our modified EM scheme in Algorithm 1. ### Notation We already defined our key variables in Sect. 3.3 in the main text. However, in order to facilitate our discussion, we make use of the following additional symbols. Firstly, let \(\mathbf{Q}^{[m]}\) represent the matrix manifestation (outcome in the sample-space) of the random variable \(Q^{[m]}\). Secondly, we use \(\mathbb{I}_{k}^{[m,n,t]}\) to signify that the observation for mouse \(k\) from cage \(m\) in sample \(t\) of run \(n\) is not available: i.e. it is missing data. We assume that this follows a Missing-at-Random mechanism [47] which allows us to simply ignore such dimensions: i.e. \(\mathbb{I}\) acts as a multiplier such that it zeros out all entries corresponding to missing observations. ### Posterior over \(Q\) Due to the deterministic multiplication, selecting a particular \(\mathbf{Q}\), and fixing \(X\) (because it is observed), completely determines \(\widetilde{X}\). Formally: \[\widetilde{X}^{[m,n,t]}=\left.\mathbf{Q}^{[m]}\right.^{\top}\left(X\mathbb{I} \right)^{[m,n,t]},\] (A.1) where we have made use of the fact that for a permutation matrix, the inverse is simply the transpose. It follows that: \[P\left(Q^{[m]}=q|X\right) \propto\sum_{z^{\prime},\tilde{x}^{\prime}}P\left(q,X,z^{\prime},\tilde{x}^{\prime}\right)\] (A.2) \[\propto\xi_{q}\sum_{z^{\prime}}P\left(z^{\prime}\right)\sum_{ \tilde{x}^{\prime}}P\left(\tilde{x}^{\prime}|z^{\prime}\right)P\left(X|\tilde {x}^{\prime},q\right)\] (A.3) \[\propto\xi_{q}\sum_{z^{\prime}}P\left(z^{\prime}\right)P\left( \widetilde{X}_{q}|z^{\prime}\right)\] (A.4) \[=\frac{\xi_{q}P\left(\mathbf{Q}^{\top}\left(X\mathbb{I}\right) \right)}{\sum_{q^{\prime}}\xi_{q}^{\prime}P\left(\mathbf{Q}^{\top}\left(X \mathbb{I}\right)\right)},\] (A.5) where in going from Eq. (A.3) to Eq. (A.4) we made use of the deterministic relationship so that all probabilities over \(\tilde{x}\) collapse to 0 if not following the permutation inferred by \(q\). In turn, \(P\left(\mathbf{Q}^{\top}\left(X\mathbb{I}\right)\right)\) is simply the observed data likelihood of \(\widetilde{X}\). ### Complete Likelihood and Auxiliary Function Due to Eq. (A.1), we can collapse \(X\) and \(Q\) into \(\widetilde{X}\). 
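Since each cage houses only \(K=3\) mice, the posterior of Eq. (A.5) can be evaluated by enumerating all \(3!=6\) candidate permutations; the MAP permutation is then used to relabel \(X\) into \(\widetilde{X}\) via Eq. (A.1). A minimal sketch follows, in which the `loglik` argument is an assumed helper returning the observed-data log-likelihood of a relabelled sequence (e.g. the forward pass of Appendix A.4).

```python
import numpy as np
from itertools import permutations

def posterior_over_Q(X, loglik, xi=None):
    """Posterior over the per-cage identity permutation Q (Eq. A.5).

    X      : (T, K, V) array of one-of-V observations for the K mice
             (rows for missing BTIs can simply be all zero).
    loglik : callable giving the observed-data log-likelihood of a
             relabelled sequence X-tilde under the trained model  [assumed helper].
    xi     : optional prior {permutation: xi_q}; uniform if omitted.
    """
    K = X.shape[1]
    perms = list(permutations(range(K)))
    if xi is None:
        xi = {q: 1.0 / len(perms) for q in perms}
    # log xi_q + log P(Q^T (X I)) for every candidate permutation
    scores = np.array([np.log(xi[q]) + loglik(X[:, list(q), :]) for q in perms])
    scores -= scores.max()                      # stabilise before exponentiating
    post = np.exp(scores) / np.exp(scores).sum()
    q_map = perms[int(np.argmax(post))]         # the single configuration used downstream
    return dict(zip(perms, post)), q_map
```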
Given that we assume the distribution over \(Q\) to be sufficiently peaked so that we can pick a single configuration, we can define the complete log-likelihood solely in terms of \(Z\) and \(\widetilde{X}\), much like an HMM but with conditionally independent categorical emissions. Consequently, taking a Bayesian viewpoint and adding priors on each of the parameters, we define the complete data likelihood as: \[P\left(\mathcal{D},\Theta|Q\right)=\prod_{m=1}^{M}\prod_{n=1}^{N}\left(\prod_{z=1}^{|Z|}\pi_{z}^{Z_{z}^{[m,n,1]}}\prod_{t=2}^{T}\prod_{z^{\prime}=1}^{|Z|}\prod_{z=1}^{|Z|}\Omega_{z^{\prime},z}^{Z_{z^{\prime}}^{[m,n,t-1]}Z_{z}^{[m,n,t]}}\prod_{t=1}^{T}\prod_{z=1}^{|Z|}\prod_{k=1}^{K}\prod_{x=1}^{|\widetilde{X}|}\Psi_{k,z,x}^{\widetilde{X}_{k,x}^{[m,n,t]}}\right)\] \[\quad\times\text{Dir}\left(\pi;\alpha^{\pi}\right)\prod_{z=1}^{|Z|}\text{Dir}\left(\Omega_{z};\alpha_{z}^{\Omega}\right)\prod_{k=1}^{K}\prod_{z=1}^{|Z|}\text{Dir}\left(\Psi_{k,z};\alpha_{k,z}^{\Psi}\right),\] (A.6) where \[\text{Dir}\left(\theta;\alpha\right)=\frac{1}{\beta\left(\alpha\right)}\prod_{i=1}^{|\theta|}\theta_{i}^{\alpha_{i}-1}\] (A.7) is the usual Dirichlet prior with the multivariate \(\beta\) normaliser function for parameter \(\theta\in\{\pi,\Omega,\Psi\}\). Note that to reduce clutter, we index \(\widetilde{X}\) using \(k\) and \(x\) rather than \(\tilde{k}/\tilde{x}\). We seek to maximise the logarithm of the above, but we lack knowledge of the latent regime \(Z\). In its absence, we take the _Expectation_ of the log-likelihood with respect to the latest estimate of the parameters (\(\hat{\Theta}\)) and the observable \(\widetilde{X}\). We define this expectation as the **Auxiliary** function, \(\mathcal{Q}\): \[\mathcal{Q}\left(\Theta,\hat{\Theta}\right)\equiv\mathbb{E}\left\langle\log\left(P\left(\mathcal{D},\Theta|Q\right)\right)|\widetilde{X},\hat{\Theta}\right\rangle\] \[=\quad\sum_{m=1}^{M}\sum_{n=1}^{N}\left(\sum_{z=1}^{|Z|}\mathbb{E}\left\langle Z_{z}^{[m,n,1]}\right\rangle\log\left(\pi_{z}\right)+\sum_{t=2}^{T}\sum_{z^{\prime},z}\mathbb{E}\left\langle Z_{z^{\prime}}^{[m,n,t-1]}Z_{z}^{[m,n,t]}\right\rangle\log\left(\Omega_{z^{\prime},z}\right)\right)\] \[\quad+\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\sum_{z=1}^{|Z|}\mathbb{E}\left\langle Z_{z}^{[m,n,t]}\right\rangle\sum_{k=1}^{K}\sum_{x=1}^{|\widetilde{X}|}\widetilde{X}_{k,x}^{[m,n,t]}\log\left(\Psi_{k,z,x}\right)\] \[\quad+\log\left(P\left(\Theta;\mathcal{A}\right)\right)\] (A.8) Note that the number of runs \(N\) can vary between cages \(m\in M\), and similarly, \(T\) is in general different for each run \(n\): however, we do not explicitly denote this to reduce clutter. ### E-Step In Eq. (A.8) we have two expectations, summarised as: \[\gamma_{z}^{[m,n,t]}=\mathbb{E}\left\langle Z_{z}^{[m,n,t]}\right\rangle=P\left(Z^{[m,n,t]}=z|\widetilde{X}\right)\] (A.9) and \[\eta_{z^{\prime},z}^{[m,n,t]}=\mathbb{E}\left\langle Z_{z^{\prime}}^{[m,n,t-1]}Z_{z}^{[m,n,t]}\right\rangle=P\left(Z^{[m,n,t-1]}=z^{\prime},Z^{[m,n,t]}=z|\widetilde{X}\right).\] (A.10) The challenge in computing these is that it involves summing out all the other \(z^{*}\notin\{z,z^{\prime}\}\). This can be done efficiently using the recursive updates of the Baum-Welch algorithm [48], which is standard for HMMs. #### A.4.1 Recursive Updates We first split the dependence around the point of interest \(t\).
To reduce clutter, we represent indexing over \(m/n\) by '\(\cdot\)' on the right hand side of equations and summarise the emission probabilities as: \[P_{\widetilde{X}}^{[m,n]}\left(t,z\right)\equiv\left(\prod_{k=1}^{K}\prod_{x =1}^{|\widetilde{X}|}\Psi_{k,z,x}^{\widetilde{X}_{k,x}^{[l,t]}}\right)\] (A.11) Starting with \(\gamma\): \[\gamma_{z}^{[m,n,t]} =\frac{P\left(\widetilde{X}|Z_{z}^{[l,t]}\right)P\left(Z_{z}^{[l, t]}\right)}{P\left(\widetilde{X}\right)}\] (A.12) \[=\frac{P\left(\widetilde{X}^{[\cdot,1:t]}|Z_{z}^{[\cdot,t]}\right) P\left(\widetilde{X}^{[\cdot,t+1:T]}|Z_{z}^{[\cdot,t]}\right)P\left(Z_{z}^{[\cdot,t]} \right)}{P\left(\widetilde{X}\right)}\] (A.13) \[=\frac{P\left(\widetilde{X}^{[\cdot,1:t]},Z_{z}^{[\cdot,t]}\right) P\left(\widetilde{X}^{[\cdot,t+1:T]}|Z_{z}^{[\cdot,t]}\right)}{P\left(\widetilde{X} \right)}.\] (A.14) Similarly, for \(\eta\): \[\eta_{z^{\prime},z}^{[m,n,t]} =\frac{P\left(\widetilde{X}|Z_{z^{\prime}}^{[,t-1]},Z_{z}^{[,t]} \right)P\left(Z_{z^{\prime}}^{[,t-1]},Z_{z}^{[,t]}\right)}{P\left(\widetilde{X} \right)}\] (A.15) \[=\frac{P\left(\widetilde{X}^{[,1:t-1]}|Z_{z^{\prime}}^{[,t-1]} \right)P_{\widetilde{X}}^{[,t]}\left(t,z\right)P\left(\widetilde{X}^{[,t+1:T] }|Z_{z}^{[,t]}\right)P\left(Z_{z^{\prime}}^{[,t-1]}\right)\Omega_{z^{\prime},z}}{ P\left(\widetilde{X}\right)}\] (A.16) \[=\frac{P\left(\widetilde{X}^{[,1:t-1]},Z_{z^{\prime}}^{[,t-1]} \right)P_{\widetilde{X}}^{[,t]}\left(t,z\right)P\left(\widetilde{X}^{[,t+1:T] }|Z_{z}^{[,t]}\right)\Omega_{z^{\prime},z}}{P\left(\widetilde{X}\right)}\] (A.17) We see that now we have two'messages' that crucially can be defined recursively. Let the 'forward' pass1 be denoted by \(F\) as: Footnote 1: In some texts these are usually referred to as \(\alpha\) and \(\beta\) but we use \(F/B\) to avoid confusion with the parameters of the Dirichlet priors. \[F_{z}^{[m,n,t]} =P\left(\widetilde{X}^{[,1:t]},Z_{z}^{[,t]}\right)\] (A.18) \[=\sum_{z^{\prime}=1}^{|Z|}P\left(\widetilde{X}^{[,1:t-1]},Z_{z^{ \prime}}^{[,t-1]}\right)P\left(Z_{z}^{[,t]}|Z_{z^{\prime}}^{[,t-1]}\right)P \left(\widetilde{X}^{[,t]}|Z_{z}^{[,t]}\right)\] (A.19) \[=P_{\widetilde{X}}^{[,]}\left(t,z\right)\sum_{z^{\prime}=1}^{|Z|} F_{z^{\prime}}^{[,t-1]}\Omega_{z^{\prime},z}.\] (A.20) For the special case of \(t=1\), we have: \[F_{z}^{[m,n,1]}=\pi_{z}P_{\widetilde{X}}^{[,]}\left(1,z\right).\] (A.21) Similarly, we denote the 'backward' recursion by \(B\): \[B_{z}^{[m,n,t]} =P\left(\widetilde{X}^{[,t+1:T]}|Z_{z}^{[,t]}\right)\] (A.22) \[=\sum_{z^{\prime}=1}^{|Z|}P\left(Z_{z^{\prime}}^{[,t+1]}|Z_{z}^{[,t]}\right)P\left(\widetilde{X}^{[,t+1]}|Z_{z^{\prime}}^{[,t+1]}\right)P\left( \widetilde{X}^{[,t+2:T]}|Z_{z^{\prime}}^{[,t+1]}\right)\] (A.23) \[=\sum_{z^{\prime}=1}^{|Z|}\Omega_{z,z^{\prime}}P_{\widetilde{X}}^{[ ]}\left(t+1,z\right)B_{z^{\prime}}^{[,t+1]}.\] (A.24) Again, we have to consider the special case for \(t=T\): \[B_{z}^{[m,n,T]}=1.\] (A.25) #### Scaling Factors To avoid numerical underflow, we work with normalised distributions. 
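The normalised recursions detailed next (Eqs. A.26-A.38) amount to the standard scaled forward-backward pass. For reference, a compact sketch for a single run is given below, taking the per-step log emission probabilities as input (for the GBM these are sums of per-mouse categorical log-probabilities, with missing mice contributing zero).

```python
import numpy as np

def forward_backward(log_emit, Omega, pi):
    """Scaled forward-backward pass for one run (cf. Eqs. A.26-A.38).

    log_emit : (T, |Z|) per-step log emission probabilities log P_X~(t, z).
    Omega    : (|Z|, |Z|) transition matrix;  pi : (|Z|,) initial distribution.
    Returns gamma (T, |Z|), eta (T-1, |Z|, |Z|) and the observed-data
    log-likelihood of the run.
    """
    T, Z = log_emit.shape
    shift = log_emit.max(axis=1, keepdims=True)      # extra per-step rescaling
    emit = np.exp(log_emit - shift)

    F_hat = np.empty((T, Z))
    C = np.empty(T)                                  # normalisers, C[t] = 1 / S[t]
    a = pi * emit[0]
    C[0] = 1.0 / a.sum()
    F_hat[0] = a * C[0]
    for t in range(1, T):
        a = emit[t] * (F_hat[t - 1] @ Omega)
        C[t] = 1.0 / a.sum()
        F_hat[t] = a * C[t]

    B_hat = np.ones((T, Z))
    for t in range(T - 2, -1, -1):
        B_hat[t] = C[t + 1] * (Omega @ (emit[t + 1] * B_hat[t + 1]))

    gamma = F_hat * B_hat                            # Eq. A.32
    eta = np.empty((T - 1, Z, Z))
    for t in range(1, T):                            # Eq. A.33
        eta[t - 1] = C[t] * np.outer(F_hat[t - 1], emit[t] * B_hat[t]) * Omega
    loglik = -np.log(C).sum() + shift.sum()          # Eq. A.38 (undoing the rescaling)
    return gamma, eta, loglik
```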
Specifically, we define: \[\hat{F}_{z}^{[m,n,t]}=P\left(Z_{z}^{[,t]}|\widetilde{X}^{[,1:t]}\right)=\frac {F_{z}^{[,t]}}{P\left(\widetilde{X}^{[,1:t]}\right)}.\] (A.26) We relate these factors together through: \[S^{[m,n,t]}=P\left(\widetilde{X}^{[,t]}|\widetilde{X}^{[,1:t-1]}\right),\] (A.27) and hence, from the product rule, we also have: \[P\left(\widetilde{X}^{[m,n,1:t]}\right)=\prod_{\tau=1}^{t}S^{[,\tau]}.\] (A.28) Consequently, we can redefine: \[\hat{F}_{z}^{[m,n,t]}=\frac{F_{z}^{[,t]}}{\prod_{\tau=1}^{t}S^{[,\tau]}}\] (A.29) and \[\hat{B}_{z}^{[m,n,t]}=\frac{B_{z}^{[,t]}}{\prod_{\tau=t+1}^{T}S^{[,\tau]}}\] (A.30) We denote for simplicity \[C^{[m,n,t]}=\left(S^{[,t]}\right)^{-1}\] (A.31) as the normaliser for the probability. This allows us to redefine the recursive updates for the responsibilities as follows: \[\gamma_{z}^{[m,n,t]}=\hat{F}_{z}^{[,t]}\hat{B}_{z}^{[,t]},\] (A.32) and \[\eta_{z^{\prime},z}^{[m,n,t]}=C^{[,t]}\hat{F}_{z^{\prime}}^{[,t-1]}\hat{B}_{z }^{[,t]}\Omega_{z^{\prime},z}P_{\widetilde{X}}^{[,]}\left(t,z\right),\] (A.33) where: \[\hat{F}_{z}^{[m,n,t]} =C^{[,t]}\hat{F}_{z}^{[,t]}\] (A.34) \[\hat{F}_{z}^{[m,n,t]} =\begin{cases}P_{\widetilde{X}}^{[,]}\left(1,z\right)\pi_{z}& \text{if}\quad t=1,\\ P_{\widetilde{X}}^{[,t]}\left(t,z\right)\sum_{z^{\prime}=1}^{|Z|}\hat{F}_{z^{ \prime}}^{[,t-1]}\Omega_{z^{\prime},z}&\text{otherwise}\end{cases}\] (A.35) \[C^{[m,n,t]} =\left(\sum_{z^{\prime}=1}^{|Z|}\hat{F}_{z^{\prime}}^{[,t]}\right) ^{-1}\] (A.36) \[\hat{B}_{z}^{[m,n,t]} =\begin{cases}1&\text{if}\quad t=T,\\ C^{[,t+1]}\sum_{z^{\prime}=1}^{|Z|}\Omega_{z,z^{\prime}}P_{\widetilde{X}}^{[, ]}\left(t+1,z\right)\hat{B}_{z^{\prime}}^{[,t+1]}&\text{otherwise}\end{cases}.\] (A.37) Through the normalisers \(C\), we also compute the observed data log-likelihood: \[\log\left(P\left(\widetilde{X};\Theta\right)\right)=-\sum_{m=1}^{M}\sum_{n=1} ^{N}\sum_{t=1}^{T}\log\left[C^{[,t]}\right].\] (A.38) ### M-Step We re-arrange the \(\mathcal{Q}\)-function to expand and split all terms according to the parameter involved (to reduce clutter we collapse the sum over \(M/N\) and ignore constant terms): \[\mathcal{Q}\left(\Theta,\hat{\Theta}\right)= \sum_{m,n}\sum_{z=1}^{|Z|}\gamma_{z}^{[,1]}\log\left(\pi_{z} \right)+\sum_{z=1}^{|Z|}\left(\alpha_{z}^{\pi}-1\right)\log\left(\pi_{z}\right)\] \[+\sum_{m,n}\sum_{t=2}^{T}\sum_{z^{\prime},z}\eta_{z^{\prime},z}^ {[,t]}\log\left(\Omega_{z^{\prime},z}\right)+\sum_{z^{\prime},z}\left(\alpha_ {z^{\prime},z}^{\Omega}-1\right)\log\left(\Omega_{z^{\prime},z}\right)\] \[+\sum_{m,n}\sum_{t=1}^{T}\sum_{z=1}^{|Z|}\gamma_{z}^{[,t]}\sum_{k =1}^{K}\sum_{x=1}^{|\widetilde{X}|}\widetilde{X}_{k,x}^{[,t]}\log\left(\Psi_{ k,z,x}\right)+\sum_{z=1}^{|Z|}\sum_{k=1}^{K}\sum_{x=1}^{|\widetilde{X}|} \left(\alpha_{k,z,x}^{\Psi}-1\right)\log\left(\Psi_{k,z,x}\right)\] \[+Const\] (A.39) #### a.5.1 Maximising for \(\pi\) Since we have a constraint (\(\pi\) must be a valid probability that sums to 1) we maximise the constrained Lagrangian: \[\Lambda=\mathcal{Q}+\lambda\left(\sum_{z^{\prime}=1}^{|Z|}\pi_{z^{\prime}}-1 \right).\] (A.40) We maximise this by taking the derivative with respect to \(\pi_{z}\) and setting it to 0 (note that we can zero-out all terms involving \(z^{\prime}\neq z\) which are constant with respect to \(\pi_{z}\)): \[\frac{\partial\Lambda}{\partial\pi_{z}} =\frac{1}{\pi_{z}}\left(\sum_{m=1}^{M}\sum_{n=1}^{N}\gamma_{z}^{[ m,n,1]}+\alpha_{z}^{\pi}-1\right)+\lambda=0\] (A.41) \[\lambda\pi_{z} =-\sum_{m=1}^{M}\sum_{n=1}^{N}\gamma_{z}^{[m,n,1]}-\alpha_{z}^{\pi }+1\] 
(A.42) Summing the above over \(z\): \[\lambda=-\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{z^{\prime}=1}^{|Z|}\gamma_{z^{\prime}}^{[m,n,1]}-\sum_{z^{\prime}=1}^{|Z|}\alpha_{z^{\prime}}^{\pi}+|Z|\] \[=-\sum_{m=1}^{M}N^{m}-\sum_{z^{\prime}=1}^{|Z|}\alpha_{z^{\prime}}^{\pi}+|Z|\] (A.43) In the above we have made use of the fact that both \(\pi_{z^{\prime}}\) and \(\gamma_{z^{\prime}}^{[m,n,1]}\) sum to 1 over \(z^{\prime}\). Substituting Eq. (A.43) for \(\lambda\) in Eq. (A.42) we get the maximum-a-posteriori estimate for \(\hat{\pi}_{z}\): \[\hat{\pi}_{z}=\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\gamma_{z}^{[m,n,1]}+\alpha_{z}^{\pi}-1}{\sum_{m=1}^{M}N^{m}+\sum_{z^{\prime}=1}^{|Z|}\alpha_{z^{\prime}}^{\pi}-|Z|}.\] (A.44) #### A.5.2 Maximising for \(\Psi\) We follow a similar constrained optimisation procedure for \(\Psi\), with the Lagrangian: \[\Lambda=\mathcal{Q}+\sum_{k^{\prime},z^{\prime}}\lambda_{k^{\prime},z^{\prime}}\left(\sum_{x^{\prime}}\Psi_{k^{\prime},z^{\prime},x^{\prime}}-1\right)\] (A.45) Taking the derivative of Eq. (A.45) with respect to \(\Psi_{k,z,x}\) and setting it to 0 (ignoring constant terms): \[\frac{\partial\Lambda}{\partial\Psi_{k,z,x}}=\frac{1}{\Psi_{k,z,x}}\left(\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\gamma_{z}^{[m,n,t]}\widetilde{X}_{k,x}^{[m,n,t]}+\alpha_{k,z,x}^{\Psi}-1\right)+\lambda_{k,z}=0\] (A.46) \[\lambda_{k,z}\Psi_{k,z,x}=-\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\gamma_{z}^{[m,n,t]}\widetilde{X}_{k,x}^{[m,n,t]}-\alpha_{k,z,x}^{\Psi}+1\] (A.47) Again, summing this over \(x^{\prime}\) yields: \[\lambda_{k,z}=-\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\gamma_{z}^{[m,n,t]}\sum_{x^{\prime}=1}^{|\widetilde{X}|}\widetilde{X}_{k,x^{\prime}}^{[m,n,t]}-\sum_{x^{\prime}=1}^{|\widetilde{X}|}\alpha_{k,z,x^{\prime}}^{\Psi}+|\widetilde{X}|\] (A.48) Substituting Eq. (A.48) back into Eq. (A.47) gives us: \[\hat{\Psi}_{k,z,x}=\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\gamma_{z}^{[m,n,t]}\widetilde{X}_{k,x}^{[m,n,t]}+\alpha_{k,z,x}^{\Psi}-1}{\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=1}^{T}\gamma_{z}^{[m,n,t]}\sum_{x^{\prime}=1}^{|\widetilde{X}|}\widetilde{X}_{k,x^{\prime}}^{[m,n,t]}+\sum_{x^{\prime}=1}^{|\widetilde{X}|}\alpha_{k,z,x^{\prime}}^{\Psi}-|\widetilde{X}|}.\] (A.49) #### A.5.3 Maximising for \(\Omega\) As always, this is a constrained optimisation by virtue of the need for valid probabilities.
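In code, all three MAP updates reduce to normalising expected counts plus Dirichlet pseudo-counts (assuming all \(\alpha\geq 1\)). A minimal sketch is given below, taking the E-step responsibilities as input, with runs stacked into a single leading axis for brevity and the \(\Omega\) update taken from Eq. (A.54) derived next.

```python
import numpy as np

def m_step(gamma, eta, X_tilde, alpha_pi, alpha_Omega, alpha_Psi):
    """MAP updates for pi (Eq. A.44), Omega (Eq. A.54) and Psi (Eq. A.49).

    gamma   : (R, T, Z)       regime responsibilities for R stacked runs
    eta     : (R, T-1, Z, Z)  pairwise (transition) responsibilities
    X_tilde : (R, T, K, V)    identity-matched one-hot observations (zero if missing)
    alpha_* : Dirichlet hyper-parameters (assumed >= 1) of matching shape.
    """
    # pi: responsibilities of the first step plus pseudo-counts, normalised.
    num = gamma[:, 0, :].sum(axis=0) + alpha_pi - 1.0
    pi = num / num.sum()

    # Omega: expected transition counts plus pseudo-counts, normalised per row.
    num = eta.sum(axis=(0, 1)) + alpha_Omega - 1.0
    Omega = num / num.sum(axis=1, keepdims=True)

    # Psi: expected per-(mouse, regime) emission counts, normalised over labels.
    counts = np.einsum('rtz,rtkv->kzv', gamma, X_tilde)
    num = counts + alpha_Psi - 1.0
    Psi = num / num.sum(axis=2, keepdims=True)
    return pi, Omega, Psi
```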
We start from the Lagrangian: \[\Lambda =\mathcal{Q}+\sum_{z^{\prime}=1}^{|Z|}\lambda_{z^{\prime}}\left( \sum_{z^{\prime}=1}^{|Z|}\Omega_{z^{\dagger},z^{\ast}}-1\right)\] (A.50) \[\frac{\partial\Lambda}{\partial\Omega_{z^{\prime},z}} =\frac{1}{\Omega_{z^{\prime},z}}\left(\sum_{m=1}^{M}\sum_{n=1}^{ N}\sum_{t=2}^{T}\eta_{z^{\prime},z^{\prime}}^{[,t]}+\left(\alpha_{z^{\prime},z}^{ \Omega}-1\right)\right)+\lambda_{z^{\prime}}\] (A.51) \[\lambda_{z^{\prime}}\Omega_{z^{\prime},z} =-\left(\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=2}^{T}\eta_{z^{ \prime},z}^{[,t]}+\left(\alpha_{z^{\prime},z}^{\Omega}-1\right)\right)\] (A.52) \[\lambda_{z^{\prime}} =-\left(\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=2}^{T}\sum_{z^{\ast} =1}^{|Z|}\eta_{z^{\prime},z^{\ast}}^{[,t]}+\sum_{z^{\ast}=1}^{|Z|}\alpha_{z^{ \prime},z^{\ast}}^{\Omega}-|Z|\right)\] (A.53) which after incorporating into the previous equation gives the maximum-a-posteriori update: \[\hat{\Omega}_{z^{\prime},z}=\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=2}^{T} \eta_{z^{\prime},z}^{[,t]}+\left(\alpha_{z^{\prime},z}^{\Omega}-1\right)}{ \sum_{m=1}^{M}\sum_{n=1}^{N}\sum_{t=2}^{T}\sum_{z^{\ast}=1}^{|Z|}\eta_{z^{ \prime},z^{\ast}}^{[,t]}+\sum_{z^{\ast}=1}^{|Z|}\alpha_{z^{\prime},z^{\ast}}^ {\Omega}-|Z|}\] (A.54) ## Appendix B Elaboration on the Datasets We describe our derived datasets in more detail. ### A Note on the original Data In line with the Reduction strategy of the 3Rs [46] we reuse existing data available through MLC at MRC Harwell. To this end, an understanding of the raw data helps in our discussion to follow. #### b.1.1 Husbandry The data pertains to male and female mice from several strains (although we focus on male mice from the C57BL/6NTac strain). Mice of the same sex and strain are housed in groups of three as a unique cage throughout their lifetime. To reduce the possibility of impacting social behaviour [13], the mice have no distinguishing external visual markings: instead, they were microchipped with unique RFID tags placed in the lower part of their abdomen. All recordings happen in the group's own home-cage, thus minimising disruption to their life-cycle. Apart from the mice, the cage contains a food and drink hopper, bedding and a movable tunnel. For each cage (group of three mice), three to four day continuous recordings are performed when the mice are 3-months, 7-months, 1-year and 18-months old. During monitoring, the mice are kept on a standard 12-hour light/dark cycle with lights-on at 07:00 and lights-off at 19:00. #### b.1.2 Data Modalities The recordings come in two modalities: _video_ and _position_. The recordings are split into 30-minute _segments_ to be more manageable, with RFID and video synchronised accordingly. Experiments are thus identified uniquely by the cage-id to which they pertain, the age group at which they are recorded and the segment number. VideoAn infra-red camera captures video at 25 frames per second from a side-mounted viewpoint and stores it as compressed \(1280\times 720\) greyscale (single-channel) MP4 files. Understandably, the hopper itself is opaque and this impacts the lighting (and ability to resolve objects) in the lower right quadrant. As regards cage elements, the hopper itself is static, and the mice can feed either from the left or right entry-points. The water-spout is on the left of the hopper towards the back of the cage from the provided viewpoint. The bedding itself consists of shavings and is highly dynamic, with the mice occasionally burrowing underneath it. 
Similarly, the cardboard tunnel roll can be moved around or cheved and varies in appearance throughout recordings. Position:The mice are uniquely identified through an RFID implant. Mice within the same cage are sorted in ascending order by their identifier and denoted Red/Green and Blue for visualisation and reference purposes. The baseplate contains 18 receivers, arranged in a \(3\times 6\) grid. The antennas are successively scanned in numerical order to test for the presence of a mouse: the baseplate does on average 2.5 full-scans per-second. ### IMADGE We aimed to provide general tools for analysing mouse behaviour in group settings. The IMADGE is our curated selection of data, including automatically-generated localisation and behaviour labels for the mice in the cage to allow us to answer the proposed research questions. The dataset also forms the basis for the ABODe dataset (Appendix B.3). #### b.2.1 Data Selection Demographics:We use data exclusively from the male C57BL/6NTac at the 3-month and 1-year age groups. This choice was motivated by the goal of having as much data as possible. The most prevalent single group in the MLC at MRC Harwelldataset was the male C57BL/6NTac recorded at 1-year of age. Contingent on the above choice, a related demographic was sought to provide some variability (e.g. for testing our anomaly detection schemes). On the advice of the biologists at MLC at MRC Harwell, and in an effort to minimise statistical shift for the algorithms to work on (e.g. same fur colour), we picked the same male C57BL/6NTac strain, but at the 3-month old time point. After discarding some recordings with non-standard setups, we ended up with 15 cages from the Adult (1-year) and 10 from the Young (3-month) age groups. Particularly, nine of the cages exist in both subsets and thus are useful for comparing behaviour dynamics longitudinally. Choice of SegmentsThe mice under study (C57BL/6NTac) are crepuscular, meaning that they are most active during dawn/dusk: i.e. the periods at which light turns to dark or _vice versa_. This is particularly relevant, because changes in the onset/offset of activity around these times can be very good early predictors of e.g. neurodegenerative conditions [50]. Consequently, we selected segments that overlap to _any_ extent with the morning (06:00-08:00) and evening (18:00-20:00) periods, resulting in generally 2\(\%\) hour recording runs. This gave us 6 runs per-cage. #### b.2.2 Derived Data Apart from the existing video and (RFID) position data, IMADGE exposes additional information: the localisation of each mouse in the video, an indication of whether it is observable or not, and a classification score for its behaviour. Common Frame of Reference:Using pre-recorded calibration videos available with the raw data, we annotated fixed points on the base-plate and optimised a similarity transform to map videos from different cages into the same coordinate system. The minimal rotation component and the lack of a shear component were particularly relevant for modelling BBoxes around mice, as it allowed us to retain axis-aligned BBoxes which are required for most deep-learning models. Granularity:The segments from each cage are grouped into recording periods around a light-to-dark or dark-to-light transition, which we refer to as a _Run_. The basic unit of processing is the BTI which is one-second in duration (25 video frames). 
This was chosen to balance expressivity of the behaviours (reducing the probability that a BTI spans Figure B.1: Example Video frames from our data, showing (a) the raw frame and (b) and enhanced visual using CLAHE [49]. In (b) the hopper is marked in yellow and the water spout in purple, while the (RFID) mouse positions are projected into image space and overlaid as red, green and blue dots. multiple behaviours) against imposing an excessive effort in annotation (as used in ABODe, Appendix B.3.1, for training behaviour classifiers). Mouse Position:The RFID-based mouse position per-BTI is summarised in two fields: the mode of the pickups within the BTI and the absolute number of antenna cross-overs. The BBoxes for each mouse are generated per-frame using the TIM [36], running on each segment in turn. The per-BTI BBox is obtained by averaging the top-left/bottom-right coordinates throughout the BTI. Mouse Behaviour:The main modality of IMADGE is the per-mouse behaviour, obtained automatically by our ALM. The observability of each mouse in each BTI is first determined: behaviour classification is then carried out on samples deemed Observable. The behaviour is according to one of seven labels: Immobile, Feeding, Drinking, Self-Grooming, Allo-Grooming, Locomotion and Other. Behaviours are mutually exclusive within the BTI, but we retain the full probability score over all labels rather than a single class label. ### ABODe Our analysis pipeline required a mouse behaviour dataset that can be used to train models to automatically classify behaviours of interest, thus allowing us to scale behaviour analysis to larger datasets. Our answer to this need is the ABODe, based on a subset of recordings from the adult subset of IMADGE. #### b.3.1 Behaviour Schema The development of the behaviour schema was a well thought-out process, involving feedback from the biologists at MLC at MRC Harwelland our own experience in annotation processes. Modality:To simplify our classification and analysis, the behaviour of each mouse is defined at regular BTIs. Moreover, we enforce that each BTI for each mouse is characterised by exactly one behaviour: this implies both exhaustibility and mutual exclusivity of behaviours. The BTIs are one-second in length. Identification of the mice is through the position information (RFID). This was a conscious decision (rather than using the BBox localisation from an automated method, e.g. TIM [36]) as it decouples the behaviour annotations from the performance of upstream components. Behaviours:The schema admits nine behaviours and three other labels, as shown in Tab. B.1. In particular, labels Hidden, Unidentifiable, Tentative and Other ensure that the annotator can specify a label in every instance, and clarify the source of any ambiguity. Observability:The Hidden label, while treated as mutually exclusive with respect to the other behaviours for the purpose of annotation, actually represents a hierarchical label space. Technically, Hidden mice are doing any of the other behaviours, but we cannot tell which -- any subsequent modelling might benefit from treating these differently. We thus sought to further specify the observability of the mice as a label-space in its own right as shown in Tab. B.2. #### b.3.2 Annotation Process The annotations were carried out using the BORIS software [51]: this was chosen for its versatility, familiarity to the MLC at MRC Harwellteam and open-source implementation. 
Recruitment:Given the time constraints of the project, we were only able to recruit a single expert animal care technician (henceforth the _phenotype_) to do our annotations. This limited the scale of our dataset, and possibly quality (no multi-annotator agreement) but also simplified the data curation process and ensures consistency. To mitigate the shortcomings, we: (a) carried out a short training phase for the phenotype with a set of clips that were simultaneously annotated by the phenotype and ourselves, (b) designed some automated sanity checks to be run on annotations (see below), and, (c) re-annotated the observability labels ourselves. Modality:Although behaviour is defined per BTI, annotating in this manner is not efficient for humans: instead, the annotator was tasked with specifying intervals of the specific behaviour, defined by the start and end-point respectively. Similarly, the length of the clip was limited to two-minute snippets: these are long enough that they encompass several behaviours but are more manageable than the 30-minute snippets, and also provide more variability (as it allows us to sample from more cages). Quality Control:To train the phenotype, we provided a batch of four (manually chosen) snippets, which were also annotated by ourselves -- this enabled the phenotype to be inducted into using BORIS and in navigating the annotation schema, providing feedback as required. Following the annotation of each production batch, we also ran the labellings through a set of automated checks which guarded against some common errors. These were reported back to the phenotype, although they had very limited time to act on the feedback which impacted on the resulting data quality. Re-Annotating Observability:The main data quality issue related to the misinterpretation of Hidden by the phenotype, leading to over-use of the Hidden label. To rectify this, we undertook to re-visit the samples labelled as Hidden and clarify the observability as per the schema in Tab. B.2. Samples which the phenotype had labelled as anything other than Hidden (except for Unidentifiable samples which were ignored as ambiguous) were retained as Observable-- we have no reason to believe that the phenotype reported a behaviour when it should have been Hidden. The only exception was when there was a clear misidentification of the mice, which was rectified (we had access to the entire segment which provided longer-term identity cues for ambiguous conditions). Note that our annotation relates to the observability (or otherwise): however, when converting a previously Hidden sample to Observable, we provided a "_best-guess_" annotation of behaviour. These best-guess annotations are clearly marked, allowing us to defer to the superior expertise of the phenotype in differentiating between actual behaviours where this is critical (e.g. for training models).
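The automated checks run on each production batch are not enumerated in the text; as an illustration of the kind of rules involved, the toy sketch below operates on per-mouse behaviour intervals exported from BORIS (interval bounds in seconds). The specific thresholds and rules are our own placeholders, not the checks actually used.

```python
def check_annotations(intervals, clip_len=120.0, tol=0.5):
    """Toy sanity checks on per-mouse behaviour intervals from a BORIS export.

    intervals : dict mouse_id -> list of (start_s, stop_s, label) tuples.
    Returns a list of human-readable problems (an empty list means clean).
    """
    problems = []
    for mouse, ivs in intervals.items():
        ivs = sorted(ivs)
        for (s, e, lab) in ivs:
            if e <= s:
                problems.append(f"{mouse}: empty/negative interval for '{lab}'")
        for (s1, e1, l1), (s2, e2, l2) in zip(ivs, ivs[1:]):
            if s2 < e1 - 1e-6:                       # behaviours must be mutually exclusive
                problems.append(f"{mouse}: '{l1}' overlaps '{l2}' at {s2:.1f}s")
            elif s2 - e1 > tol:                      # every BTI needs some label
                problems.append(f"{mouse}: gap of {s2 - e1:.1f}s after '{l1}'")
        covered = sum(e - s for s, e, _ in ivs)
        if clip_len - covered > tol:
            problems.append(f"{mouse}: only {covered:.1f}s of {clip_len:.0f}s labelled")
    return problems
```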
2306.13275
Can Continual Learning Improve Long-Tailed Recognition? Toward a Unified Framework
The Long-Tailed Recognition (LTR) problem emerges in the context of learning from highly imbalanced datasets, in which the number of samples among different classes is heavily skewed. LTR methods aim to accurately learn a dataset comprising both a larger Head set and a smaller Tail set. We propose a theorem where under the assumption of strong convexity of the loss function, the weights of a learner trained on the full dataset are within an upper bound of the weights of the same learner trained strictly on the Head. Next, we assert that by treating the learning of the Head and Tail as two separate and sequential steps, Continual Learning (CL) methods can effectively update the weights of the learner to learn the Tail without forgetting the Head. First, we validate our theoretical findings with various experiments on the toy MNIST-LT dataset. We then evaluate the efficacy of several CL strategies on multiple imbalanced variations of two standard LTR benchmarks (CIFAR100-LT and CIFAR10-LT), and show that standard CL methods achieve strong performance gains in comparison to baselines and approach solutions that have been tailor-made for LTR. We also assess the applicability of CL techniques on real-world data by exploring CL on the naturally imbalanced Caltech256 dataset and demonstrate its superiority over state-of-the-art classifiers. Our work not only unifies LTR and CL but also paves the way for leveraging advances in CL methods to tackle the LTR challenge more effectively.
Mahdiyar Molahasani, Michael Greenspan, Ali Etemad
2023-06-23T03:05:33Z
http://arxiv.org/abs/2306.13275v1
# Can Continual Learning Improve Long-Tailed Recognition? Toward a Unified Framework ###### Abstract The Long-Tailed Recognition (LTR) problem emerges in the context of learning from highly imbalanced datasets, in which the number of samples among different classes is heavily skewed. LTR methods aim to accurately learn a dataset comprising both a larger Head set and a smaller Tail set. We propose a theorem where under the assumption of strong convexity of the loss function, the weights of a learner trained on the full dataset are within an upper bound of the weights of the same learner trained strictly on the Head. Next, we assert that by treating the learning of the Head and Tail as two separate and sequential steps, Continual Learning (CL) methods can effectively update the weights of the learner to learn the Tail without forgetting the Head. First, we validate our theoretical findings with various experiments on the toy MNIST-LT dataset. We then evaluate the efficacy of several CL strategies on multiple imbalanced variations of two standard LTR benchmarks (CIFAR100-LT and CIFAR10-LT), and show that standard CL methods achieve strong performance gains in comparison to baselines and approach solutions that have been tailor-made for LTR. We also assess the applicability of CL techniques on real-world data by exploring CL on the naturally imbalanced Caltech256 dataset and demonstrate its superiority over state-of-the-art classifiers. Our work not only unifies LTR and CL but also paves the way for leveraging advances in CL methods to tackle the LTR challenge more effectively. Continual Learning Long-Tailed Recognition Imbalanced Learning ## 1 Introduction Data in real-world scenarios often exhibits long-tailed distributions [1, 2, 3, 4], where the number of samples in some classes (Head set) is significantly larger than in other classes (Tail set). This imbalance can lead to less than optimal performance in deep learning models. This problem is known as Long-Tailed Recognition (LTR), which can be described as training a model on highly imbalanced data and attempting to achieve high accuracy on a balanced test set [3]. Given that the size of the Head set is substantially larger than the Tail set, samples from the Head generally dominate the loss and determine the gradient. Consequently, samples from the Tail are less impactful, leading to strong performance in Head classes but a significant decline in the performance of the Tail classes [5]. Numerous studies have sought to mitigate this issue by balancing training data through over-sampling the Tail classes [6, 7, 8]. Alternatively, a feature extractor can be trained using the Head set and employed for transfer learning to train the Tail classifier [9, 10, 11, 12]. As another solution, the loss or gradients have been regularized during training [13, 14, 15]. Recently, weight balancing has been proposed as a method for penalizing excessive weight growth during training, thus forcing per-class weight norms to maintain more uniform magnitudes [5]. In this paper, we present and prove a theorem stating that under the precondition of strong convexity of the loss function, the weights obtained by a learner when trained on the entire dataset are confined within an upper bound in relation to the weights achieved by the same learner when trained solely on the Head set. We derive that this upper bound is proportional to the imbalance factor of the long-tailed dataset and inversely proportional to the strong convexity parameter of the loss function. 
As a result of this theorem, we demonstrate that learning the whole dataset can be broken down into two sequential tasks, i.e., learning the Head followed by the Tail. We therefore propose that Continual Learning (CL) methods can be leveraged to update the weights to learn the second task (Tail) without experiencing forgetting of the first task (Head), which often occurs when a model is retrained. Consequently, we take an interesting step towards unifying these two frameworks (LTR and CL). We validate our theory using four datasets, MNIST-LT, CIFAR100-LT, CIFAR10-LT, and Caltech256. First, we use the toy MNIST-LT dataset and show that the actual distance between weight vectors when trained on either the Head or the entire dataset aligns closely with our theoretical predictions. Next, to further assess the efficacy of employing CL in tackling LTR, we apply a range of CL methods on the LTR problem using CIFAR100-LT and CIFAR10-LT, with varying imbalance factors. The results indicate that CL methods are indeed capable of achieving effective performances as compared to baselines and state-of-the-art LTR models. To underscore the advantages of utilizing CL for LTR, we conduct an additional experiment in which we perform classification on a naturally imbalanced dataset (Caltech256) using a prominent CL method, which outperforms previous non-CL efforts. In addition, we offer a discussion on the implications of employing CL for LTR, the limitations of our study, and its broader impact. Our contributions in this paper can be summarized as follows: * We propose a theorem where under the assumption of strong convexity of the loss function, the distance between the weights of a learner trained on the full dataset and the weights of the same learner trained strictly on the Head set, are within an upper bound, which is inversely proportional to the imbalance factor. * Building on this theorem, we propose a new perspective whereby CL solutions can be used to address the LTR problem. * To substantiate our proposed method, we conduct a series of comprehensive experiments that demonstrate the effectiveness of CL techniques in tackling LTR. The results showcase that using standard CL solutions, strong performance gains are achieved on long-tailed scenarios. ## 2 Related Work **Long-Tailed Recognition.** Real-world datasets often exhibit imbalanced distributions, with some classes appearing more frequently than others. Training a model on such imbalanced data can result in poor performance on the rare classes. LTR addresses this issue by enabling models to perform well on both Head and Tail classes [13]. LTR approaches can be broadly categorized into three primary groups: data distribution re-balancing, class-balanced losses, and transfer learning from Head to Tail [16]. Data distribution re-balancing techniques include over-sampling the Tail [6; 17], under-sampling the Head [18], and class-balanced sampling [19; 20]. Class-balanced loss approaches modify the loss function to treat each sample differently, e.g., including class distribution-based loss [13; 14; 21], focal loss [22], and Bayesian uncertainty [23]. Finally, transfer learning techniques leverage features learned from the Head to improve learning on the Tail [24; 9]. Although numerous prior works have addressed LTR, few provide a mathematical analysis of the training process using imbalanced data [25; 26]. These works demonstrate that the Head is learned more quickly than the Tail, primarily focusing on the training dynamics. 
In contrast, our theoretical analysis studies the convergence point of training within the LTR framework. **Continual Learning.** CL addresses the challenge of adapting a deep learning model to new tasks (e.g., new classes or distributions) while maintaining performance on the previously learned tasks. The main challenge to address by CL methods is the mitigation of catastrophic forgetting, i.e., forgetting the previous tasks as the new tasks are learned. CL methods are typically grouped into three categories: expansion-based, regularization-based, and memory-based approaches. Expansion-based CL methods utilize a distinct subset of parameters for learning each task [27; 28; 29]. Regularization-based techniques penalize significant changes in crucial network parameters (relative to previous tasks) by incorporating a regularization term in the loss function [30; 31; 32; 33; 34]. Memory-based approaches employ a replay memory to store a limited number of samples from previous tasks, which are then used in future training to minimize forgetting [35; 36; 37]. ## 3 Proposed Method ### Approach Let us assume an LTR problem and a learner, denoted as \(\theta\). Initially, the learner is trained on a highly imbalanced dataset \(\mathcal{D}\), as shown in Fig. 1, where \(\theta_{i}\) is the initialized model in the weight space. Owing to the larger number of Head samples in each iteration, they dominate the evolution of the gradients, resulting in a learner that performs significantly better on the Head set than on the Tail set at the end of training. This process leads the parameters to converge to \(\theta^{*}\). To mitigate this issue, we propose to reformulate the LTR problem as a sequential problem consisting of two tasks: learning the Head and Tail classes separately. Given that the learner already demonstrates strong performance on the Head set, it primarily needs to focus on learning the second task (Tail set). We propose a theorem showing that under a strongly convex loss function, \(\theta^{*}\) lies within a bounded radius \(r\) of the learner's weights \(\theta^{*}_{H}\) when trained exclusively on the Head set \(\mathcal{D}_{H}\), where \(r\) is proportional to the strong convexity of the loss function and inversely proportional to the imbalance factor. \(\psi_{H}\) represents an area within the weight space where the network performs well on the Head set. However, once the learner attempts to learn these two tasks sequentially, it will encounter another problem known as catastrophic forgetting. Catastrophic forgetting occurs when a deep learning model is trained to perform a new task, but forgets the previous one [38]. Training initially on the Head set followed by training on the Tail set results in \(\theta^{*}_{T}\), which exhibits catastrophic forgetting. The ideal weights \(\theta^{*}_{HT}\) for learning both Head and Tail sets lie in the intersection of \(\psi_{H}\) and \(\psi_{T}\), denoted by \(\psi_{HT}\). To prevent catastrophic forgetting of the first task (Head set) while learning the second task (Tail set), CL techniques can be employed, allowing the model to learn the Tail set without compromising its performance on the Head set. 
By re-framing LTR as two sequential tasks (learning Head set \(\mathcal{D}_{H}\) followed by Tail set \(\mathcal{D}_{T}\)), we can utilize CL to learn the second task (updating the weights towards \(\psi_{T}\) with CL(\(\mathcal{D}_{T}\)) without forgetting the first task (staying in \(\psi_{H}\)), ultimately performing well on both Head and Tail sets (ending up in \(\psi_{HT}\)). ### Problem Formulation LTR aims to address the challenge of learning from highly imbalanced data. This occurs when the training data \(\mathcal{D}\) contains more samples in some classes (the Head set) and fewer in others (the Tail set). Let \(\mathcal{D}_{H}\) and \(\mathcal{D}_{T}\) represent the subsets of \(\mathcal{D}\) corresponding to the Head set and Tail set, respectively. The imbalance factor _IF_ quantifies the severity of this issue in a dataset: \[IF=\frac{|\mathcal{D}_{c^{\max}}|}{|\mathcal{D}_{c^{\min}}|}, \tag{1}\] where \(c\) represents the class index, \(|\mathcal{D}_{c}|\) denotes the cardinality of each class, \(c^{\max}=\arg\max\ |\mathcal{D}_{c}|\), and \(c^{\min}=\arg\min\ |\mathcal{D}_{c}|\), such that \(\mathcal{D}_{c^{\max}}\in\mathcal{D}_{H}\) and \(\mathcal{D}_{c^{\min}}\in\mathcal{D}_{T}\). **Definition 1**: A dataset is deemed _long-tailed_ when \(|\mathcal{D}_{c^{\max}}|\gg|\mathcal{D}_{c^{\min}}|\) or, in other words, \(IF\,\gg\,1\). When a model is trained on such a dataset and its performance is assessed on a uniformly distributed test set (i.e. \(|\mathcal{D}_{c}|=k\) for each class \(\mathcal{D}_{c}\) within the test set), the problem is referred to as _Long-Tailed Recognition_. ### Training on Long-tailed Distribution In this section, we derive the conditions in which CL can be applied to a long-tailed scenario. **Lemma 1**: If \(|f(x)-g(x)|\leq\delta\) and both \(f(x)\) and \(g(x)\) are strongly convex then: \[\|x_{g}-x_{f}\|^{2}\leq\frac{4\delta}{\mu_{f}+\mu_{g}}, \tag{2}\] where \(x_{g}\) and \(x_{f}\) are \(\arg\min f(x)\) and \(\arg\min g(x)\), respectively. The proof of this lemma is presented in Appendix A.1. **Theorem 1:** Assume that a logistic regression model with parameters \(\theta\) is trained using regularized cross-entropy loss in an LTR setting. Then, \(\left\|\theta^{*}-\theta^{*}_{H}\right\|^{2}\leq\frac{4\delta}{\mu_{H}+\mu_{g}}\), where \(\theta^{*}\) represents the parameter vector obtained after training, Figure 1: An overview of learning under the LTR scenario and our proposed algorithm is presented. Detailed description provided in text. denotes the parameter vector when the model is trained solely on the Head set, \(\delta\) is the maximum difference between the loss of the learner using the entire dataset or the Head set for any value of \(\theta\), and \(\mu_{H}\) and \(\mu\) are the strong convexity parameters of the loss computed on either the Head set or the entire dataset. **Proof:** The model is trained on the entire dataset \(\mathcal{D}\) by minimizing the loss function \(\mathcal{L}\): \[\mathcal{L}(\mathcal{D})=\frac{1}{|\mathcal{D}|}\left(\sum_{i=1}^{|\mathcal{D }_{H}|}\ell(\mathcal{D}_{H}^{i})+\sum_{i=1}^{|\mathcal{D}_{T}|}\ell(\mathcal{ D}_{T}^{i})\right), \tag{3}\] where \(\ell(\mathcal{D}^{i})\) is the loss of each individual sample. 
By substituting \(\mathcal{L}(\mathcal{D}_{H})=\frac{1}{|\mathcal{D}H|}\sum_{i=1}^{|\mathcal{D} _{H}|}\ell(\mathcal{D}_{H}^{i})\) and \(\mathcal{L}(\mathcal{D}_{T})=\frac{1}{|\mathcal{D}T|}\sum_{i=1}^{|\mathcal{D} _{T}|}\ell(\mathcal{D}_{T}^{i})\) : \[\mathcal{L}(\mathcal{D})=\frac{|\mathcal{D}_{H}|}{|\mathcal{D}|}\mathcal{L}( \mathcal{D}_{H})+\frac{|\mathcal{D}_{T}|}{|\mathcal{D}|}\mathcal{L}(\mathcal{ D}_{T}). \tag{4}\] We define \(\gamma=\frac{IF}{1+IF}\), which falls within the range of \([0.5,1)\). We can rewrite Eq. 4 as: \[\mathcal{L}(\mathcal{D})=\gamma\mathcal{L}(\mathcal{D}_{H})+(1-\gamma) \mathcal{L}(\mathcal{D}_{T}). \tag{5}\] Since \(IF\gg 0\) in LTR, we can conclude that the value of \(\gamma\) approaches one. Consequently, \(\mathcal{L}(\mathcal{D})\) approaches \(\mathcal{L}(\mathcal{D}_{H})\) for all \(\theta\) values. Let \(\delta\) be defined as the maximum difference of the losses: \[|\mathcal{L}(\mathcal{D})-\mathcal{L}(\mathcal{D}_{H})|\leq\delta. \tag{6}\] From Eq. 5, it follows that \(\lim\limits_{IF\gg 0}\delta=0\). One of the most effective losses for the LTR problem is the regularized cross-entropy loss. This loss is the cross-entropy with an additional regularization term that prevents weights from growing excessively: \[\mathcal{L}(\mathcal{D},\theta)=-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log\left(P(f( \theta,x_{i}))\right)+\frac{\mu}{2}\|\theta\|^{2},(x_{i},y_{i})\in\mathcal{D}. \tag{7}\] This loss improves generalizability by reducing overfitting and achieves state-of-the-art performance when dealing with LTR scenarios [5]. Moreover, as our model is logistic regression, this loss is strongly convex since \(\nabla^{2}\mathcal{L}(\beta,\theta)\geq\mu\). From the definition of strong convexity [39], it therefore follows that: \[\mathcal{L}(x_{1})\geq\mathcal{L}(x_{2})+\nabla\mathcal{L}(x_{2})^{T}(x_{1}-x_ {2})+\frac{\mu_{\mathcal{L}}}{2}\|x_{1}-x_{2}\|^{2}, \tag{8}\] where \(\mu_{\mathcal{L}}\) is the strong convexity parameter. Applying Lemma 1 to Eqs. 6 and 8 yields: \[\|\theta^{*}-\theta_{H}^{*}\|^{2}\leq\frac{4\delta}{\mu_{H}+\mu}, \tag{9}\] where \(\theta^{*}\) and \(\theta_{H}^{*}\) are \(\arg\min\mathcal{L}\) and \(\arg\min\mathcal{L}_{H}\), respectively. As a result, when the model is trained on a long-tailed dataset, the network parameter \(\theta\) converges to a point close to the weights of the model when it was only trained on the Head set \(\theta_{H}\). **Remark 1:** Under a more relaxed assumption, where \(\mathcal{L}(\mathcal{D},\theta)\) is strictly (but not strongly) convex, the upper bound can be calculated using Lemma 2. **Lemma 2:** If \(|f(x)-g(x)|\leq\delta\) and both \(f(x)\) and \(g(x)\) are strictly convex then: \[\|x_{g}-x_{f}\|^{2}\leq\frac{4\delta}{\lambda_{f}+\lambda_{g}}, \tag{10}\] where \(x_{g}\) and \(x_{f}\) are \(\arg\min f(x)\) and \(\arg\min g(x)\), and \(\lambda_{f}+\lambda_{g}\) are the minimum eigenvalues of the hessian matrices of \(f(x)\) and \(g(x)\) respectively. The proof for this lemma is provided in Appendix A.2. Using this lemma, the upper bound of \(\|\theta^{*}-\theta_{H}^{*}\|^{2}\) is expressed as \(\|\theta^{*}-\theta_{H}^{*}\|^{2}<=\frac{4\delta}{\lambda_{f}+\lambda_{g}}\). To ensure that this upper bound is limited and approaches zero when \(\delta\to 0\), the minimum eigenvalues of the Hessians of both loss functions should have lower bounds, which is again another definition of strong convexity. ### CL for LTR A general CL problem can be formulated as follows [40]. 
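The bound of Eq. (9) can also be probed numerically in a few lines. The toy sketch below mirrors, in miniature and on synthetic data rather than MNIST-LT, the verification experiment of Section 4.2: a softmax regression with the regularised cross-entropy of Eq. (7) is fit once on the full imbalanced set and once on the Head only, and the squared weight distance is compared against \(4\delta/(\mu_{H}+\mu)\). Here \(\delta\) is approximated by evaluating the loss gap only at the two optima, so the printed bound is indicative rather than exact.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu = 0.5                                     # ridge coefficient = strong-convexity parameter

# Synthetic 3-class problem: classes 0 and 1 form the Head, class 2 the Tail (IF = 50).
X = np.vstack([rng.normal([ 2.0, 0.0], 1.0, (500, 2)),
               rng.normal([-2.0, 0.0], 1.0, (500, 2)),
               rng.normal([ 0.0, 3.0], 1.0, ( 10, 2))])
y = np.array([0] * 500 + [1] * 500 + [2] * 10)

def reg_ce(theta, X, y):
    """Regularised cross-entropy of Eq. (7) for a linear softmax classifier."""
    logits = X @ theta.reshape(3, 2).T
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean() + 0.5 * mu * theta @ theta

def fit(X, y):
    return minimize(reg_ce, np.zeros(6), args=(X, y), method="L-BFGS-B").x

head = y < 2
theta_full, theta_head = fit(X, y), fit(X[head], y[head])

# Point-wise surrogate for delta (Eq. 6), evaluated at the two optima only.
delta = max(abs(reg_ce(t, X, y) - reg_ce(t, X[head], y[head]))
            for t in (theta_full, theta_head))
dist2 = float(((theta_full - theta_head) ** 2).sum())
print(f"||theta*-theta*_H||^2 = {dist2:.4f}   4*delta/(mu_H+mu) = {2 * delta / mu:.4f}")
```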
A model is exposed to streams of training samples \((x_{t},y_{t})\), where \(t\) represents the time step. The set of data labels \(\mathcal{Y}_{t}=\bigcup_{i=1}^{t}y_{i}\) has been seen by the network previously, up to the current timestep \(t\). The objective at any timestep is to find a mapping \(f_{\theta}:x\to y\) that accurately maps sample \(x\) to \(\mathcal{Y}_{t}\cup y_{t+1}\), where \(y_{t+1}\) is the set of new unseen labels. We have shown in Eq. 9 that when the model is trained on highly imbalanced data, the weights \(\theta^{*}\) will be very close to those weights \(\theta_{H}^{*}\) when it is only trained on the Head set. As a result, the model can be considered as \(f_{\theta_{H}^{*}}:x\to y\) where \(\mathcal{Y}_{t}=\mathcal{D}_{H}\). The objective is to learn \(f_{\theta}:x\to y\) which could accurately predict the entire dataset \(\mathcal{D}\). Thus, if we consider the last set \(\mathcal{D}_{T}\) as \(y_{t+1}\), the objective of the LTR problem would be equivalent to the objective of CL, which is to estimate \(f_{\theta_{t}}\): \[f_{\theta_{t}}:x\to y\quad s.t.\quad y\in\mathcal{Y}_{t}\cup y_{t+1},\; \mathcal{Y}_{t}=\mathcal{D}_{H},\;y_{t+1}=\mathcal{D}_{T}. \tag{11}\] This approach unifies the two domains, so that an LTR problem can be treated as a CL problem. ## 4 Experiments and Results ### Experiment Setup **Datasets.** First, we use the **MNIST-LT**[41] toy dataset with different _IF_ values and strong convexity parameters to study the behavior of the upper bound and compliance with our theorem. Next, to evaluate the performance of CL in addressing LTR, we employ two widely used LTR datasets: **CIFAR100-LT** and **CIFAR10-LT**[13]. These datasets represent long-tailed versions of the original CIFAR100 and CIFAR10 datasets, maintaining the same number of classes while decreasing the number of samples per class using an exponential function. Finally, to highlight the benefits of using CL for LTR, we carry out additional experiments using the naturally skewed **Caltech256** dataset [42]. **Implementation Details.** We adhere to the experiment setup described in [5; 4]. We use ResNet-32 as the model [43] and directly adopt the results of comparable experiments from [5; 4] for comparisons. The LTR methods selected for comparison are state-of-the-art solutions in the area. All training was conducted using an NVIDIA RTX 3090 GPU with 24GB VRAM. The details of the implementation specifics are provided in Appendix B. **Evaluation.** For the LTR datasets (MNIST-LT, CIFAR100-LT, CIFAR10-LT), we first train the model on the long-tailed imbalanced training set and then evaluate it on the balanced test set, following the evaluation protocol of [5]. For Caltech256, we use the entire training set for training and assess the model's performance on the entire test set, retaining its original distribution. All reported values represent classification accuracy percentages. ### Results **Upper bound.** To study the upper bound under LTR settings, we first train a logistic regression model on MNIST-LT with varying _IF_ and \(\mu\) values. Initially, the model is trained using \(\mathcal{L}(\mathcal{D})\). Subsequently, the model is trained from scratch using \(\mathcal{L}(\mathcal{D}_{H})\). Finally, we calculate the distance between the acquired sets of weights (\(\|\theta^{*}-\theta_{H}^{*}\|\)). The results are illustrated in Fig. 2. As expected from Eq. 
9, increasing either the _IF_ or strong convexity (\(\mu\)) results in a reduced distance, indicating that the weights of the model trained using \(\mathcal{D}\) approach the weights when it is solely trained using \(\mathcal{D}_{H}\). To verify the upper bound in Eq. 9, we then calculate the estimated upper bound for each \(\gamma\) and \(\mu\) using Eq. 5in Appendix A.1. It is important to note that this upper bound is tighter compared to Eq. 9. We compare the upper bound with the actual distance in Fig. 3 and show that for all _IF_ and \(\mu\) values, the measured distance is lower than the theoretical upper bound. **LTR benchmarks.** To demonstrate the efficacy of CL approaches in addressing the LTR challenge, we apply three commonly used CL strategies, LwF [34], EWC [33], and GPM [30], in addition to the modified version of EWC (where we calculate the Fisher value based on the loss rather than the output of the model), on LTR benchmark datasets, CIFAR100-LT and CIFAR10-LT. The number of samples in each class decreases exponentially according to _IF_, where class 1 has the maximum number of samples and class 100 contains the least number of samples, as illustrated in Appendix C. The results are presented in Table 1 and Table 2, along with the performance of existing LTR solutions, specifically designed and implemented for this problem. Moreover, we present two baselines by training the ResNet32 encoder on the imbalanced data, with and without a class-balanced loss term. The accuracies presented in the tables represent the average per-class accuracies. We observe that CL methods indeed provide an effective solution for LTR, as predicted by our proposed theorem. We acknowledge that the CL approaches may not yield the top-performing solutions in contrast to certain existing LTR methods. However, when compared to the baselines, the CL methods still demonstrate a considerable improvement in performance. The superior performance of the select LTR methods can be credited to their tailored design for this particular benchmark, along with the likelihood that the strong convexity assumption may not hold perfectly for this experiment. Here, let's discuss three key concepts in the context of CL: catastrophic forgetting, backward transfer, and forward transfer [58]. As mentioned earlier, catastrophic forgetting occurs when the performance of a class declines after retraining. Despite the use of CL methods, which are designed to mitigate this forgetting, a certain degree of forgetting is still inevitable. Forward transfer is the improvement in performance on a new task after employing CL, which is the central aim of retraining in CL. Finally, backward transfer is a beneficial side-effect where retraining on new samples can actually enhance the model's performance on the previous tasks. Now, let's discuss Fig. 4, which presents the difference in per-class accuracy of the best CL method (GPM) versus the baseline network. The analysis is based on CIFAR100-LT with an _IF_ of 100. The figure is divided into three regions corresponding to the scenarios discussed above: catastrophic forgetting (red), backward transfer (blue), and forward transfer (green). The red region in the figure represents classes that undergo catastrophic forgetting, while the green region represents the Tail samples (with a class index larger than 60), which demonstrate improved performance, or forward transfer. 
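The bookkeeping behind this analysis is simple to reproduce; a minimal sketch that splits per-class accuracy differences into the three regions discussed above is given below, using the convention from the text that the first 60 CIFAR100-LT classes form the Head (the example inputs are random stand-ins, not the measured accuracies).

```python
import numpy as np

def transfer_analysis(acc_baseline, acc_cl, head_mask):
    """Split per-class accuracy differences (CL model minus baseline, cf. Fig. 4)
    into catastrophic forgetting, backward transfer and forward transfer."""
    delta = np.asarray(acc_cl, dtype=float) - np.asarray(acc_baseline, dtype=float)
    head = np.asarray(head_mask, dtype=bool)
    return {
        "catastrophic_forgetting": np.where(head & (delta < 0))[0],   # Head classes that degrade
        "backward_transfer":       np.where(head & (delta > 0))[0],   # Head classes that improve
        "forward_transfer":        np.where(~head & (delta > 0))[0],  # Tail classes that improve
        "mean_head_delta": float(delta[head].mean()),
        "mean_tail_delta": float(delta[~head].mean()),
    }

# CIFAR100-LT convention from the text: classes with index below 60 form the Head.
regions = transfer_analysis(np.random.rand(100), np.random.rand(100),
                            np.arange(100) < 60)
```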
We observe that using GPM as a CL solution for LTR results in very effective improvements the per-class accuracy of the Tail (forward transfer). Interestingly, despite the absence of Head data in the retraining process, 40 out of 60 Head classes see some level of improvement after the model is exposed to the Tail samples (backward transfer). This result emphasizes the remarkable potential of CL methods in enhancing the performance on both new and previous tasks. Next, rather than employing the baseline for computing per-class accuracy differences, we compare the CL method, GPM, with an LTR model, WD, that exhibits similar overall accuracy. The outcomes are depicted in Fig. 5(a). In this figure, the red bars denote classes where WD outperforms GPM, whereas the blue bars indicate the classes where GPM excels. We observe that GPM performs generally better on the Tail, whereas WD outperforms in Head. On average, WD's accuracy on Head classes is 4.5% higher, while GPM achieves a 9.5% higher accuracy on Tail samples. Here, we analyze the difference in per-class accuracies of GPM, Modified EWC (which exhibits similar but slightly better performance than EWC), and LwF with respect to each other, and present the results in Figs. 5(b), (c), and (d). Among these three CL methods, GPM demonstrates the best results on the Tail, particularly in classes 60 to 80. LwF performs better when data is extremely limited (classes 90 to 100). The best method for Head classes is Modified EWC (outperforming GPM in 40 out of 60 Head classes), as a result of both minimizing instances of catastrophic forgetting and promoting backward transfer. These comparisons highlight that each CL method exhibits distinct behaviors when applied to the LTR problem. An interesting phenomenon observed when training models on highly imbalanced data is the presence of artificially large weights in neurons corresponding to the Head classes [5]. The LTR solution, WD, addresses this problem by penalizing weight growth using weight decay. Figure 4: The difference in per-class accuracy of GPM and the baseline model. Figure 5: The difference in per-class accuracy of (a) GPM and WD, (b) GPM and LwF, (c) LwF and Modified EWC, and (d) GPM and Modified EWC. One way to assess the network's ability to handle LTR is by analyzing the bias in per-class weight norms. To this end, we present the per-class weight norms of the Baseline, WD, and GPM models in Fig. 6. The figure reveals a significant imbalance in the weight norms of the Baseline model, which is naively trained on the imbalanced dataset. In contrast, the WD and GPM models exhibit more uniform weight norms across different classes. Interestingly, although GPM starts with the heavily imbalanced weights of the Baseline model, it converges towards a more uniform weight distribution without any explicit penalty on weight growth. Unlike many other CL methods that restrict the plasticity of crucial weights, GPM only constrains the direction of the weight update in the weight space, enabling the model to converge to a more balanced weight distribution. This further demonstrates the effectiveness of CL in addressing LTR. **Real World data.** In LTR benchmarks, datasets are modified to exhibit a skewed distribution of samples among various classes. However, such imbalanced class distributions are naturally observed in real-world data as well. 
To evaluate the efficacy of CL techniques on non-LTR benchmark datasets, we utilize the Caltech256 dataset [42], which consists of 256 distinct classes representing everyday objects. The largest class comprises 827 samples, while the smallest class contains only 80 samples, exhibiting an _IF_ of over 10. Here, we employ the CL solution, Modified EWC, and compare its performance to state-of-the-art methods on this dataset for object classification. The results are presented in Table 3. We observe that CL outperforms the state-of-the-art on this dataset, demonstrating the strong potential of using CL in dealing with long-tailed real-world datasets. **Limitations.** Strong convexity is a key assumption in our theorem, which determines an upper bound for the distance between the weights of a learner trained on the full dataset and the weights of the same learner trained solely on the Head. This assumption offers a solid theoretical foundation for our method, showcasing the feasibility of using CL techniques to address the LTR problem. However, as many deep learning models in practice employ non-convex loss functions that potentially limit the theorem's applicability to specific cases, it is crucial to highlight that our experimental results are not strictly dependent on the strong convexity condition. In fact, our method exhibits impressive performance even under more relaxed conditions, indicating its robustness and adaptability. **Broader Impact.** Dealing with imbalanced data is of paramount importance in ensuring fairness and reducing bias in AI applications, particularly in cases where the underrepresented classes correspond to minority groups. The long-tailed distribution of real-world data poses a significant challenge in achieving equitable performance for both common and rare cases. This paper's proposed algorithm, which addresses the LTR problem through the lens of CL, holds great potential in mitigating the adverse effects of class imbalance on model performance. By effectively learning from both the Head and the Tail, the proposed method can enhance the performance on underrepresented classes, leading to more fair and accurate AI models across various domains. ## 5 Conclusion and Future Work We presented a novel perspective on addressing the LTR problem by drawing connections with CL. We have provided a theoretical foundation to support this connection by analyzing the convergence behavior of models trained on LTR datasets and establishing an upper bound for the distance between the weights obtained from training on the entire dataset and those trained only on the Head. Our experimental results on benchmark datasets like MNIST-LT, CIFAR100-LT, and CIFAR10-LT verify our theoretical findings and demonstrate the effectiveness of our approach in achieving effective performances as compared to baselines and state-of-the-art LTR solutions. We also showcase the applicability of CL techniques to real-world data by employing CL on the naturally imbalanced Caltech256 dataset and comparing its performance to existing methods. Future research directions include exploring other CL strategies that can further improve the performance on LTR models, investigating the impact of varying the degree of imbalance in the dataset, and extending our approach to more complex and diverse real-world scenarios. 
Moreover, the quantification of the impact of the strong convexity assumption can be explored. Ultimately, our findings could help researchers design more robust and scalable solutions for learning from highly imbalanced data, enabling more accurate predictions and generalizations across a wide range of applications. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Backbone Architecture} \\ & Inception V4 & ResNet 101 \\ \hline \(L^{2}-FE\)[62] & 84.1\% & 85.3\% \\ \(L^{2}\)[62] & 85.8\% & 87.2\% \\ \(L^{2}-SP\)[62] & 85.3\% & 87.2\% \\ DELTA [62] & 86.8\% & 88.7\% \\ TransTailor [63] & - & 87.3\% \\ \hline Continual Learning & 87.56\% & 88.9\% \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of CL compared with SOTA models. Figure 6: Per-class weight norms of the baseline, GPM, and WD. **Acknowledgements.** We would like to thank Geotab Inc., the City of Kingston, and NSERC for their support of this work.
2304.12712
Magnetism-induced band-edge shift as mechanism for magnetoconductance in CrPS$_4$ transistors
Transistors realized on 2D antiferromagnetic semiconductor CrPS$_4$ exhibit large magnetoconductance, due to magnetic-field-induced changes in magnetic state. The microscopic mechanism coupling conductance and magnetic state is not understood. We identify it by analyzing the evolution of the parameters determining the transistor behavior -- carrier mobility and threshold voltage -- with temperature and magnetic field. For temperatures T near the N\'eel temperature $T_N$, the magnetoconductance originates from a mobility increase due to the applied magnetic field that reduces spin fluctuation induced disorder. For $T << T_N$, instead, what changes is the threshold voltage, so that increasing the field at fixed gate voltage increases the density of accumulated electrons. The phenomenon is explained by a conduction band-edge shift correctly predicted by \emph{ab-initio} calculations. Our results demonstrate that the bandstructure of CrPS$_4$ depends on its magnetic state and reveal a mechanism for magnetoconductance that had not been identified earlier.
Fan Wu, Marco Gibertini, Kenji Watanabe, Takashi Taniguchi, Ignacio Gutiérrez-Lezama, Nicolas Ubrig, Alberto F. Morpurgo
2023-04-25T10:50:26Z
http://arxiv.org/abs/2304.12712v2
# Magnetism-induced band-edge shift as mechanism for magnetoconductance in CrPS\({}_{4}\) transistors ###### Abstract Transistors realized on multilayers of 2D antiferromagnetic semiconductor CrPS\({}_{4}\) exhibit large, gate-tunable low-temperature magnetoconductance, due to changes in magnetic state induced by the applied magnetic field. The microscopic mechanism coupling the conductance to the magnetic state is however not understood. We identify this mechanism by analyzing the evolution with temperature and magnetic field of the parameters determining the transistor behavior, the carrier mobility and threshold voltage. We find that for temperatures \(T\) close to the _Neel_ temperature \(T_{\rm N}\), the magnetoconductance originates from the increase in mobility due to cooling or the applied magnetic field, which reduce disorder originating from spin fluctuations. For \(T<<T_{\rm N}\), the mechanism is entirely different: the mobility is field and temperature independent, and what changes is the threshold voltage, so that increasing the field at fixed gate voltage increases the density of accumulated electrons. The change in threshold voltage is due to a shift in the conduction band-edge as confirmed by _ab-initio_ calculations that capture the magnitude of the effect. Our results demonstrate that the bandstructure of CrPS\({}_{4}\) depends on its magnetic state and reveal a mechanism for magnetoconductance in transistors that had not been identified earlier and that is of general validity for magnetic semiconductors. + Footnote †: preprint: APS/123-QED Many fascinating phenomena observed in 2D magnetic materials arise from the interplay between the magnetic state of the material and processes characteristic of semiconductor physics [1; 2; 3]. Examples include giant magnetoconductance in tunnel barriers [4; 5; 6; 7], the electrostatic control of magnetic phase boundaries [8; 9; 10; 11], or the dependence of the wavelength of emitted light on the magnetic state [12]. More predictions have been made and remain to be validated, such as the realization of gate-tunable half-metals [13; 14; 15], or the possibility of engineering skyrmion-like textures to control electron dynamics in magnetic moire stacks [16; 17; 18]. Assessing the validity of these predictions and understanding in detail the phenomena already observed is however difficult, because most 2D magnetic semiconductors studied so far have extremely narrow electronic bandwidth [19; 20]. These materials are therefore expected to behave differently from conventional semiconductors, because in narrow bandwidth systems electron-electron interactions and disorder typically play a dominant role. As a result, it is not clear a priori whether conventional band theory can be used to describe the properties of 2D magnetic semiconductors as it does for common semiconducting compounds. The effect of the narrow bandwidth also has a pronounced impact on the transport properties, which is why the realization of field-effect transistors enabling the controlled and systematic investigation of transport as a function of electron density has proven difficult on 2D magnetic semiconductors [4; 5; 6; 7; 21; 22]. Only recently, first materials have been identified with a bandwidth of 1 eV or larger [23; 24; 25; 26], allowing transistors to operate well below the magnetic critical temperature and enabling systematic studies of magnetotransport [27; 28; 29]. 
For two of these compounds (CrSBr [27] and NiI\({}_{2}\)[28]) only a rather small magnetoconductance was observed. In transistors based on multilayers of _van der Waals_ antiferromagnetic semiconductor CrPS\({}_{4}\) (see Fig. 1a,b) -the most recently reported compound- the magnetoconductance was instead found to be very large (reaching up to \(\approx~{}10000\) %) and strongly dependent on the gate voltage (as we reproduce in Fig. 1c,d for convenience) [29]. It was clearly established that the measured magnetoconductance is determined by how the magnetic state of CrPS\({}_{4}\) evolves when a magnetic field is applied, but its precise microscopic origin could not be identified. At this stage, it is not even clear whether the observed change in magnetoconductance can be attributed to a change in bandstructure as the magnetic state of the material evolves. Here, we analyze the transport properties of transistors realized on the layered antiferromagnetic semiconduc tor CrPS\({}_{4}\), and demonstrate that the large, gate-tunable magnetoconductance observed at low temperature originates from the dependence of the bandstructure on the magnetic state. More specifically, we find that the dominant mechanism responsible for the observed magnetoconductance at temperature \(T\simeq T_{N}\) and at \(T\ll T_{N}\) is different. In the vicinity of \(T_{\rm N}\), transport is dominated by the effect of spin-fluctuations that limit the mobility of charge carriers, as it commonly happens in magnetic conductors. For \(T\ll T_{\rm N}\), instead, the mobility does not depend on temperature or magnetic field. In this regime, the large observed magnetoconductance originates entirely from the shift of the conduction band-edge to lower energy, which results -at fixed applied gate voltage- in an increase in the density of accumulated electrons. We show that such an effect is expected from _ab-initio_ calculations, which predict the magnitude of the band-edge shift between the antiferromagnetic state at \(H=0\) and the ferromagnetic state at high field to be comparable to the value that we estimate from experimental data. These conclusions demonstrate that the electronic band-structure of CrPS\({}_{4}\) does depend on the magnetic state, and that this dependence results in a microscopic mechanism that can generate very large, gate-tunable magnetoconductance in magnetic semiconductors when the Fermi level is sufficiently close to the conduction band-edge. The overall experimental phenomenology of the magnetotransport response of CrPS\({}_{4}\) transistors has been very recently reported in Ref. [29]. That work focuses on presenting all key experimental observations - particularly interesting, as they differ from those previously reported for CrSBr and NiI\({}_{2}\) transistors- without a detailed analysis of the physical mechanisms responsible for the observed magnetotransport. Here we focus on the analysis of the transistor response to identify these mechanisms, using the same device configurations and fabrications as in Ref. [29], to which we refer the reader for further experimental details. We gain new insight by analyzing systematically the evolution with temperature and applied magnetic field of the parameters that determine the transistor operation. The square conductance of a field-effect transistor \[G_{\square}=\mu\cdot C\cdot(V_{\rm G}-V_{\rm TH}). 
\tag{1}\] is a function of the gate voltage, \(V_{\rm G}\), and depends on three parameters, each carrying information about different microscopic properties [33]. The capacitance to the gate electrode \(C\) is determined by the in-series connection of the geometrical capacitance \(C_{\rm G}\) and of the quantum capacitance \(C_{\rm Q}\) (\(1/C=1/C_{\rm G}+1/C_{\rm Q}\)), with \(C_{\rm Q}\) proportional to the density of states. Measuring \(C\) may therefore allow the density of states in the different magnetic phases of the material to be determined. In the devices considered here, however, the geometrical capacitance is small and the effect of the quantum capacitance is negligible (_i.e._, \(C=C_{\rm G}\)). The threshold voltage \(V_{\rm TH}\) is determined by the energetic position of the conduction band-edge (\(V_{G}=V_{\rm TH}\) corresponds to having the Fermi level \(E_{F}\) aligned with the conduction band-edge) and variations in \(V_{\rm TH}\) with magnetic field can therefore signal a change in bandstructure. Finally, the carrier mobility \(\mu\) provides information about scattering processes and disorder mechanisms affecting charge carriers. As is well established for electrons in magnetic conductors, spin disorder is commonly found to be a dominant mechanism determining the mobility, with better spin alignment leading to higher mobility values. Figure 1: (a) Crystal structure of layered CrPS\({}_{4}\); the blue, yellow, and orange spheres represent Cr, S, and P atoms, respectively. CrPS\({}_{4}\) has an antiferromagnetic ground state (A-type) formed by individual layers uniformly magnetized in the out-of-plane direction (\(T_{\rm N}\approx 35\) K) [30; 31; 32]. Panels (b)-(d) summarize the key features of CrPS\({}_{4}\)-transistors reported in Ref. [29]. (b) Transfer curves (\(G_{\square}\)-vs-\(V_{\rm G}\)) of a 10 nm thick CrPS\({}_{4}\) FET device at 200 and 2 K. The inset shows the device schematics (a hBN-encapsulated CrPS\({}_{4}\) multilayer –with thickness ranging from 6 to 10 nm in different devices– is contacted by graphene stripes and the gate voltage \(V_{\rm G}\) is applied across a 285 nm SiO\({}_{2}\) insulating layer). (c) Magnetoconductance (\(\delta G=\frac{G(\mu_{0}H)-G(0~{}T)}{G(0~{}T)}\)) measured at \(T=2\) K for fixed \(V_{\rm G}\) values in the 50-100 V range, in 10 V steps. (d) The magnetoconductance at \(\mu_{0}H=10\) T and \(T=2\) K depends exponentially on \(V_{\rm G}\). Figure 2: (a) Transfer curves of a transistor based on a 10 nm CrPS\({}_{4}\) multilayer, measured at \(\mu_{0}H=0\), at \(T=2\), 40, 50, and 60 K (as indicated in legend). (b) \(T\)-dependence of the field-effect mobility \(\mu\) extracted from the transconductance (see Equation 1) at \(V_{\rm G}\) of +100 V, exhibiting a fourfold increase as \(T\) is lowered below \(T_{\rm N}\). (c) \(T\)-dependence of the threshold voltage, obtained by extrapolating the square conductance in (a) to zero. All data presented in the main text have been measured on this device. Additional data from other devices are shown in the supplementary information. To analyze the properties of CrPS\({}_{4}\) transistors, we start by looking at the transfer curves (\(G_{\square}\)-vs-\(V_{G}\); details about the device fabrication can be found in Ref [29]). Fig.
2a shows the transfer curves measured for different values of \(T\) ranging from above \(T_{N}\simeq 35\) K to 2 K for a 10 nm thick device (all data shown in the main text has been measured on this same device; data from additional devices can be found in the supplementary information). Fig. 2b shows that the mobility extracted from the transconductance (\(\mu=\frac{1}{C}\frac{\partial G_{\square}}{\partial V_{\rm G}}\)) increases by a factor of 4 as \(T\) is decreased below \(T_{N}\), and eventually saturates at low \(T\). Concomitantly, the threshold voltage (obtained by extrapolating to zero the square conductance measured as a function of \(V_{\rm G}\)) increases gradually by approximately 10 V as \(T\) is lowered from 60 K to 2 K (Fig. 2c). Both trends are easily understood in terms of established concepts. The mobility starts increasing at \(T_{\rm N}\), because ordering of the spins in the magnetic states suppresses spin fluctuations, thereby effectively decreasing the magnitude of disorder experienced by electrons. The modest increase in \(V_{\rm TH}\) upon cooling is due to the freeze out of charge carriers into the dopants where they originate from, and can be expected for CrPS\({}_{4}\) that is indeed unintentionally doped. These conclusions are fully consistent with the dependence of \(\mu\) and \(V_{\rm TH}\) on applied magnetic field \(\mu_{0}H\), for temperatures \(T\) near (above or just below) \(T_{\rm N}\). Transfer curves at two different temperatures (50 K and 30 K) are shown in Fig. 3a and 3b for different values of applied magnetic field, ranging from 0 to 10 T. For both temperatures, the magnetic field causes the conductance to increase by 5-to-10 times (see, _e.g._, \(G_{\square}\) at \(V_{\rm G}=+100\) V, in Fig. 3a and 3b), with the increase originating from the transconductance \(\frac{\partial G_{\square}}{\partial V_{\rm G}}\). Indeed, Fig. 3c shows that \(\mu\) increases by nearly the same amount as the conductance as \(\mu_{0}H\) is increased from 0 to 10 T, whereas the threshold voltage changes by less than 10 % (Fig. 3d). This is again expected, because the application of the magnetic field aligns the spins and reduces the disorder experienced by charge carriers (associated to spin fluctuations), for both \(T>T_{\rm N}\) in the paramagnetic state of CrPS\({}_{4}\), and just below \(T_{\rm N}\). Having established that magnetotransport near \(T_{\rm N}\) is determined by the influence of spin fluctuations on electron mobility, we now discuss the magnetic field dependence of the conductance for \(T\ll T_{\rm N}\), whose behavior is strikingly different. To illustrate the difference, Fig. 4a shows the transfer curves of a CrPS\({}_{4}\) transistor measured at \(T=2\) K for different applied magnetic fields, from which it is apparent that changing the magnetic field induces a large shift in \(V_{\rm TH}\). Figure 3: (a, b) Transfer curves of the device discussed in Fig. 2, for different values of \(\mu_{0}H\) between 0 and 10 T, at \(T=50\) and \(T=30\) K, respectively above and below \(T_{\rm N}\). (c) Magnetic field dependence of \(\mu\) obtained from the device transconductance at \(V_{\rm G}=+100\) V, for three different temperatures close to \(T_{\rm N}\)(30 K, 40 K, and 50 K for the red, yellow and blue symbols), showing an increase of nearly one order of magnitude. (d) Dependence of \(V_{\rm TH}\) on magnetic field at the measured at the same three temperatures, showing a variation of less than 10 % as \(\mu_{0}H\) is increased from 0 to 10 T. 
Figure 4: (a) Transfer curves of the same device whose data are presented in Fig. 2 and 3, measured at \(T=2\) K (_i.e._, \(T\ll T_{\rm N}\)) for different, fixed values of \(\mu_{0}H\) ranging from 0 to 10 T. The red dashed lines show the extrapolation to zero of the square conductance, from which we determine \(V_{\rm TH}\). (b) The same transfer curves as in (a) plotted versus \(V_{\rm G}-V_{TH}(\mu_{0}H)\) collapse on top of each other. (c) Magnetic field dependence of \(V_{\rm TH}\) (open orange symbols) and \(\mu\) (light blue symbols) extracted from panel (a): the applied magnetic field causes a substantial shift of \(V_{\rm TH}\) (nearly 30 V), leaving the mobility unchanged (identical behavior is seen in multiple devices; see supplementary information). Fig. 4b shows that when the gate voltage is shifted by the corresponding threshold voltage -_i.e._, when plotting the data as a function of \(V_{\rm G}-V_{\rm TH}(H)\)- all curves collapse. The collapse directly implies that the transconductance is independent of the applied magnetic field, so that the mobility is also magnetic field independent. It follows that over the entire range of magnetic fields explored, the dependence of the conductance on field originates from the shift of \(V_{\rm TH}\) with \(H\), as illustrated quantitatively by the plots of \(V_{\rm TH}\) and \(\mu\) versus \(\mu_{0}H\) in Fig. 4c. The threshold voltage downshifts by 30 V as the magnetic field is increased from 0 to 8 T (_i.e._, the spin-flip field of CrPS\({}_{4}\) at \(T=2\) K) and saturates past that, while the mobility remains constant at \(\mu\simeq 0.8\) cm\({}^{2}\)/Vs. Even though different devices show somewhat different mobility values (reaching up to \(\mu\simeq 6\) cm\({}^{2}\)/Vs; see supplementary information), the observed downshift in threshold voltage is robust and virtually identical in all cases (if normalized to the value of the gate capacitance). The very different nature of magnetotransport for \(T\approx T_{\rm N}\) and for \(T\ll T_{\rm N}\) originates from the distinct microscopic mechanisms that cause the conductance to depend on applied field in the two temperature regimes. At high temperature magnetoconductance occurs because the magnetic field decreases spin-induced disorder experienced by charge carriers. At low temperature, however, the same mechanism becomes inactive, because the antiferromagnetic state of CrPS\({}_{4}\) is fully developed, and spins are already ordered at \(H=0\). Finding that the magnetoconductance originates from the shift in threshold voltage indicates that what changes upon applying the magnetic field is the energetic position of the conduction band-edge (see schematic illustration in Fig. 5a and 5b). Therefore, increasing \(H\) at fixed \(V_{\rm G}\) leads to an increase in the density \(\Delta n\) of accumulated electrons (\(\Delta n=C\cdot\Delta V_{\rm TH}/e\)) contributing to transport, and results in a larger conductance. The effect is sizable, because a shift in \(V_{\rm TH}\) of 30 V corresponds to a variation in electron density in the transistor channel of \(\Delta n~{}=~{}2.2\cdot 10^{12}\) cm\({}^{-2}\). This mechanism also explains the exponential dependence of the magnetoconductance on gate voltage observed experimentally (see Fig. 1d). Such a dependence is observed when transistors operate in the sub-threshold regime, with \(V_{\rm G}\) large enough to accumulate mobile carriers, but still such that \(V_{\rm G}<V_{\rm TH}\).
A shift in \(V_{\rm TH}\) induced by the magnetic field is then effectively equivalent to changing \(V_{\rm G}\) in a regime where the conductance depends exponentially on gate voltage (the exponential dependence of the current on gate voltage is why one commonly defines the subthreshold swing \(S={\rm d}V_{\rm G}/{\rm d}(\log I)\) to characterize the sub-threshold device behavior). To further substantiate the validity of our conclusion, we performed _ab-initio_ calculations of the bandstructure of CrPS\({}_{4}\) with respectively antiferromagnetic and ferromagnetic spin configurations (corresponding to the experimental situations at \(\mu_{0}H=0\) and \(\mu_{0}H>8\) T), to determine the position of the conduction band-edge in the different magnetic phases. The results of these calculations for bilayer CrPS\({}_{4}\) are shown in Fig. 5c. The conduction band-edge in the ferromagnetic phase is indeed lower than in the antiferromagnetic one by approximately 30 meV. The effect is robust, as comparable values are obtained irrespective of multilayer thickness. These calculations also give the value of the effective mass -and hence of the density of states in the transistor channel- from which the shift in band-edge energy \(\Delta E\) can be estimated from the measured shift in accumulated charge density using the relation \(\Delta n=\frac{m^{*}}{2\pi\hbar^{2}}\cdot\Delta E\) (\(m^{*}=0.64m_{0}\) is the effective mass of electrons in CrPS\({}_{4}\), \(m_{0}\) is the free electron mass, and \(\hbar\) is the reduced Planck constant). We find \(\Delta E\simeq 15\) meV using \(\Delta n=C\cdot(V_{\rm TH}(0~{}T)-V_{\rm TH}(8~{}T))/e\), a value that compares well to the calculated one, if the precision of the calculations and the uncertainty on the value estimated from the experiments are considered. There are two important aspects of the experimental results presented here that should be retained. The first is that our results demonstrate that the bandstructure of CrPS\({}_{4}\) depends on the magnetic state of the material. This is not obvious a priori, because -as we already mentioned in the introduction- many 2D magnetic semiconductors investigated in the first generation of experiments (the Chromium trihalides, CrX\({}_{3}\) with X=Cl, Br, and I [4; 5; 6; 7; 21; 34; 35], MnPS\({}_{3}\)[36], VI\({}_{3}\)[22; 37] and more) have extremely narrow bandwidths that cause their behavior to deviate from that expected for conventional semiconductors described by band theory. Figure 5: (a, b) Schematic representation of the energy band diagram for our CrPS\({}_{4}\) transistors, illustrating the shift of the Fermi level \(E_{\rm F}\) relative to the conduction band-edge \(\mathsf{E_{C}}\) as the magnetic state changes from antiferromagnetic (AFM, left) to ferromagnetic (FM, right) ground state (the blue and red lines represent the spin-up and spin-down conduction bands). (c) First-principles calculation of the bandstructure of CrPS\({}_{4}\) bilayers in the antiferromagnetic (AFM) and ferromagnetic (FM) states. Blue and red colors represent the spin-up and spin-down states, respectively. The conduction band-edge (green and purple dashed lines) in the ferromagnetic phase is 33 meV lower than in the antiferromagnetic phase. In CrPS\({}_{4}\) the bandwidth is larger, approximately 1 eV (which is also why transistors work properly down to low temperature), and our results provide a first indication that band theory is suitable to describe the interplay between electronic and magnetic properties.
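As a quick numerical cross-check of these estimates, the sketch below evaluates \(\Delta n=C\cdot\Delta V_{\rm TH}/e\) and \(\Delta E=2\pi\hbar^{2}\Delta n/m^{*}\) using the 285 nm SiO\({}_{2}\) back gate (taking a standard relative permittivity of about 3.9, an assumption not stated explicitly in the text) and the 30 V threshold shift quoted above; it is an illustrative calculation, not the authors' analysis code.

```python
import math

e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s
m0   = 9.1093837015e-31     # free electron mass, kg

# Geometrical gate capacitance per unit area of a 285 nm SiO2 layer (eps_r ~ 3.9 assumed)
C = eps0 * 3.9 / 285e-9                       # F/m^2

# Field-induced change in accumulated electron density from the ~30 V threshold downshift
dV_th = 30.0                                  # V
dn = C * dV_th / e                            # m^-2
print(f"Delta n ~ {dn / 1e4:.2e} cm^-2")      # ~2.3e12 cm^-2, matching the value quoted in the text

# Corresponding band-edge shift for a 2D parabolic band with m* = 0.64 m0
m_eff = 0.64 * m0
dE = 2.0 * math.pi * hbar**2 * dn / m_eff     # J
print(f"Delta E ~ {dE / e * 1e3:.0f} meV")    # ~17 meV, consistent with the ~15 meV estimate above
```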
The second conclusion relates to the identified mechanism of magnetoconductance. A band shift had not been considered earlier as a possible cause of sizable magnetoconductance, likely because most magnetotransport studies on magnetic conductors are commonly performed on metallic systems, in which the effect is irrelevant because the Fermi level is located deep inside a band. For magnetic semiconductors the situation is different, because the Fermi level is commonly located near a band-edge, and that is why a band-edge shift can have such large effects. It seems clear that the mechanism identified here is not an exclusive property of CrPS\({}_{4}\) but can play a role in many other magnetic semiconductors. For example, we expect that -under appropriate doping conditions- the band shift associated to the reduction of the bandgap upon changing the magnetic state that has been reported in optical studies of CrSBr [12] (another 2D magnetic semiconductor) and of EuCd\({}_{2}\)As\({}_{2}\)[38] (a 3D magnetic semiconductor) may also manifest itself in the presence of a large magnetoconductance. The findings reported here therefore have much broader relevance than for the sole case of CrPS\({}_{4}\). The authors gratefully acknowledge Alexandre Ferreira for continuous and valuable technical support. We thank Dipankar Jana, Clement Faugeras, and Marek Potemski for fruitful discussion. AFM gratefully acknowledges the Swiss National Science Foundation and the EU Graphene Flagship project for support. MG acknowledges support from the Italian Ministry for University and Research through the Levi-Montalcini program. K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354 and 21H05233).
2305.05147
Sparse sensor reconstruction of vortex-impinged airfoil wake with machine learning
Reconstruction of unsteady vortical flow fields from limited sensor measurements is challenging. We develop machine learning methods to reconstruct flow features from sparse sensor measurements during transient vortex-airfoil wake interaction using only a limited amount of training data. The present machine learning models accurately reconstruct the aerodynamic force coefficients, pressure distributions over airfoil surface, and two-dimensional vorticity field for a variety of untrained cases. Multi-layer perceptron is used for estimating aerodynamic forces and pressure profiles over the surface, establishing a nonlinear model between the pressure sensor measurements and the output variables. A combination of multi-layer perceptron with convolutional neural network is utilized to reconstruct the vortical wake. Furthermore, the use of transfer learning and long short-term memory algorithm combined in the training models greatly improves the reconstruction of transient wakes by embedding the dynamics. The present machine-learning methods are able to estimate the transient flow features while exhibiting robustness against noisy sensor measurements. Finally, appropriate sensor locations over different time periods are assessed for accurately estimating the wakes. The present study offers insights into the dynamics of vortex-airfoil interaction and the development of data-driven flow estimation.
Yonghong Zhong, Kai Fukami, Byungjin An, Kunihiko Taira
2023-05-09T03:19:59Z
http://arxiv.org/abs/2305.05147v1
# Sparse sensor reconstruction of vortex-impinged airfoil wake with machine learning ###### Abstract Reconstruction of unsteady vortical flow fields from limited sensor measurements is challenging. We develop machine learning methods to reconstruct flow features from sparse sensor measurements during transient vortex-airfoil wake interaction using only a limited amount of training data. The present machine learning models accurately reconstruct the aerodynamic force coefficients, pressure distributions over airfoil surface, and two-dimensional vorticity field for a variety of untrained cases. Multi-layer perceptron is used for estimating aerodynamic forces and pressure profiles over the surface, establishing a nonlinear model between the pressure sensor measurements and the output variables. A combination of multi-layer perceptron with convolutional neural network is utilized to reconstruct the vortical wake. Furthermore, the use of transfer learning and long short-term memory algorithm combined in the training models greatly improves the reconstruction of transient wakes by embedding the dynamics. The present machine-learning methods are able to estimate the transient flow features while exhibiting robustness against noisy sensor measurements. Finally, appropriate sensor locations over different time periods are assessed for accurately estimating the wakes. The present study offers insights into the dynamics of vortex-airfoil interaction and the development of data-driven flow estimation. Vortex-airfoil interaction, Machine learning, Flow reconstruction ## 1 Introduction Vortex-airfoil interaction is ubiquitous around fluid-based systems, including aircraft [1, 2, 3, 4], wind turbines [5], and pumps [6, 7]. Such interactions can cause unsteady loading, fatigue, and structural damage to these systems. For analyzing vortex-airfoil interactions, it is useful to assess the state of the flow from sparse measurements for understanding the governing dynamics [8], prediction of flow disturbance [9], and performing the wake flow control [10]. However, it is challenging to identify vortical structures during the vortex-airfoil interactions from sparse measurements due to its strong nonlinear dynamics and the high-degree of freedom required to describe the vortical flows. A number of studies have examined sparse state estimation for aerodynamics. In particular, linear techniques have been studied over the last several decades. For instance, gappy proper orthogonal decomposition [11] has been considered to obtain dominant flow features from spatially incomplete and sparse data sets [12]. Focusing on the characterization of flows and boundary layers near body surface, the applications of four-dimensional variational method [13], linear stochastic estimation [14], and Kalman filters [15] have also been explored. However, these techniques are constrained by their linear formulations, which poses challenges when the applications involve strongly nonlinear dynamics. To overcome such limitations, nonlinear machine learning approaches have been considered as a promising approach in analyzing fluid flows from sparse information. Nonlinear machine learning techniques have shown to be useful in estimating and modeling high-dimensional flow [16]. For example, Pawar et al. [17, 18] applied a physics-guided machine-learning framework to estimate the lift coefficient of a variety of airfoils. Hui et al. 
[19] utilized a signed distance function-assisted convolutional neural network (CNN) to predict the pressure distribution over an airfoil surface. For flow field reconstructions, Erichson et al. [20] proposed a shallow decoder based on multi-layer perceptron (MLP) for a circular cylinder wake, the sea surface temperature, and forced isotropic turbulence. Fukami et al. [21] proposed a CNN-based method to reconstruct the global turbulent flow field from sparse sensors that can be in motion or change in numbers. In addition to the aforementioned efforts, there are various machine-learning-based flow reconstruction techniques based on super-resolution analysis [22, 23, 24]. However, there are issues with utilizing nonlinear machine learning techniques for estimating unsteady fluid flows from limited sensor measurements. The most outstanding issue is the computational costs for using machine learning models are expensive. For neural network-based models with low-dimensional inputs to high-dimensional outputs, an enormous number of interior parameters (weights) are required. To determine the internal parameters, generally, thousands of flow (or sensor) snapshots are required, which causes a large computational burden in terms of both training costs and data storage. In our case, if a variety of unsteady flow fields is needed to be accurately reconstructed, storage and computing costs can rise significantly if the problem is approached naively. From this aspect, it is crucial to develop a method that can qualitatively reconstruct a flow field with a small amount of training data and a reduced number of tuning parameters. In addition, generalizable models promote a reduction in cost. Most machine learning models can only be used for specific flow fields, for example, a single model trained with a laminar flow may not be applicable to use to reconstruct turbulent flow fields. In fact, the data used for testing needs to be similar to the training data to achieve accurate results. If we need to consider different flows over a vast parameter space, it is almost impossible to perform experiments or simulations for each and every case. In this regard, the diversity of the training data needs to be considered so that a single model can effectively predict unsteady flow fields over a large range of parameters. In this study, we aim to develop machine learning methods that reconstruct dominant wake features from limited sensor measurements and a small set of training data sampled over a vast parameter space. Because the disturbance vortex can be of any size, strength, or position from the airfoil, a very large parameter space is needed to be explored to capture the complex vortex-impinged airfoil wake dynamics. In this case, the amount of data can be tremendously large. Instead of naively training machine learning models with all parameter combinations, we develop models that are trained with a few cases in the parameter space and use the models to estimate unseen cases. For the machine learning methods, we choose a multi-layer perceptron (MLP) to model the nonlinear relationship between the low-dimensional sensors inputs and the outputs, including the lift coefficient, drag coefficient, and surface pressure coefficient. Moreover, combining the convolutional neural networks and MLP allows the reconstruction of the vorticity field over time with modest computational costs. 
Transfer learning and the long short-term memory algorithm further help in incorporating the dynamics of the transient flow, which reduces the required training data and improves the flow estimation. The current model is robust for a variety of wake scenarios separate from the training data. We also assess the influence of sensor numbers and placement on flow estimation. The present paper is organized as follows. The problem setup and data compilation are discussed in section 2. Flow physics of vortex-airfoil wake interactions are presented in section 3. Machine learning techniques utilized in this study are introduced in section 4. Results and discussion of machine learning-based flow reconstruction are presented in section 5. Concluding remarks are provided in section 6. ## 2 Data compilation The present objective is to develop a robust machine-learning model for highly disturbed flows around an airfoil from sparse pressure sensors and limited training data. Here, we consider transient flow over a NACA 0012 airfoil at an angle of attack of \(\alpha=12^{\circ}\) experiencing various types of vortical disturbances at a chord-based Reynolds number \(Re\equiv u_{\infty}c/\nu_{\infty}=400\) and a Mach number \(M_{\infty}\equiv u_{\infty}/a_{\infty}=0.1\). Here, \(u_{\infty}\) is the free-stream velocity, \(c\) is the chord length, \(\nu_{\infty}\) is the kinematic viscosity, and \(a_{\infty}\) is the freestream sonic speed. The simulated flows have been verified and validated with previous studies [25, 26, 27]. The compressible flow solver CharLES [28] is used to simulate the transient flows over the airfoil. For the present vortex-airfoil interaction problem, a single vortical disturbance is initially introduced upstream of the airfoil. This disturbance vortex is given as a compressible Taylor vortex [29], described by \[u_{\theta}=u_{\theta\max}\frac{r}{R}\mathrm{exp}\left[\frac{1}{2}\left(1-\frac{r^{2}}{R^{2}}\right)\right], \tag{1}\] where \(R\) is the radius, and \(u_{\theta\max}\) is the maximum rotational velocity of the vortex, as shown in figure 1. The vortex is initially introduced at \((x_{0},y_{0})\) with \(x_{0}=-2c\). The present vortex-airfoil interaction problem exhibits a variety of flow patterns, as shown in figure 2. A strong disturbance vortex produces strong unsteadiness in the flow field, and the larger the vortex is, the larger the region it influences. Apart from the radius and the strength, a vortex can either hit the airfoil at the leading edge and thus incite large fluctuations or pass through the airfoil without causing dramatic changes to the flow or aerodynamic characteristics. Detailed discussion on the flow is offered in section 3. The present study examines whether the flow field generated over the wide parameter space can be recovered with the machine-learning model trained with only a very few cases. In the present study, we choose eight sensors distributed on both sides of the airfoil surface to capture the vortex passing around an airfoil, as shown in figure 1. These sensors are labeled \(1\) to \(8\), with the respective \(x\)-locations of the sensors being \((0.00,0.26,0.48,0.72,0.99,0.23,0.46,0.71)c\). Three parameters that describe the disturbance vortex are the maximum rotational velocity (\(u_{\theta\max}\)), the radius (\(R\)), and the initial vertical location (\(y_{0}\)). Figure 1: a) The size and position of the vortical disturbance, and 8 uniform sensors are distributed on the airfoil surface; b) The velocity profile of the disturbance vortex.
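Equation (1) is straightforward to evaluate; the short sketch below tabulates the azimuthal velocity profile of the disturbance vortex for one representative parameter set (the normalization by \(u_{\infty}\) and the sampled radii are illustrative choices, not the paper's solver setup).

```python
import numpy as np

def taylor_vortex_velocity(r, R, u_theta_max):
    """Azimuthal velocity u_theta(r) of the compressible Taylor vortex, Eq. (1)."""
    return u_theta_max * (r / R) * np.exp(0.5 * (1.0 - (r / R) ** 2))

# Example: a vortex of radius R = 0.5c with peak rotational velocity 0.3 u_inf.
c, u_inf = 1.0, 1.0
R, u_theta_max = 0.5 * c, 0.3 * u_inf
for r in np.linspace(0.0, 3.0 * R, 7):
    print(f"r/c = {r:.2f}   u_theta/u_inf = {taylor_vortex_velocity(r, R, u_theta_max):.3f}")
# The profile peaks at r = R, where u_theta = u_theta_max, and decays smoothly outside the core.
```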
The training data sets are comprised of \(u_{\theta\max}/u_{\infty}\in[-0.9,-0.7,-0.5,-0.3,-0.1,0.1,0.3,0.5,0.7,\\ 0.9]\), \(R/c\in[0.125,0.25,0.5,0.75,1]\), and \(y_{0}/c\in[-0.3,-0.1,0,0.1,0.3]\), respectively. Here, the positive value of \(u_{\theta\max}/u_{\infty}\) indicates a counterclockwise rotation. The maximum rotational velocity of the vortex \(u_{\theta\max}\) covers a range from \(0.1u_{\infty}\) to \(0.9u_{\infty}\). The choices for the vortex radius \(R\) and \(y_{0}\) are carefully determined so that vortices can pass over or below the airfoil while significantly influencing the airfoil wake. In section 5, we consider 25, 50, and 100 training cases out of the vast combinations of parameters, then test the models with untrained cases. Parameter combinations of test cases are randomly chosen over the aforementioned ranges. Note that the training data is a small proportion compared to the whole combinations of parameters. There are no test cases overlapping with the training cases. For each case, we collect 500 snapshots of the flow field for \(u_{\infty}t/c\in[0.85,5.1]\), which reflects the process from the vortex approaching the airfoil to moving away from the tailing edge. Here, \(u_{\infty}t/c=0\) refers to the initial time at which the vortex is at \(x_{0}/c=-2\). The snapshots at \(u_{\infty}t/c=[0,0.85]\) are not used in the present analysis to remove the start-up period of the simulation. For a single parameter set \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)\), the data sizes of aerodynamic force coefficients, pressure over surface, and two-dimensional vorticity field data amount to approximately 1MB, 15MB, and 500MB, respectively. If we use 100 training cases with all 500 snapshots of two-dimensional wake data, the training data size becomes approximately 50GB for a single machine learning model, which is quite large with respect to storage and computation. ## 3 Flow physics The present vortex-impinged airfoil wake exhibits rich dynamics influenced by the vortex velocity, size, and position. In this section, we present the flow physics induced by a variety of vortex disturbances. The maximum rotational velocity of the vortex disturbance is one of the most important characteristics affecting the vortex-airfoil interaction. Here, we investigate the influence of vortex largest velocity on \(C_{L}\), \(C_{D}\), and vorticity fields when \((R/c,y_{0}/c)=(0.5,0.1)\). As depicted in figure 3a) and b), a positive (counterclockwise) vortex generally induces a transient increase in \(C_{L}\) and \(C_{D}\) when it impinges on the leading edge of the airfoil. A secondary negative peak is then introduced when the center of the vortical disturbance passes the center of the airfoil. A similar but reversed trend is observed for a negative (clockwise) vortex. The initial decrease in lift is followed by the vortex tail-induced lift increase. For a positive vortex with two different magnitudes of the vortex rotational velocity, the first peaks of \(C_{L}\) are reached at nearly the same time, as presented in figure 3a). However, the magnitude difference causes the temporal shift for the secondary peak -- the peak with \(u_{\theta_{\max}}/u_{\infty}=0.7\) is reached at \(u_{\infty}t/c\approx 2.6\) while that with \(u_{\theta_{\max}}/u_{\infty}=0.3\) is achieved at \(u_{\infty}t/c\approx 3.0\). 
This is because a stronger positive vortex produces a stronger interaction with the pre-existing negative vorticity on the suction side of the airfoil, forming a large negative vortex that detaches from the airfoil afterward. Similar to the positive disturbance cases, larger fluctuation induced by a stronger negative vortex gives rise to an earlier secondary peak. For \(C_{D}\), we observe a time history trend similar to that of \(C_{L}\) for the positive disturbance, while the magnitudes of variation are much smaller than for \(C_{L}\). The dependence of the flow field response on the vortex size is also examined, as shown in figure 4. We choose the same vortex strength and vertical position as \((u_{\theta\max}/u_{\infty},y_{0}/c)=(0.3,0.1)\) for comparison. The \(C_{L}\) and \(C_{D}\) histories experience the same trend of first increasing and then decreasing among different vortex sizes. By increasing the vortex size, the first peaks of \(C_{L}\) and \(C_{D}\) appear earlier because a vortex with a larger radius encounters the airfoil earlier. The changes in the vorticity fields caused by the different sizes of vortices are also presented in figure 4c). When a small-size vortical disturbance (\(R/c=0.125\)) impinges on the airfoil, the whole vortex passes over the suction side of the airfoil and induces mild fluctuation in the flow field. As the size of the vortex becomes larger, the vortex splits into two structures which advect over the suction side and the pressure side. The positive vorticity around the trailing edge is rolled up and interacts with the wakes, thus affecting the evolution of the wake region. In addition to the largest velocity and the size of the vortical disturbance, the transient dynamics are also strongly influenced by whether the disturbance vortex passes above or below the airfoil. Here, we investigate three vertical positions of \(y_{0}/c=-0.3,0,0.3\) with a negative vortical disturbance \((u_{\theta\max}/u_{\infty},R/c)=(-0.5,0.5)\), as shown in figure 5. For \(y_{0}/c=0.3\), the disturbance passes over the airfoil, where a large portion of negative disturbance passes through the suction side of the airfoil, introducing a large jump in \(C_{D}\) as the first peak. For \(y_{0}/c=0\), the negative vortical disturbance is split into two parts as it passes around the airfoil. At \(u_{\infty}t/c=2.55\), the large positive vorticity attached on the pressure side of the airfoil produces the second peak in \(C_{L}\). For the case of \(y_{0}/c=-0.3\) where the disturbance passes below the airfoil, the variation is mostly dominated by the interaction along the pressure side of the airfoil, and the drop and the increment of \(C_{L}\) and \(C_{D}\) occur at the same time. Figure 3: Effect of the largest rotational velocity of vortical disturbance. a) lift coefficients, b) drag coefficients, and c) vorticity fields for vortical disturbances of \((R/c,y_{0}/c)=(0.5,0.1)\) and \(u_{\theta\max}/u_{\infty}=-0.7,-0.3,0.3\), and \(0.7\). ## 4 Methods We develop machine-learning models to estimate aerodynamic characteristics that cover a variety of force and wake dynamics from sparse sensors. Constructing a robust model suitable for the vast parameter space in figure 2 is challenging. To estimate different types of nonlinear wake responses from limited training data, we consider several strategies with regard to machine-learning model design and training methods for reproducing the transient dynamics.
For all machine-learning models used in the current study, three-fold cross-validations are performed, ensuring the convergence of the estimations in terms of data distribution. An overview of the present machine-learning-based estimation approaches is shown in figure 6. The input is the sensor measurements \(s^{n\Delta t}\) spanning over \(n\Delta t\). We first consider a multi-layer perceptron (MLP) to build the relationship between the sensor measurements and the aerodynamic force coefficients over time. Since the degrees of freedom of the input and output are \(\mathcal{O}(1)\), we can easily employ a fully-connected neural network to construct such a relationship. Similarly, we also use a multi-layer perceptron (MLP) to estimate the pressure distribution over the airfoil surface. Figure 4: Effect of vortex size. a) lift coefficients, b) drag coefficient, and c) vorticity fields for vortical disturbances of \((u_{\theta\max}/u_{\infty},y_{0}/c)=(0.3,0.1)\) and \(R/c=0.125,0.25,0.5\), and \(0.75\). However, MLP can be challenging to use for problems with high degrees of freedom due to its fully-connected structure [21, 30]. To access the two-dimensional vorticity flow field (the degree of freedom \(\approx\mathcal{O}(10^{3}-10^{4})\)), a model which can effectively extract spatial information with a manageable computational cost is required. To address this point, we incorporate a two-dimensional convolutional neural network (CNN) to provide qualitative estimations while maintaining a low computational cost. When the MLP is coupled with the CNN, the machine-learning model can reconstruct the flow field from a limited number of sensor measurements. Moreover, due to the transient nature of the current vortex-airfoil interaction problem, accounting for the dynamics in the model construction aids in accurate estimation. For this reason, the long short-term memory (LSTM) algorithm [31] assisted with transfer learning serves as an effective method to estimate the flow fields from time traces. Hence, we embed LSTM into the aforementioned MLP and MLP-CNN models. In what follows, we introduce the algorithms of these machine learning methods. ### Multi-layer perceptron In the present study, the input sensor measurements are first fed into a multi-layer perceptron (MLP) [32]. For the estimation of aerodynamic forces (section 5.1) and pressure distribution over the airfoil surface (section 5.2), the MLP \(\mathcal{M}\) is used as a function approximator between the input sensor measurements \(\mathbf{s}\) and the output variables \(\mathbf{q}\) such that \(\mathbf{q}\approx\mathcal{M}(\mathbf{s})\). Figure 5: Effect of vortex position. a) lift coefficients, b) drag coefficients, and c) vorticity fields for vortical disturbances of \((u_{\theta\max}/u_{\infty},R/c)=(-0.5,0.5)\) and \(y_{0}/c=0.3,0\), and \(-0.3\). For the estimation of the two-dimensional vorticity field \(\omega\) (section 5.3), the MLP plays the role of a nonlinear function mapping the low-dimensional sensor information \(\mathbf{s}\in\mathbb{R}^{n_{s}}\) to the high-dimensional variable in the model. In addition, we incorporate LSTM [31] into the machine-learning models to capitalize on the dynamical information of sensors.
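As a concrete, purely illustrative sketch of the sensor-to-force function approximator \(\mathbf{q}\approx\mathcal{M}(\mathbf{s})\) described above, the snippet below builds a small fully-connected network mapping the eight surface-pressure measurements to a single force coefficient; the PyTorch implementation, layer widths, and random stand-in data are our assumptions and do not reproduce the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn

class ForceMLP(nn.Module):
    """Fully-connected surrogate q ~ M(s): 8 pressure sensors -> one force coefficient."""
    def __init__(self, n_sensors: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

model = ForceMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam optimizer, as in the text
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data (a batch of sensor snapshots).
s_batch = torch.randn(32, 8)    # surface-pressure sensor measurements
cl_batch = torch.randn(32, 1)   # corresponding lift coefficients
loss = loss_fn(model(s_batch), cl_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```

The same pattern, with the output layer widened and reshaped before a stack of convolutional and upsampling layers, underlies the MLP-CNN wake estimator described in the subsections that follow.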
In MLP, the input at layer \((l-1)\) is multiplied by weights \(\mathbf{W}\), then linearly combined, and passed through a nonlinear activation function \(\varphi\) as an output to the next layer \((l)\), \[q_{i}^{(l)}=\varphi(\sum_{j}W_{ij}^{(l)}q_{j}^{(l-1)}+b_{i}^{(l)}), \tag{2}\] where \(b\) is a bias added at each layer as illustrated in figure 7a). We utilize the ReLU function [33] for \(\varphi\), which is known to be effective for addressing the vanishing gradient problems in deep neural networks. For determining the weights \(W\), the Adam algorithm [34] is utilized. In the present model training, early stopping [35] with \(20\) training epochs is also applied to avoid overfitting the machine-learning model. Figure 6: Overview of the present estimation problems. The inputs are pressure sensor measurements on the airfoil surface, outputs are a) \(C_{D}\) or \(C_{L}\), b) \(C_{P}\), and c) vorticity field. \(C_{L}\), \(C_{D}\), and \(C_{P}\) are estimated using separate multi-layer perceptron models, vorticity field is estimated using the combination of multi-layer perceptron and convolutional neural network. Figure 7: a) A minimum unit of perceptron. b) Two-dimensional convolutional operation. ### Convolutional neural network Since the full flow field estimation requires a large number of spatial grid points (high spatial degrees of freedom), the computational burden is substantial for the direct application of MLP to the full flow field reconstruction [36, 37]. To address this issue, we combine MLP and a two-dimensional convolutional neural network (CNN) [38]. The CNN enables regression while greatly reducing computational costs through filter sharing. The two-dimensional convolutional operation is illustrated in figure 7b), whose internal procedure is expressed as \[q_{ijg}^{(l)}=\varphi\left(\sum_{l=1}^{F}\sum_{p=0}^{H-1}\sum_{q=0}^{H-1}h_{pqlg}^{(l)}q_{i+p-C,j+q-C,l}^{(l-1)}+b_{g}^{(l)}\right), \tag{3}\] where \(C=\lfloor H/2\rfloor\), \(H\) is the width and height of the filter, \(F\) is the number of input channels, \(g\) is the number of output channels, \(b\) is the bias, and \(\varphi\) is the activation function. The input sensor measurements \(\mathbf{s}\in\mathbb{R}^{n_{\mathbf{s}}}\) are transformed to a high-dimensional representation \(\hat{\mathbf{q}}\in\mathbb{R}^{n_{\mathbf{q}}}\) through the MLP for the wake estimation. This representation \(\hat{\mathbf{q}}\in\mathbb{R}^{n_{\mathbf{q}}}\) is then reshaped into a two-dimensional matrix form so that the data can be managed with a two-dimensional CNN, as illustrated in figure 6a). Through the CNN process in equation 3 and upsampling operation, the present model extracts the relationship between the input sensors and the vorticity field \(\omega\in\mathbb{R}^{n_{x}\times n_{y}}\). As with the MLP training, we apply the ReLU function [33] as the nonlinear activation function, the Adam algorithm [34] for updating filters, and early stopping [35] to prevent overfitting. ### Long short-term memory-assisted transfer learning To improve the present estimation, we also utilize the long short-term memory (LSTM) algorithm [31]. LSTM is one of the recurrent neural network methods, which is suitable for predicting temporal behaviors from time-series data.
Since LSTM can hold the time-series data as memory inside the function referred to as cell, the implementation of LSTM can greatly help with the present problem that is dependent on past flow states due to its transient nature. An LSTM layer is constructed by four functions; a cell \(C\), an input gate \(d\), an output gate \(o\), and a forget gate \(g\). These functions play important roles in deciding how past information is incorporated to predict the output variables. The input gate \(d\) determines how much of the current information from the input of cell \(e_{t}\) is used for prediction, \[d_{t}=\sigma(W_{d}\cdot[\tilde{q}_{t-1},e_{t}]+\beta_{d}), \tag{4}\] where \(q\) is the output of cell, \(W\) and \(\beta\) represent the weights and the bias, respectively, for each gate denoted by its subscript; the subscripts \(t\) and \(t-1\) represent the time indices, and \(\sigma\) is the sigmoid function. Here, the concatenation of two inputs in a model is denoted as \([m,n]\). In parallel, the LSTM also considers how much of the past information is kept from the cell state at the previous cell state \(C_{t-1}\) using the forget gate \(g\), \[g_{t}=\sigma(W_{g}\cdot[\tilde{q}_{t-1},e_{t}]+\beta_{g}). \tag{5}\] With the temporal cell state at the current time step, \[\widetilde{C}_{t}=\tanh(W_{c}\cdot[\tilde{q}_{t-1},e_{t}]+\beta_{c}), \tag{6}\] and the previous cell state \(C_{t-1}\), the current cell state \(C_{t}\) is determined by balancing the input gate \(d\) and the forget gate \(g\), \[C_{t}=g_{t}C_{t-1}+d_{t}\widetilde{C}_{t}. \tag{7}\] Note that the sigmoid functions used for the input and the output gates play important roles in avoiding gradient vanishing problems. At the output of the LSTM layer, the amount of information at the cell state \(C_{t}\) being leveraged for short-term prediction (i.e., the output at the next step \(\tilde{q}_{t}\)) is assessed using the output gate \(o\) with \[o_{t} =\sigma(W_{o}\cdot[\tilde{q}_{t-1},e_{t}]+\beta_{o}), \tag{8}\] \[\tilde{q}_{t} =o_{t}\tanh(C_{t}). \tag{9}\] With this formulation, the LSTM is able to predict the variable at the next step \(\tilde{q}_{t}\) while considering the long-term memory influence with the concept of cell state \(C\). Here, we combine the high-dimensional representation of the input measurements obtained through the MLP \(\tilde{\mathbf{q}}^{n\Delta t}\) with two previous time sequences extracted by LSTMs \(\{\tilde{\mathbf{q}}^{(n-1)\Delta t},\tilde{\mathbf{q}}^{(n-2)\Delta t}\}\) such that \(\tilde{\mathbf{q}}=[\tilde{\mathbf{q}}^{n\Delta t}+\tilde{\mathbf{q}}^{(n-1)\Delta t}+ \tilde{\mathbf{q}}^{(n-2)\Delta t}]\), as illustrated in figure 8. This combined vector \(\tilde{\mathbf{q}}\) with three time steps is then provided to the MLP layer of the force estimation and the \(C_{p}\) estimation, or the two-dimensional CNN layer of the vorticity reconstruction task. Moreover, we utilize the concept of transfer learning for the LSTM-assisted network. Transfer learning can facilitate the training process by setting appropriate initial weights [39]. The present strategy of the LSTM-assisted transfer learning is graphically summarized in figure 8. In the present study, the weights of pre-trained MLP \(\mathbf{w}_{\mathcal{M}}\) are adopted as initial weights of the second model \(\mathcal{F}_{2}\) which has two sensor input gates \(\mathbf{s}^{(n-1)\Delta t}\) and \(\mathbf{s}^{n\Delta t}\). 
The high-dimensional feature of the input sensor measurements \(\hat{\mathbf{q}}\) from the MLP part of the model is merged with that from the LSTM. Once the training for the second model \(\mathcal{F}_{2}\) is completed, the optimized weights of the second model \(\mathbf{w}_{\mathcal{F}_{2}}\) are in turn transferred to the third model \(\mathcal{F}_{3}\), which considers sensor measurements at three different time steps \(\mathbf{s}^{(n-2)\Delta t}\), \(\mathbf{s}^{(n-1)\Delta t}\), and \(\mathbf{s}^{n\Delta t}\). The weight optimizations through these operations are mathematically expressed as \[\mathbf{w}_{\mathcal{F}_{1}} =\mathrm{argmin}_{\mathbf{w}_{\mathcal{F}_{1}}}||\mathbf{q}-\mathcal{F}_{1}(\mathbf{s}^{n\Delta t};\mathbf{w}_{\mathcal{F}_{1}})||_{2}, \tag{10}\] \[\mathbf{w}_{\mathcal{F}_{2}} =\mathrm{argmin}_{\mathbf{w}_{\mathcal{F}_{2}}}||\mathbf{q}-\mathcal{F}_{2}([\mathbf{s}^{n\Delta t},\mathbf{s}^{(n-1)\Delta t}];\mathbf{w}_{\mathcal{F}_{2}}(\mathbf{w}^{\prime}_{\mathcal{F}_{1}}))||_{2}, \tag{11}\] \[\mathbf{w}_{\mathcal{F}_{3}} =\mathrm{argmin}_{\mathbf{w}_{\mathcal{F}_{3}}}||\mathbf{q}-\mathcal{F}_{3}([\mathbf{s}^{n\Delta t},\mathbf{s}^{(n-1)\Delta t},\mathbf{s}^{(n-2)\Delta t}];\mathbf{w}_{\mathcal{F}_{3}}(\mathbf{w}^{\prime}_{\mathcal{F}_{2}}))||_{2}, \tag{12}\] where \(\mathbf{w}^{\prime}_{\mathcal{F}_{1}}\) denotes the weights assigned to the common part of the first MLP-CNN model and the second model, and \(\mathbf{w}^{\prime}_{\mathcal{F}_{2}}\) represents the weights assigned to the common part of the second MLP-LSTM-CNN model and the third model, respectively. Since transfer learning can aid in the computational reduction by enabling fast convergence of the weights [40, 41], we can expect accurate flow reconstruction with minimal training costs using transfer learning with LSTM. Figure 8: Long short-term memory-assisted transfer learning. ## 5 Results and Discussions ### Aerodynamic forces Let us first present the machine-learning-based estimation of \(C_{L}\) and \(C_{D}\) from the pressure sensor inputs. Here, we prepare machine learning models \(\mathcal{F}\) for each coefficient such that \(C_{L}=\mathcal{F}_{L}(\mathbf{s}(t))\) and \(C_{D}=\mathcal{F}_{D}(\mathbf{s}(t))\). The estimation results for \(C_{D}\) and \(C_{L}\) are shown in figure 9. When training with only 50 training cases, with each case having 50 snapshots, the model achieves a qualitative estimation of \(C_{D}\). Here, we denote the number of cases as \(n_{\text{case}}\) and the number of snapshots per case as \(n_{\text{ss}}\). We also use the \(L_{2}\) error norm \(\varepsilon=||\mathbf{f}_{\text{Ref}}-\mathbf{f}_{\text{ML}}||_{2}/||\mathbf{f}_{\text{Ref}}-\overline{\mathbf{f}}||_{2}\), where \(\mathbf{f}_{\text{Ref}}\) and \(\mathbf{f}_{\text{ML}}\) are the reference and the machine-learning-based estimation, respectively, of variable \(\mathbf{f}\). Note that this error is normalized by the fluctuation of a variable from its steady state value \(\overline{\mathbf{f}}\). For \(C_{L}\) and \(C_{D}\), the error is measured over the time range \(u_{\infty}t/c=[0.85,5.1]\) for each case. The estimation results show that the positions of the peak and trough of \(C_{D}\) induced by the vortex-airfoil wake interaction are qualitatively predicted, yet the exact values are off from the DNS result. Increasing the number of training cases \(n_{\rm case}\) improves the estimation performance.
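To make the staged optimizations of equations (10)-(12) concrete, the sketch below assumes a Keras-style workflow (the authors' actual framework, layer widths, sensor count and data pipeline are not specified here); only the layers shared between consecutive models are copied, mirroring the role of \(\mathbf{w}^{\prime}_{\mathcal{F}_{1}}\) and \(\mathbf{w}^{\prime}_{\mathcal{F}_{2}}\):

```python
import numpy as np
from tensorflow.keras import layers, models

n_sensors, n_out = 8, 1          # illustrative sizes (e.g. pressure sensors -> C_D)

def build_model(n_steps):
    # Shared MLP acting on the current-time sensors; an LSTM branch handles the
    # time history when n_steps > 1 (mirroring F1, F2 and F3).
    inp = layers.Input(shape=(n_steps, n_sensors))
    current = layers.Lambda(lambda x: x[:, -1, :])(inp)
    shared = layers.Dense(64, activation='relu', name='shared_mlp')(current)
    if n_steps > 1:
        shared = layers.Concatenate()([shared, layers.LSTM(64, name='lstm')(inp)])
    out = layers.Dense(n_out, name='head')(shared)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    return model

def transfer_common(src, dst, names):
    # Copy the weights of the layers common to two consecutive models (Eqs. 11-12).
    for name in names:
        dst.get_layer(name).set_weights(src.get_layer(name).get_weights())

# Dummy data standing in for the DNS training set (three sensor snapshots per sample).
x3 = np.random.rand(200, 3, n_sensors).astype('float32')
y = np.random.rand(200, n_out).astype('float32')

f1 = build_model(1)                                   # Eq. (10)
f1.fit(x3[:, -1:, :], y, epochs=2, verbose=0)
f2 = build_model(2)                                   # Eq. (11), warm-started from F1
transfer_common(f1, f2, names=('shared_mlp',))
f2.fit(x3[:, 1:, :], y, epochs=2, verbose=0)
f3 = build_model(3)                                   # Eq. (12), warm-started from F2
transfer_common(f2, f3, names=('shared_mlp', 'lstm'))
f3.fit(x3, y, epochs=2, verbose=0)
```

Copying only the shared layers is what makes the warm start meaningful: the newly added history branch and the output head are re-initialized and trained from scratch at each stage.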
Enhanced agreement between the estimation and DNS is also achieved when increasing the number of snapshots to 500, as illustrated in figure 9(a). The enhancement in the data diversity leads to a drastic decrease in the prediction error. In contrast to 50 training cases, utilizing 100 training cases yields a \(67\%\) reduction in test error. The reason why the expansion of training cases is beneficial for prediction performance is that the machine learning model can cover a larger parameter space, which assists in better predicting unseen test cases. Yet, considering the vast parameter space, 100 training cases are very few. To obtain an accurate reconstruction while using as little data as possible, we then incorporate the transfer-learning-based LSTM into the model for \(C_{D}\), as shown in figure 9(a). Due to the transient nature of the current vortex-airfoil interaction problem, the present transfer-learning-based LSTM is able to build a reliable connection between sensor input and output based on historical information. For all three examples, using the MLP-LSTM model gives rise to a 10% decrease in the estimation error. Estimation for \(C_{L}\) is presented in figure 9(b). The enhancement in the reconstruction of \(C_{L}\) from increasing the amount of training data is also shown in figure 9(b). The new MLP-LSTM model reduces the test error to 0.215, 0.158, and 0.148 for \((n_{\rm case},n_{\rm ss})=(50,50)\), \((n_{\rm case},n_{\rm ss})=(50,500)\) and \((n_{\rm case},n_{\rm ss})=(100,50)\), respectively. Similar to the \(C_{D}\) estimation, the transfer-learned-LSTM architecture is also useful in the estimation of \(C_{L}\). We also note that the reconstruction for \(C_{L}\) is usually better than that for \(C_{D}\), which is due to the variation of \(C_{L}\) over time being much larger than that of \(C_{D}\). ### Surface pressure distribution Next, we perform the MLP-based estimation of the pressure distribution \(C_{p}\) over the airfoil surfaces. Representative snapshots of the vorticity field for a test case, when a vortical disturbance passes around the airfoil, are shown in the first row of figure 10. Similar to the aerodynamic forces \(C_{L}\) and \(C_{D}\), the reconstruction performance is strongly influenced by the number of cases \(n_{\rm case}\), the number of snapshots per case \(n_{\rm ss}\), as well as whether the transfer-learned LSTM is incorporated. When training the MLP-LSTM model with 50 cases and 250 snapshots per case, a qualitative reconstruction is achieved for \(C_{p}\). As shown in the first row of figure 10, the estimated \(C_{p}\) at both the upper and lower surfaces of the airfoil are in agreement with the DNS. As we increase the number of cases from 50 to 100 without utilizing the transfer-learned LSTM, this machine-learning model also reconstructs \(C_{p}\) in a reasonable manner, as shown in the second row of figure 10. However, by comparing the reconstruction of \(C_{p}\) for \((n_{\rm case},n_{\rm ss})=(100,250)\) without LSTM against the results with LSTM implemented, it is found that the use of the transfer-learned LSTM greatly improves reconstruction, achieving enhanced performance with only half of the training data. To further improve the estimation performance, increasing the number of snapshots from 250 to 500 for 100 training cases achieves a performance similar to the results of \((n_{\rm case},n_{\rm ss},{\rm LSTM})=(50,250,{\rm Y})\), as shown in the last row of figure 10.
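All of the error values quoted in this section are instances of the normalized \(L_{2}\) norm \(\varepsilon\) defined above; a minimal helper (with made-up traces standing in for the DNS reference and the model output) reads:

```python
import numpy as np

def l2_error(f_ref, f_ml, f_steady):
    # epsilon = ||f_Ref - f_ML||_2 / ||f_Ref - f_bar||_2, i.e. the misfit normalized
    # by the fluctuation of the reference about its steady-state value f_bar.
    f_ref, f_ml = np.asarray(f_ref), np.asarray(f_ml)
    return np.linalg.norm(f_ref - f_ml) / np.linalg.norm(f_ref - f_steady)

# Example with made-up lift-coefficient traces over u_inf t / c = [0.85, 5.1]:
t = np.linspace(0.85, 5.1, 200)
cl_ref = 0.4 + 0.1 * np.sin(2 * np.pi * t)          # stand-in for the DNS reference
cl_ml = cl_ref + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(l2_error(cl_ref, cl_ml, f_steady=0.4))
```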
Note that although we are using 100 training cases and 500 snapshots per case, the training data is still small compared to the broad parameter space of the test cases. ### Vorticity field We employ the present machine learning techniques to reconstruct the two-dimensional vorticity field from sensor measurements on an airfoil using the MLP-CNN model. Analogous to the results in section 4.2, the combination of MLP and CNN is suitable to estimate the vortical flow from sensors. The reconstruction of the spatially discretized vorticity field \(\omega\in\mathbb{R}^{100\times 200}\) is summarized in figure 11. The present model successfully captures the vortical disturbance at \(u_{\infty}t/c=0.85\). The location and the strength of the vortex are well reconstructed. The interaction between the vortex disturbance and the flow field around the airfoil at \(u_{\infty}t/c=2.12\) is also reproduced well. This is approximately the time at which \(C_{L}\) and \(C_{D}\) drop to their minimum values, serving as an important dynamic transition point. However, the wakes behind the trailing edge at \(u_{\infty}t/c=5.10\) are not accurately reconstructed because these wake structures are farther away from the airfoil during this period. Sensors on the airfoil surface do not observe a sizeable change in pressure, making it difficult to reconstruct far-field wakes, which is expected. Let us now focus on the critical near-wake region around the airfoil, since this region primarily determines the unsteady loading. Considering only the near-field region enables us to greatly reduce the size of the training data and the associated computational costs. Results from training with a smaller region are described in figure 11. The windowed training model also provides improved estimations with lower error. The averaged \(L_{2}\) error for the test case reduces from 0.329 (large region training) to 0.261 (windowed region training), which also shows the influence of the region size when estimating the wake field at modest computational cost. Moreover, the enhancement in reconstructing the vorticity field can also be achieved by increasing the amount of training data and utilizing the transfer-learned LSTM, as summarized in figure 12. For all time series, a qualitative and insightful reconstruction of the vorticity field is achieved with as few as 10 snapshots per case, as shown in the second column in figure 12. When we apply the transfer-learned LSTM to the same dataset, a reduction of up to 33% in the \(L_{2}\) error is accomplished. Additionally, increasing the number of snapshots to 50 or increasing the data diversity by using 100 training cases produces further improvements. It is worth noting that the reconstruction accuracy for the interaction process is not uniform. For example, at \(u_{\infty}t/c=1.70\) and \(u_{\infty}t/c=2.55\), when the center of the disturbance is near the airfoil, the \(L_{2}\) errors are relatively high. Due to the high level of interaction, the complex morphological changes in the vorticity field result in an increased error. However, this does not indicate that the machine-learning model is unable to extract the crucial features of the flow field. Instead, the errors are partially due to the modest displacement of vortical structures. In addition, the reconstructed vorticity field at \(u_{\infty}t/c=5.10\) shows that the transfer-learned LSTM is superior for estimating the small fluctuations behind the trailing edge, compared to simply increasing the amount or diversity of the training data.
Based on the insights gained from this study, we deduce that when the influence from the disturbance is greater (strong disturbances with large sizes and the interactions around the airfoil), the accuracy of the reconstruction is improved. Here again, the transfer-learned LSTM greatly improves the estimation for the overall dynamic process. Figure 10: Machine-learning-based estimation of the pressure distribution over an airfoil surface. The pressure coefficient \(C_{p}\) at \(u_{\infty}t/c=\) a) \(0.85\), b) \(3.82\), and c) \(4.67\). Results are shown for the case of: \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.35,0.95,-0.15)\). ### Influence on the sensor positions Next, let us examine the estimation performance of the machine-learning models trained with different numbers and placements of the sensors. As shown in figure 13, we consider the uses of 8 sensors (case 1), 3 sensors around the leading edge (case 2), 3 sensors on the top surface (case 3), 3 sensors around the trailing edge (case 4), and 3 sensors on the bottom surface (case 5), respectively. With 8 sensors (case 1), the lowest \(L_{2}\) error is achieved compared to the other cases with 3 sensors, as expected. With 3 sensors, we observe that Case 5 with the bottom surface sensors usually presents a lower error than Cases 2 to 4 for the whole time range. This is likely because the sensors on the pressure side may sense the vortical structures approaching an airfoil easier and earlier than having sensors on the suction side. We also assess the estimation performance over time in figure 13. Before the vortical disturbance impinges on the airfoil (\(u_{\infty}t/c<2\)) and after the vortex moves away from the trailing edge of the airfoil (\(u_{\infty}t/c>4\)), we observe relatively low \(L_{2}\) error. For \(u_{\infty}t/c\in[2,4]\), due to the complex interactions between the disturbance and the airfoil, the estimation for this time period is more difficult than other times. However, we note that the present model still achieves qualitative reconstructions even for the strong vortex-airfoil wake interaction process, as depicted in figure 14. These reconstructed snapshots correspond to the moment \(u_{\infty}t/c=2.12\) for \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.65,0.40,0)\). This implies that monitoring not only the scalar error measurement but also the reconstructed flow fields is essential for appropriate assessments of machine-learning-based flow estimations. These results also provide practical insights into the choice of sensor locations. It is recommended that sensors are placed on the suction and pressure sides for the present problem. ### Robustness against noisy sensor measurements Let us evaluate the machine-learning model robustness against the noisy sensor measurements. We use the Gaussian noise \(\mathbf{n}\) for the sensor input \(\mathbf{s}\). Hence, the estimated output is expressed as \[\mathbf{q}_{n}=\mathcal{F}(\mathbf{s}+\mathbf{n}) \tag{13}\] where \(\mathbf{q}_{n}\) is the output of the model, \(\mathcal{F}\) is the model trained without noisy inputs, and \(\gamma=\|\mathbf{n}\|/\|\mathbf{s}\|\) is the magnitude of the noise. The estimation performance of \(C_{L}\), \(C_{D}\), \(C_{P}\), and vorticity field with noisy inputs (pressure measurements) are considered herein, as shown in figure 15. For all estimations, the error increases with the magnitude of the input noise, as expected. The reconstructed \(C_{L}\), \(C_{D}\), \(C_{P}\), and vorticity field are also shown in figures 16-18. 
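The noisy-input test of equation (13) amounts to rescaling a Gaussian perturbation to a prescribed relative magnitude \(\gamma\) before feeding it to the trained model; a minimal sketch (the linear map and sensor count are placeholders for the actual network and inputs) is:

```python
import numpy as np

def add_sensor_noise(s, gamma, rng=None):
    # Scale a Gaussian perturbation n so that gamma = ||n|| / ||s||, then add it
    # to the clean sensor vector s, following Eq. (13).
    rng = np.random.default_rng(0) if rng is None else rng
    n = rng.standard_normal(s.shape)
    n *= gamma * np.linalg.norm(s) / np.linalg.norm(n)
    return s + n

# Illustrative usage: a fixed linear map stands in for the trained model F.
rng = np.random.default_rng(1)
W = rng.standard_normal((1, 8))
F = lambda s: W @ s                         # placeholder, not the actual network
s_clean = rng.standard_normal(8)            # stand-in for the pressure measurements
for gamma in (0.0, 0.178, 0.28):            # noise magnitudes discussed in the text
    q_n = F(add_sensor_noise(s_clean, gamma))
    print(f"gamma = {gamma}: q_n = {q_n}")
```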
Regarding the estimated \(C_{L}\) and \(C_{D}\) in figure 16, the reconstructed \(C_{L}\) and \(C_{D}\) present smooth curves in the absence of input noise (\(\gamma=0\)). With increasing \(\gamma\), \(C_{L}\) and \(C_{D}\) show high fluctuations resulting in a larger \(L_{2}\) error, but with the overall trend well reproduced. Figure 11: Reconstructed vorticity flow field with large region training and windowed region training. Results shown for \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.65,0.40,0)\). The estimated \(C_{P}\) from noisy pressure inputs is depicted in figure 17. While the error steadily increases with the noise magnitude, we find that the error at \(u_{\infty}t/c=2.55\) is larger than that at \(u_{\infty}t/c=0.85\). This is caused by the intense wake-vortex gust interaction at \(u_{\infty}t/c=2.55\), which induces rapid changes in the pressure distribution on the airfoil surface. Although the error reaches approximately 0.5 with \(\gamma=0.178\), the whole trend of the \(C_{P}\) curve is well estimated, supporting the robustness of the present machine-learning model. The reconstructed vorticity fields across the different levels of noisy inputs are also exhibited in figure 18. We show two cases of the vortical disturbance, \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.35,0.95,-0.15)\) and \((-0.60,0.91,-0.09)\). A large positive disturbance is introduced in the former case, while a negative vortical gust travels over the airfoil in the latter case. In the case of \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.35,0.95,-0.15)\), the estimated vorticity field retains the primary vortical features for \(\gamma\leq 0.178\). The estimated flow field deviates from the reference DNS field at \(\gamma=0.28\). For the case of \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(-0.60,0.91,-0.09)\), a spurious negative structure attached to the trailing-edge vortex emerges beyond \(\gamma=0.178\), albeit the overall flow is reconstructed well. At \(\gamma=0.28\), although the \(L_{2}\) error norm is relatively high, the main wake structures are nonetheless reconstructed. These results suggest that the present machine-learning models that incorporate dynamics are robust against noisy pressure measurements, even with a small amount of training data. ## 6 Concluding remarks High-fidelity machine-learning-based reconstructions are developed for aerodynamic force coefficients, pressure distribution over the airfoil, and two-dimensional vorticity fields that experience an impact with a disturbance vortex. Such reconstruction using sparse sensor measurements and a modest amount of training data is extremely challenging due to the strong nonlinearities and the transient nature of the flow fields, which require a vast parameter space to be covered during the learning process. Figure 12: Dependence of the reconstruction accuracy on the present enhancement methods with window training for the vorticity wake problem. Results are shown for the case \((u_{\theta\max}/u_{\infty},R/c,y_{0}/c)=(0.72,0.64,-0.10)\). For accurate reconstruction, we developed machine learning models that are suitable for estimating the transient flow features. A multi-layer perceptron is chosen for its ability to construct the nonlinear relation between limited sensor measurements and the aerodynamic force coefficients as well as the pressure over the airfoil surface. A convolutional neural network coupled with the MLP addresses the problem of estimating the information-rich vorticity fields in an efficient way through the filtering process.
To better capture dynamical features in time, long short-term memory (LSTM)-assisted transfer learning is utilized to pass information from past flow states, and is embedded in the two aforementioned model structures. Due to the transient nature of the vortex-airfoil interaction problem, the use of LSTM greatly assists in the improvement of the estimation with as few as 10 training snapshots. The main contribution of the present study is to show how time-varying flows spanning a vast parameter space can be reconstructed accurately. For this study, the parameter space comprises the maximum rotational velocity, radius, and position of the disturbance vortex. As shown in this paper, careful sampling of the training data and incorporation of dynamics into the machine-learning model are important. Based on our study, we also showed that accurate reconstruction of vortical structures is easier to accomplish for high-intensity interaction processes between the vortical disturbance and the airfoil (a strong vortex with large size, interacting close to the airfoil). In addition, we assessed proper sensor locations over different time periods. We expect that the present machine-learning-based reconstruction method will be useful in predicting and controlling flows associated with vortex-airfoil interactions in the future. Figure 14: Dependence of the reconstruction accuracy on the location of sensors for the reconstructed vorticity wake. \((n_{\text{case}},n_{\text{ss}},\text{LSTM})=(50,50,\text{Y})\). As a test case, we use \((u_{\theta\text{max}}/u_{\infty},R/c,y_{0}/c)=(0.65,0.40,0)\). Figure 13: Dependence of the \(L_{2}\) errors on the sensor positions. Cases 1 to 5 denote 8 uniform sensors, 3 leading edge sensors, 3 top surface sensors, 3 trailing edge sensors, and 3 bottom surface sensors, respectively. The machine-learning model with the condition of \((n_{\text{case}},n_{\text{ss}},\text{LSTM})=(50,50,\text{Y})\) is used. ## Acknowledgments YZ, KF and KT acknowledge Ebara Corporation for supporting this research. We are thankful to Akira Goto, Motohiko Nohmi, Masashi Obuchi, and Hiroyoshi Watanabe for enlightening discussions.
2310.01197
Temperature inhomogeneities in Mrk71 can not be discarded
In a very recent work, [1] claim that the scenario of temperature inhomogeneities proposed by [2] ($t2$ > 0) is not able to explain the O$^{2+}$/H$^{+}$ abundance discrepancy observed between the calculations based on the optical [OIII] collisional excited lines (CELs) and the OII recombination lines (RLs) in the star forming galaxy Mrk71. In this work, we show that conclusions of [1] depend on several assumptions on the absolute flux calibration, reddening correction and the adopted electron density. In fact, using the data of [1] in a different way and even considering their 1{\sigma} uncertainties, it is possible to reach the opposite conclusion, consistent with $t2$ = $0.097 ^{+0.008}_{-0.009}$. Therefore, the existence of temperature inhomogeneities causing the O$^{2+}$/H$^{+}$ abundance discrepancy in Mrk71 can not be ruled out.
J. Eduardo Méndez-Delgado, César Esteban, Jorge García-Rojas, Kathryn Kreckel, Manuel Peimbert
2023-10-02T13:33:05Z
http://arxiv.org/abs/2310.01197v1
# Temperature inhomogeneities in Mrk 71 can not be discarded ###### Abstract In a very recent work, [1] claim that the scenario of temperature inhomogeneities proposed by [2] (\(t^{2}>0\)) is not able to explain the O\({}^{2+}\)/H\({}^{+}\) abundance discrepancy observed between the calculations based on the optical [O III] collisional excited lines (CELs) and the O II recombination lines (RLs) in the star forming galaxy Mrk 71. In this work, we show that conclusions of [1] depend on several assumptions on the absolute flux calibration, reddening correction and the adopted electron density. In fact, using the data of [1] in a different way and even considering their \(1\sigma\) uncertainties, it is possible to reach the opposite conclusion, consistent with \(t^{2}=0.097^{+0.008}_{-0.009}\). Therefore, the existence of temperature inhomogeneities causing the O\({}^{2+}\)/H\({}^{+}\) abundance discrepancy in Mrk 71 can not be ruled out. [1] tested the presence of temperature inhomogeneities in the star forming galaxy Mrk 71. To carry out their analysis, [1] used an optical spectrum from the Keck Cosmic Web Imager (KCWI) at the W. M. Keck Observatory as well as IR spectra from the Far Infrared Field-Imaging Line Spectrometer (FIFI-LS) at the Stratospheric Observatory for Infrared Astronomy (SOFIA) and from the Photodetector Array Camera and Spectrometer (PACS) at the Herschel Space Observatory. By comparing the different H I line flux ratios with the theoretical predictions, they infer a reddening constant \(c(\mathrm{H}\beta)=0.09\pm 0.04\), considering the reddening curve of [3] with \(R_{V}=3.1\). They derive the electron density (\(n_{e}\)) of the gas with three indicators: [O II] \(\lambda 3726/\lambda 3729\), [O III] \(\lambda 52\mu\)m/\(\lambda 88\mu\)m and the O II V1 RL multiplet. On the other hand, they derive the \(T_{e}\) by considering [O III] \(\lambda 4363/\lambda 4959\) as well as [O III] \(\lambda 4959/\lambda 52\mu\)m and \(\lambda 4959/\lambda 88\mu\)m. From the comparison of the O\({}^{2+}\)/H\({}^{+}\) abundance derived both with optical [O III] CELs and O II RLs and assuming that the abundance discrepancy (AD) is produced by temperature variations, they infer a \(t^{2}\sim 0.1\) (see Eq. (12) from [2]). This result would imply that both the derived temperature from [O III] \(\lambda 4959/\lambda 52\mu\)m and \(\lambda 4959/\lambda 88\mu\)m should be \(\sim 3000\) K lower than what is obtained from [O III] \(\lambda 4363/\lambda 4959\). However, [1] found a good consistency between their calculations of \(T_{e}\) based on [O III] \(\lambda 4959/\lambda 52\mu\)m, \(\lambda 4959/\lambda 88\mu\)m and \(\lambda 4363/\lambda 4959\) and therefore they claim the absence of significant temperature fluctuations in Mrk 71. The results obtained by [1] are highly dependent on the accuracy of the absolute flux calibration between the three instruments, since there are no H I detections in the IR data that could be used to normalize the spectra. Considering that the KCWI observations were taken under non-photometric conditions [1], that the FIFI-LS observations present telluric features [4] and that the PACS observations were carried out in the "un-chopped" mode and show detector response variations [5], the absolute flux calibration between the three different kinds of data is not straightforward. 
In fact, the comparison of [C II] \(\lambda 158\mu\)m, detected both in FIFI-LS and PACS reveals a difference of \(\sim 15\%\) between the flux calibrated data of both instruments even after the PACS detector response variations correction. Possible systematic differences between the optical and IR-spectra are not analyzed or quantified by [1]. The difference between the FIFI-LS and PACS spectra implies the existence of a systematic bias in the flux of at least one of the [O III] IR CELs used. Dividing the flux difference of \(\sim 15\%\) quadratically and including it in the uncertainty bars does not properly treat the systematic error, as it impacts differently [O III] \(\lambda 4959/\lambda 52\mu\)m than [O III] \(\lambda 4959/\lambda 88\mu\)m, given their different \(n_{\mathrm{e}}\)-dependence. Considering that both [O III] IR CELs have similar fluxes, a more robust way to reduce the impact of the flux bias is to use the sum of their fluxes instead, comparing \(T_{\mathrm{e}}\) derived from [O III] \(\lambda 4959/\lambda\lambda 52+88\mu\)m with the value obtained using \(\lambda 4363/\lambda 4959\). The flux systematic difference also calls into question the density derived from [O III] \(\lambda 52\mu\)m/\(\lambda 88\mu\)m with the reported absolute fluxes. In Fig. 1 we present the resulting plasma diagnostics considering the line fluxes and uncertainties reported by [1] under three values of \(c(\mathrm{H}\beta)\), all consistent within the reported \(1\sigma\). In all cases, the reddening curve of [3] with \(R_{V}=3.1\) was used. The atomic data used were the default ones from PyNeb [6] in its version 1.1.16, the same ones assumed by [1]. As shown in Fig. 1, depending on the \(c(\mathrm{H}\beta)\) and the \(n_{e}\) adopted, it is possible to be consistent either with the absence of temperature inhomogeneities (as [1] concluded) or the opposite case, predicted by \(t^{2}=0.097^{+0.008}_{-0.009}\). The adoption of a density value close to \(n_{e}(\mathrm{[O\,II]}\)\(\lambda 3726/\lambda 3729)\)=\(160\pm 10\mathrm{\;cm^{-3}}\) instead of the available \(n_{e}(\mathrm{O\,II})\)=\(310\pm 50\mathrm{\;cm^{-3}}\) was not justified by [1]. Both values could be considered typical within the range of densities found in previous studies of Mrk 71 [7; 8; 9]. The most evident problem with the \(n_{e}\) value derived from \(\mathrm{[O\,II]}\)\(\lambda 3726/\lambda 3729\) by [1] is that panels (c) and (d) of their Fig. 2 show that the spectral resolution of KCWI is insufficient to have at least a partial separation of the \(\mathrm{[O\,II]}\) doublet. Therefore, this density value strongly depends on the assumptions imposed on the Gaussian deblend required to measure the lines separately with uncertainty bars of \(\sim 0.6\%\). In order to reinforce their arguments on the absence of strong density and temperature fluctuations in Mrk 71, [1] mention that in a sample of regions, among which Mrk 71 does not appear, [10] found differences of only \(\sim 1000\) K between \(\mathrm{[O\,III]}\)\(\lambda 1666/\lambda 5007\) and \(\mathrm{[O\,III]}\) and \(\lambda 4363/\lambda 5007\) and that this is not sufficient for the temperature fluctuations scenario to explain the observed AD factor. However, such an assertion can be proven to be incorrect. Considering Eq. 15 from [2]: \(T_{e}(\mathrm{[O\,III]}\lambda 1666/\lambda 5007)-T_{e}(\mathrm{[O\,III]} \lambda 4363/\lambda 5007)\mathrm{[K]}=12330\times t^{2}\). 
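The temperature differences implied by this relation can be read off directly; a short numerical check (with the uncertainty on \(t^{2}\) propagated linearly) gives:

```python
# Eq. 15 of [2]: T_e([O III] 1666/5007) - T_e([O III] 4363/5007) = 12330 K x t^2.
for t2, dt2 in ((0.04, None), (0.097, 0.009)):
    dT = 12330.0 * t2
    if dt2 is None:
        print(f"t^2 = {t2}: Delta T_e ~ {dT:.0f} K")
    else:
        print(f"t^2 = {t2}: Delta T_e ~ {dT:.0f} +/- {12330.0 * dt2:.0f} K")
```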
For the most common value of \(t^{2}\sim 0.04\) found for the star forming regions [11; 12], the temperature difference would be of \(\sim 500\) K. A very extreme value of \(t^{2}=0.097^{+0.008}_{-0.009}\), would imply a difference of \(1200\pm 110\) K. This exercise demonstrates that the temperature differences found by [10] can comprise typical and extreme values of \(t^{2}\). We conclude that based on the data and the analysis presented by [1], one cannot be conclusive on the presence or absence of temperature inhomogeneities in Mrk 71, since both interpretations are possible even within their estimated \(1\sigma\) uncertainties. To be conclusive in this regard, it is necessary to consider the possible presence of density variations that could introduce systematic biases on the \(n_{e}\) diagnostics even if the line intensity ratios are well measured [13]. \(n_{e}\)-biases could affect determinations of \(\mathrm{O^{2+}/H^{+}}\) based on \(\mathrm{[O\,III]}\)\(\lambda\lambda 52+88\mu\)m in a much higher extent than those based on \(\mathrm{O\,II}\) V1. Observational evidence of temperature and density inhomogeneities in star-forming regions (including Mrk 71) is presented by [14] and [15], respectively. Figure 1: **Using the data presented by [1] it is possible to show either the absence of temperature inhomogeneities or the opposite case.** Each panel shows a plasma diagnostic plot considering a slightly different reddening constant, \(c(\mathrm{H}\beta)\), all consistent within the \(1\sigma\) uncertainties. According to the \(t^{2}\)-paradigm proposed by [2], in the presence of temperature inhomogeneities, [O III] \(\lambda 4959/\lambda\lambda 52+88\mu\)m (green band) should be lower than [O III] \(\lambda 4363/\lambda 4959\) (red band), matching the value predicted by the O II RLs (black band). Depending on \(c(\mathrm{H}\beta)\) and the adopted electron density (\(n_{e}\)), it is possible to argue either against the existence of temperature fluctuations (as concluded by [1]) or the opposite case. Acknowledgments.JEM-D thanks the help provided by V. Gomez-Llanos in managing the assignment of colors in the PyNeb plasma diagnostics and to O. V. Egorov for fruitful discussions. Authors' contributions.JEM-D lead the analysis and writing of the manuscript. CE, JG-R, KK and MP provided critical feedback and modified the text. Conflict of interest/Competing interests.The authors declare that they have no competing financial interests. Data availability.All the data discussed here was presented by [1]. Code availability.Our results use the PyNeb code, publicly available on GitHub. [https://github.com/Morisset/PyNeb_devel](https://github.com/Morisset/PyNeb_devel) Funding.JEM-D and KK gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the form of an Emmy Noether Research Group (grant number KR4598/2-1, PI Kreckel). CE and JG-R acknowledge support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (AEI-MCINN) under grant _Espectroscopia de campo integral de regiones H II locales. Modelos para el estudio de regiones H II extragalacticas_ with reference 10.13039/501100011033 and support under grant P/308614 financed by funds transferred from the Spanish Ministry of Science, Innovation and Universities, charged to the General State Budgets and with funds transferred from the General Budgets of the Autonomous Community of the Canary Islands by the MCIU. 
JG-R acknowledges support from an Advanced Fellowship under the Severo Ochoa excellence program CEX2019-000920-S and financial support from the Canarian Agency for Research, Innovation and Information Society (ACIISI), of the Canary Islands Government, and the European Regional Development Fund (ERDF), under grant with reference ProID2021010074. Additional Information.Correspondence should be addressed to JEM-D: [email protected]
2303.06918
A fully automatized method for the unambiguous wavelength-by-wavelength determination of the thickness and optical property of a very thin film with a transparent range
Spectroscopic ellipsometry is a powerful method with high surface sensitivity that can be used to monitor the growth of even sub-monolayer film. However, the analysis of ultrathin films is complicated by the correlation of the dielectric constant and the thickness. This problem is usually resolved by fixing one or the other value, limiting the information that can be extracted. Here, we propose a method to determine unambiguously the refractive index, extinction coefficient and thickness of a film when a transparent range is available in the energy range investigated. We decompose the analysis in three steps. First, the thickness of the film is determined from the transparent range of the film. Then, knowing the thickness of the layer, an initial estimation of the refractive index and extinction coefficient is made based on a first-order Taylor expansion of the ellipsometric ratio. Finally, using this estimation, a numerical regression is done to ensure the convergence of the fit towards the solution. A theoretical example of the method is given for two different thicknesses of TiO2 films. Finally, the method is applied to the experimental data measured during the atomic layer deposition of a thin film of Hf0.5Zr0.5O2 grown on Si. The thickness, refractive index and extinction coefficient are retrieved with a high precision in the energy range of 3.5 - 6.5 eV. A detailed analysis is presented on the accuracy of the retrieved values and their dependency on random and systematic errors for different energy ranges.
Florian Maudet, Charlotte Van Dijck, Muhammad Hamid Raza, Catherine Dubourdieu
2023-03-13T08:36:09Z
http://arxiv.org/abs/2303.06918v1
A fully automatized method for the unambiguous wavelength-by-wavelength determination of the thickness and optical property of a very thin film with a transparent range ###### Abstract Spectroscopic ellipsometry is a powerful method with high surface sensitivity that can be used to monitor the growth of even sub-monolayer film. However, the analysis of ultrathin films is complicated by the correlation of the dielectric constant and the thickness. This problem is usually resolved by fixing one or the other value, limiting the information that can be extracted. Here, we propose a method to determine unambiguously the refractive index, extinction coefficient and thickness of a film when a transparent range is available in the energy range investigated. We decompose the analysis in three steps. First, the thickness of the film is determined from the transparent range of the film. Then, knowing the thickness of the layer, an initial estimation of the refractive index and extinction coefficient is made based on a first-order Taylor expansion of the ellipsometric ratio. Finally, using this estimation, a numerical regression is done to ensure the convergence of the fit towards the solution. A theoretical example of the method is given for two different thicknesses of TiO\({}_{2}\) films. Finally, the method is applied to the experimental data measured during the atomic layer deposition of a thin film of Hf\({}_{0.5}\)Zr\({}_{0.5}\)O\({}_{2}\) grown on Si. The thickness, refractive index and extinction coefficient are retrieved with a high precision in the energy range of \(3.5-6.5\) eV. A detailed analysis is presented on the accuracy of the retrieved values and their dependency on random and systematic errors for different energy ranges. ## 1 Introduction Spectroscopic ellipsometry is an optical, non-destructive, characterization method commonly used to precisely monitor the growth of thin films both in research and industry [1]. This method relies on the measurement of the complex ellipsometric ratio, \(\rho\), that characterize the changes in polarization after a linearly polarized light interacts with a sample. Remarkably the measurement is made without the need to calibrate the background intensity as opposed for example to spectrophotometry, an aspect that consequently enhance the reliability of the measurement [2]. Furthermore owing to its high sensitivity to surface change and low footprint, the method is particularly suited for in-situ study [3]. In the simple case of an isotropic three-phase configuration consisting of the ambient (with complex refractive index \(n_{a}\)), thin film (\(n_{f}\)) and substrate (\(n_{s}\)), ellipsometry allows to determine the unknown thickness \(d_{f}\) and dielectric constant of the thin film \(\varepsilon_{f}=\left(n_{f}\right)^{2}=\left(n_{f}+ik_{f}\right)^{2}\) where \(n_{f}\), \(n_{f}\) and \(k_{f}\) are the complex refractive index, real refractive index and extinction coefficient of the thin film respectively. To do so in most cases the retrieval of \(d_{f}\) and \(n_{f}\) from the measured \(\rho\) is made by developing an optical model assuming a certain dispersion property of the dielectric constant [1]. Indeed due to the non-linearity of the optical equations no direct inversion can be made [4]. The information is retrieved by varying the parameters of the optical model to minimize the Mean Square Error (MSE) that characterizes the error between the measured and modeled data. 
Therefore, a prior estimation of the optical properties is needed to ensure the convergence of the model towards a realistic solution. This method can lead to incorrect optical properties if spectral features of the film, that were not anticipated and therefore absent from the model, are overlooked. This is the case for example for a sample with sub-bandgap absorption features that would not have been taken into account with a simple Tauc-Lorentz model [5]. Furthermore, in the case of a very thin film (\(\frac{d_{f}}{\lambda}\ll 1\) with \(\lambda\) the wavelength) this approach cannot be applied as \(\tilde{n}_{f}\) and \(d_{f}\) become strongly correlated [2]. This is particularly problematic for the very first steps of the growth of a thin film. This issue is usually overcome by fixing either \(d_{f}\) or \(\tilde{n}_{f}\). For example, to study an atomic layer deposited (ALD) film it is common to fix the optical property of a growing film to the bulk value and to recover information on the film growth from the thickness evolution [6, 7, 8, 9]. However, in addition to preventing us from retrieving information on the dielectric constant of the film, this method leads to incorrect values of the thickness in the case of ultrathin films (typically below 10nm) as \(\tilde{n}_{f}\) depends on the film thickness (an ultrathin film of e.g. 0.8 nm has a refractive index different from the one of the bulk). A method to avoid this issue is to use complementary measurements to disambiguate \(d_{f}\) and \(\tilde{n}_{f}\), such as measuring the mass of the deposited material with a quartz crystal microbalance [10]. Another method was developed relying on the simultaneous measurement of changes in the reflected intensity and \(\rho\) to disambiguate \(d_{f}\) and \(\tilde{n}_{f}\)[11]. A drawback is the need for a precise measurement of the intensity as it dominates the measurement accuracy [11]. Another approach solely relying on the measurement of \(\rho\) was developed by minimizing the presence of artefacts from the substrate in the dielectric constant for incorrect thicknesses [12, 13]. This method allows to unambiguously determine the thickness when a substrate presents sharp feature, i.e. with a high variation with energy, like a critical point. An a priori knowledge of \(\tilde{n}_{f}\) is, however, necessary to ensure the convergence towards the correct solution as multiple solutions of \(\tilde{n}_{f}\) coexist for a given \(d_{f}\)[4]. Finally, when the material is transparent (\(k_{f}=0\)), \(\tilde{n}_{f}\) is a real at least on part of the investigated spectral range, a direct inversion of the thickness and refractive index can be made [14]. The method was extended recently to take into account the error on \(d_{f}\), enhancing the accuracy [15]. Knowing \(d_{f}\), \(\tilde{n}_{f}\) can then be calculated for the whole spectral range by mapping all existing solutions and selecting the solution that is physically reasonable [16]. This step is computationally intensive and requires a manual selection of the solution to ensure that a physical solution is found. It limits the applicability for a real time analysis of a growing film. Therefore, a point-by-point method to unambiguously determine \(d_{f}\) and \(\tilde{n}_{f}\) without prior assumption on \(\tilde{n}_{f}\) or manual selection of the solution is desirable to study the growth of very thin films _in situ_ and in real time. 
Here, we propose to address this issue by developing a fully automated method to determine unambiguously \(d_{f}\),\(n_{f}\) and \(k_{f}\) of a very thin film where a transparent range is available. The method is well suited to study the growth of dielectrics or semiconductors with a bandgap that lies in the measured range. We decompose the analysis in three steps. First, the thickness of the film is determined following the procedure developed by F. L. McCrakin and M. Gilliot _et al._[14, 15]. Then, knowing the thickness of the layer, the refractive index and extinction coefficient can be retrieved for the whole spectral range without any prior knowledge on the film using a first order Taylor expansion of the ellipsometric ratio \(\rho\) as proposed by G.H. Jung et al. [17]. Finally, using this first order approximation to ensure convergence toward the correct solution, a wavelength-by-wavelength regression is made. A theoretical example is presented for a thin film of TiO\({}_{2}\). A detailed analysis and discussion on the error of \(d_{f}\),\(n_{f}\) and \(k_{f}\) for different thicknesses of TiO\({}_{2}\) is presented. Finally, the method is demonstrated for the practical example of a very thin film of Hf\({}_{0.5}\)Zr\({}_{0.5}\)O\({}_{2}\) (HZO) grown by ALD. ## II. Model and method to unambiguously determine \(\tilde{n}_{f}\) and \(d_{f}\) The method can be divided in three parts: a first part to determine the thickness of the film, a second one to have an estimation of the optical properties from the calculated thickness and a third one that uses this estimation to ensure convergence towards the actual solution by a numerical regression. We consider here the simple case of an isotropic three-layer configuration: ambient (\(\tilde{n}_{a}\)), thin film (\(\tilde{n}_{f}\)) and substrate (\(\tilde{n}_{S}\)) as illustrated on Figure 1. Figure 1: Schematic of an ellipsometry measurement on a bare substrate (a) and in three-phase ambient/thin film/substrate configuration (b) ### Thickness determination from a McCrakin inversion: We remind below the procedure that was originally presented by F. L. McCrakin to evaluate the thickness of a transparent thin film and further developed recently [14, 16]. As we are considering a transparent thin film, \(\tilde{n}_{f}=n_{f}\). The experimentally measured ellipsometric ratio is given by: [1] \[\rho_{e}=\frac{r_{p}}{r_{s}}=\tan\psi_{e}\;e^{-i\Delta_{e}}\;\;\;\;\;\;\;\;(1)\] where \(\psi_{e}\) and \(\Delta_{e}\) are the measured ellipsometric angles, \(r_{p}\) and \(r_{s}\) are the p and s polarized complex reflection coefficients of the stack respectively. 
These coefficients are given by: \[r_{p}=\frac{r_{af,p}+r_{fsub,p}X}{1+r_{af,p}r_{fsub,p}X}\ \text{and}\ r_{s}=\frac{r_{af,s}+r_{fsub,s}X}{1+r_{af,s}r_{fsub,s}X}, \tag{2}\] \[\text{with}\ X=e^{\frac{j4\pi\tilde{d}_{f}\sqrt{\tilde{n}_{f}^{2}-n_{a}^{2}\sin^{2}\theta_{i}}}{\lambda}}. \tag{3}\] From Eq. (6), for a given \(n_{f}\) two solutions of \(X\) can be found. Consequently, using those solutions, two values of \(\tilde{d}_{f}(n_{f})\) can be calculated. The main idea of the procedure is that, as the thickness must be a real number, the correct value of \(n_{f}\) is the one that cancels out the imaginary part of the thickness, such that: \[Im\big(\tilde{d}_{f}(n_{f})\big)=0. \tag{7}\] These two values of \(\tilde{d}_{f}(n_{f})\) can be numerically computed for the whole wavelength (or energy) range with the method described in [16].
To do so, the values of \(\tilde{d}_{f}\) are computed for a broad range of \(n_{f}\) values, typically \(n_{f}=[1-10]\) with 100 steps. Approximated values of \(n_{f}\) are given by those values corresponding to the change in the sign of the imaginary part of \(\tilde{d}_{f}\). From these initial approximations, the precise values of \(n_{f}\) are then finally computed using an algorithm to find the root of Eq. (7). The algorithm used in this work is the Newton-Raphson method [18]. The sign ambiguity in the solution of Eq. (6) is solved by keeping the solution that makes physical sense (\(d_{f}>0\)). As this computation is made for all the measured wavelengths, \(d_{f}\) can be presented as a function of energy (\(E\)). Although the measurements are usually made with a fixed wavelength step, the results will be discussed as a function of energy since this scale is more relevant to discuss the material properties like the bandgap. Obviously \(d_{f}\) should be constant for all energies. However, two causes can explain the energy dependency of \(d_{f}\): - This computation is made with the hypothesis that \(k_{f}=0\), therefore \(d_{f}\) will vary in the energy range where this is not true. - Measurements inevitably contain errors which cause variation of \(d_{f}\) with the energy. Evaluating the impact of the errors of the measurement on \(d_{f}\) is thus critical to evaluate the range of energy where it can be accurately determined.[15] The error of \(d_{f}\) can be calculated from the propagation error formula as follows [19]: \[\sigma_{d_{f}}=\left\{\begin{array}{c}\left[\left(\frac{\partial d_{f}}{ \partial\psi}\right)^{2}\sigma_{\psi}^{2}+\left(\frac{\partial d_{f}}{ \partial\Delta}\right)^{2}\sigma_{\Delta}^{2}+\left(\frac{\partial d_{f}}{ \partial\theta_{1}}\right)^{2}\sigma_{\theta_{i}}^{2}+\left(\frac{\partial d_ {f}}{\partial n_{sub}}\right)^{2}\sigma_{n_{sub}}^{2}+\left(\frac{\partial d_ {f}}{\partial k_{sub}}\right)^{2}\sigma_{k_{sub}}^{2}+\left(\frac{\partial d_ {f}}{\partial\lambda}\right)^{2}\sigma_{\lambda}^{2}\\ +2\left(\frac{\partial d_{f}}{\partial\psi}\right)\left(\frac{\partial d_{f}}{ \partial n_{sub}}\right)\sigma_{\psi n_{sub}}^{2}+2\left(\frac{\partial d_{f} }{\partial\lambda}\right)\left(\frac{\partial d_{f}}{\partial n_{sub}}\right) \sigma_{\Delta n_{sub}}^{2}\\ +2\left(\frac{\partial d_{f}}{\partial\psi}\right)\left(\frac{\partial d_{f}}{ \partial k_{sub}}\right)\sigma_{\psi k_{sub}}^{2}+2\left(\frac{\partial d_{f }}{\partial\lambda}\right)\left(\frac{\partial d_{f}}{\partial k_{sub}}\right) \sigma_{\Delta k_{sub}}^{2}\\ +2\left(\frac{\partial d_{f}}{\partial\psi}\right)\left(\frac{\partial d_{f}}{ \partial\theta_{i}}\right)\sigma_{\psi\theta_{i}}^{2}+2\left(\frac{\partial d _{f}}{\partial\lambda}\right)\left(\frac{\partial d_{f}}{\partial\theta_{i}} \right)\sigma_{\Delta\theta_{i}}^{2}\\ +2\left(\frac{\partial d_{f}}{\partial\psi}\right)\left(\frac{\partial d_{f}}{ \partial\lambda}\right)\sigma_{\psi\lambda}+2\left(\frac{\partial d_{f}}{ \partial\lambda}\right)\left(\frac{\partial d_{f}}{\partial\lambda}\right) \sigma_{\Delta\lambda}\end{array}\right. \tag{8}\] where the \(\sigma_{j}\) is the standard deviation of the associated parameter \(j\) and \(\sigma_{xy}^{\ 2}\) is the covariance of the parameters \(x,y\). Therefore, by looking at the energy range where \(\sigma_{d_{f}}\)and \(\left|\frac{\partial d}{\partial E}\right|\) are minimum, the thickness can be accurately evaluated. 
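A minimal single-wavelength sketch of this scan-and-root-finding procedure is given below (NumPy/SciPy). Imposing \(\rho_{e}=r_{p}/r_{s}\) with the three-phase expressions of Eq. (2) yields a quadratic in \(X\), whose roots are converted to complex thicknesses through Eq. (3); the \(\tilde{n}=n+ik\) sign convention, the synthetic input generated from the forward model, the illustrative silicon index and the candidate-selection rule are assumptions of this sketch rather than the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import brentq

def interface_coeffs(n_f, n_a, n_sub, theta_i):
    # Fresnel reflection coefficients of the ambient/film (1) and film/substrate (2)
    # interfaces, plus cos(theta) inside the film (n + ik convention assumed).
    sin_a, cos_a = np.sin(theta_i), np.cos(theta_i)
    cos_f = np.sqrt(1 - (n_a * sin_a / n_f) ** 2 + 0j)
    cos_s = np.sqrt(1 - (n_a * sin_a / n_sub) ** 2 + 0j)
    r1p = (n_f * cos_a - n_a * cos_f) / (n_f * cos_a + n_a * cos_f)
    r1s = (n_a * cos_a - n_f * cos_f) / (n_a * cos_a + n_f * cos_f)
    r2p = (n_sub * cos_f - n_f * cos_s) / (n_sub * cos_f + n_f * cos_s)
    r2s = (n_f * cos_f - n_sub * cos_s) / (n_f * cos_f + n_sub * cos_s)
    return r1p, r1s, r2p, r2s, cos_f

def rho_three_phase(n_f, d_f, n_a, n_sub, theta_i, lam):
    # Forward model of Eqs. (1)-(5); used here only to fabricate a synthetic measurement.
    r1p, r1s, r2p, r2s, cos_f = interface_coeffs(n_f, n_a, n_sub, theta_i)
    X = np.exp(4j * np.pi * d_f * n_f * cos_f / lam)
    return ((r1p + r2p * X) / (1 + r1p * r2p * X)) / ((r1s + r2s * X) / (1 + r1s * r2s * X))

def thickness_candidates(rho_e, n_f, n_a, n_sub, theta_i, lam):
    # For a trial *real* n_f, imposing rho_e = r_p/r_s turns Eq. (2) into a quadratic
    # A X^2 + B X + C = 0 in X; each root gives one complex thickness via Eq. (3).
    r1p, r1s, r2p, r2s, cos_f = interface_coeffs(n_f, n_a, n_sub, theta_i)
    A = rho_e * r1p * r2p * r2s - r2p * r1s * r2s
    B = rho_e * (r2s + r1p * r2p * r1s) - (r2p + r1p * r1s * r2s)
    C = rho_e * r1s - r1p
    kz = 2 * np.pi * n_f * cos_f / lam
    return [np.log(X) / (2j * kz) for X in np.roots([A, B, C])]

def im_d(n_f, rho_e, n_a, n_sub, theta_i, lam):
    # Imaginary part of the physically sensible thickness candidate (Re d > 0), Eq. (7).
    ds = [d for d in thickness_candidates(rho_e, n_f, n_a, n_sub, theta_i, lam) if d.real > 0]
    return min(ds, key=lambda d: abs(d.imag)).imag if ds else np.nan

# Synthetic "measurement": a 20 nm transparent film with n_f = 2.30 on silicon at 633 nm.
n_a, n_sub, theta_i, lam = 1.0, 3.88 + 0.02j, np.deg2rad(65.0), 633e-9
rho_e = rho_three_phase(2.30, 20e-9, n_a, n_sub, theta_i, lam)
args = (rho_e, n_a, n_sub, theta_i, lam)

grid = np.linspace(1.5, 4.0, 300)                       # scan of trial n_f values
vals = np.nan_to_num([im_d(n, *args) for n in grid])
for k in np.where(np.diff(np.sign(vals)) != 0)[0]:      # brackets of Im(d_f) = 0
    n_sol = brentq(im_d, grid[k], grid[k + 1], args=args)
    d_sol = min((d for d in thickness_candidates(rho_e, n_sol, n_a, n_sub, theta_i, lam)
                 if d.real > 0), key=lambda d: abs(d.imag))
    print(f"n_f = {n_sol:.3f},  d_f = {d_sol.real * 1e9:.2f} nm")
```

At a single wavelength, spurious zero crossings of \(Im(\tilde{d}_{f})\) can also appear; as described above, the physical solution is identified in practice by requiring \(d_{f}\) to remain constant across the transparent range.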
In practice, the thickness of the film is determined as a weighted average of \(d_{f}(E)\) with the weights \(w(E)\). The weights are calculated to minimize the values of \(\sigma_{d_{f}}\)and \(\left|\frac{\partial d}{\partial E}\right|\) using the following function: \[w(E)=\frac{1}{\left|\frac{\partial d_{f}(E)}{\partial E}\right|\sigma_{d_{f}(E )}} \tag{9}\] Although it was presented here for the case of a three-phase configuration it can also be applied for a multi-layer stack where one layer is unknown. The only restriction of this method is that the thin film should exhibit a transparency range within the measurement range [16]. ### Determination of \(\vec{n}_{f}\) from first order Taylor expansion Knowing \(d_{f}\) is not sufficient to disambiguate \(n_{f}\) and \(k_{f}\) from ellipsometric measurements since multiple solutions of \(\vec{n}_{f}\) coexist for a given thickness [4]. A method was proposed recently by G.H. Jung _et al._[17] to approximate \(\vec{n}_{f}\) without any _a priori_ knowledge in the case of very thin films \(\left(\frac{d_{f}}{\lambda}\ll 1\right)\). It relies on the first-order Taylor expansion of \(\rho\). They evidenced that, in such a configuration, \(\vec{n}_{f}\) can be approximated by: \[\vec{n}_{f}^{2}\approx\frac{1}{2}\Big{(}\vec{n}_{a}^{2}+\vec{n}_{sub}^{2}+ \frac{\delta_{\rho}}{\alpha}\Big{)}\pm\frac{1}{2}\sqrt{\left(\vec{n}_{a}^{2} +\vec{n}_{sub}^{2}+\frac{\delta_{\rho}}{\alpha}\right)^{2}-4\vec{n}_{a}^{2} \vec{n}_{sub}^{2}} \tag{10}\] \[\text{with, }\alpha=4i\frac{2\pi}{\lambda}d_{f}\,\frac{\vec{n}_{a}\vec{n}_{sub}^{2 }\cos(\theta_{i})\sin^{2}(\theta_{i})}{(\vec{n}_{a}-\vec{n}_{sub}^{2})(\vec{n }_{sub}^{2}-\vec{n}^{2}+(\vec{n}_{a}^{2}+\vec{n}_{sub}^{2})\cos(2\theta_{i}))} \tag{11}\] \[\text{and }\delta_{\rho}=\frac{\rho_{\text{e}}-\rho_{sub}}{\rho_{sub}} \tag{12}\] where \(\rho_{sub}\)is the ellipsometric ratio of the substrate before thin film deposition (Figure 1(a)). It can be either measured before thin film deposition or simulated from the known \(\vec{n}_{sub}\). The ambiguity in the sign can be removed by choosing the solution that is the closest to the refractive index as determined from Eq. (7) in the transparent range of the film. Numerical regression using Newton-Raphson algorithm from the \(\vec{n}_{f}\) first order Taylor expansion Since the aforementioned method to determine \(\vec{n}_{f}\) is a first order approximation, there will necessarily be a residual error between the modeled ellipsometric ratio \(\rho_{m}\) (that can be calculated from Eq. (5)) and \(\rho_{e}\). To minimize it, a regression can be made to refine the determination of \(\hat{n}_{f}\). The Newton-Raphson method can be used, as described here in matrix form for simplicity: We define the initial vector \(V_{f,0}\) as: \[V_{f,0}=\begin{pmatrix}n_{f,0}\\ k_{f,0}\end{pmatrix} \tag{13}\] where \(n_{f,0}\) and \(k_{f,0}\) are the values calculated from Eq. (10). Their values can then be refined in an iterative process by: \[V_{f,j+1}=V_{f,j}+J_{df,j}^{-1}.AV_{d,j} \tag{14}\] \[\text{with the error vector }\Delta V_{d,j}=\begin{pmatrix}Re(\rho_{e})-Re(\rho_{m}) _{j}\\ Im(\rho_{e})-Im(\rho_{m})_{j}\end{pmatrix} \tag{15}\] \[\text{and the Jacobian matrix }J_{df}=\begin{pmatrix}\frac{\partial Re(\rho_{m})}{ \partial n}&\frac{\partial Re(\rho_{m})}{\partial k}\\ \frac{\partial Im(\rho_{m})}{\partial n}&\frac{\partial Im(\rho_{m})}{ \partial k}\end{pmatrix} \tag{16}\] The algorithm is repeated until convergence. 
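A compact numerical sketch of this refinement is shown below. The forward model implements the three-phase \(\rho\) of Eqs. (1)-(5), the Jacobian of Eq. (16) is approximated by finite differences rather than evaluated analytically, and the \(\tilde{n}=n+ik\) convention, the substrate index and the starting guess (standing in for the first-order estimate of Eq. (10)) are assumptions of this illustration:

```python
import numpy as np

def rho_model(n_f, d_f, n_a, n_sub, theta_i, lam):
    # Three-phase forward model for rho = r_p/r_s (Eqs. 1-5), n + ik convention assumed.
    sin_a, cos_a = np.sin(theta_i), np.cos(theta_i)
    cos_f = np.sqrt(1 - (n_a * sin_a / n_f) ** 2 + 0j)
    cos_s = np.sqrt(1 - (n_a * sin_a / n_sub) ** 2 + 0j)
    r1p = (n_f * cos_a - n_a * cos_f) / (n_f * cos_a + n_a * cos_f)
    r1s = (n_a * cos_a - n_f * cos_f) / (n_a * cos_a + n_f * cos_f)
    r2p = (n_sub * cos_f - n_f * cos_s) / (n_sub * cos_f + n_f * cos_s)
    r2s = (n_f * cos_f - n_sub * cos_s) / (n_f * cos_f + n_sub * cos_s)
    X = np.exp(4j * np.pi * d_f * n_f * cos_f / lam)
    return ((r1p + r2p * X) / (1 + r1p * r2p * X)) / ((r1s + r2s * X) / (1 + r1s * r2s * X))

def refine_nk(rho_e, n0, k0, d_f, n_a, n_sub, theta_i, lam, steps=50, h=1e-7):
    # Newton-Raphson iteration of Eqs. (13)-(16) on V = (n_f, k_f); the 2x2 Jacobian
    # of (Re rho_m, Im rho_m) is approximated by forward finite differences.
    v = np.array([n0, k0], dtype=float)
    for _ in range(steps):
        rm = rho_model(v[0] + 1j * v[1], d_f, n_a, n_sub, theta_i, lam)
        resid = np.array([(rho_e - rm).real, (rho_e - rm).imag])       # Eq. (15)
        J = np.empty((2, 2))
        for col in range(2):
            vp = v.copy()
            vp[col] += h
            rp_ = rho_model(vp[0] + 1j * vp[1], d_f, n_a, n_sub, theta_i, lam)
            J[:, col] = [(rp_ - rm).real / h, (rp_ - rm).imag / h]     # Eq. (16)
        v = v + np.linalg.solve(J, resid)                              # Eq. (14)
    return v

# Synthetic check: a 1 nm film with assumed n_f = 2.30 + 0.10j on silicon at 633 nm.
n_a, n_sub, theta_i, lam = 1.0, 3.88 + 0.02j, np.deg2rad(65.0), 633e-9
rho_e = rho_model(2.30 + 0.10j, 1e-9, n_a, n_sub, theta_i, lam)
# The starting point below stands in for the first-order Taylor estimate of Eq. (10);
# the iteration should recover values close to the assumed 2.30 + 0.10j.
print(refine_nk(rho_e, n0=2.0, k0=0.05, d_f=1e-9, n_a=n_a,
                n_sub=n_sub, theta_i=theta_i, lam=lam))
```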
At this step, \(d_{f}\), \(n_{f}\) and \(k_{f}\) are determined unambiguously for a very thin film. Finally, an important aspect of this method is to estimate the error \(\sigma_{\hat{n}_{f}}\) on \(n_{f}\) and \(k_{f}\) to be able to discriminate physical features of the spectra from a measurement artefact. We apply again the propagation error formula leading to the following error expression: [19] \[\sigma_{\hat{n}_{f}}=\begin{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{ \partial\psi}\end{pmatrix}^{2}\sigma_{\psi}^{2}+\begin{pmatrix}\frac{\partial \hat{n}_{f}}{\partial\lambda}\end{pmatrix}^{2}\sigma_{\lambda}^{2}+\begin{pmatrix} \frac{\partial\hat{n}_{f}}{\partial d_{f}}\end{pmatrix}^{2}\sigma_{d_{f}}^{2}+ \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial n_{sub}}\end{pmatrix}^{2} \sigma_{nsub}^{2}+\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial k_{sub} }\end{pmatrix}^{2}\sigma_{ksub}^{2}+\begin{pmatrix}\frac{\partial\hat{n}_{f}} {\partial\lambda}\end{pmatrix}^{2}\sigma_{\lambda}^{2}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\psi}\end{pmatrix}\begin{pmatrix} \frac{\partial\hat{n}_{f}}{\partial\theta_{i}}\end{pmatrix}\sigma_{\psi\theta _{i}}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\theta_{i}}\end{pmatrix} \sigma_{\partial\theta_{i}}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\psi}\end{pmatrix}\begin{pmatrix} \frac{\partial\hat{n}_{f}}{\partial d_{f}}\end{pmatrix}\sigma_{\psi d_{f}}+2 \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial d_{f}}\end{pmatrix} \sigma_{\Delta d_{f}}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial d_{f} }\end{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\theta_{i}} \end{pmatrix}\sigma_{d_{f}\theta_{i}}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\psi}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\theta_{sub}}\end{pmatrix} \sigma_{\psi nsub}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda} \end{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\theta_{i}} \end{pmatrix}\sigma_{\Delta nsub}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\psi}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial k_{sub}}\end{pmatrix} \sigma_{\psi ksub}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda} \end{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial k_{sub}} \end{pmatrix}\sigma_{\Delta ksub}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial d_{f}}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial n_{sub}}\end{pmatrix} \sigma_{d_{f}n}\end{pmatrix}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial d _{f}}\end{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial k_{sub}} \end{pmatrix}\sigma_{d_{f}k_{sub}}\\ +2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\psi}\end{pmatrix} \begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda}\end{pmatrix} \sigma_{\psi\lambda}+2\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda} \end{pmatrix}\begin{pmatrix}\frac{\partial\hat{n}_{f}}{\partial\lambda}\end{pmatrix} \sigma_{d_{f}\lambda}\end{pmatrix} \tag{17}\] It must be noted that, here, all errors are considered as random noise produced by the measurement. This is not the case for example for an error on \(\theta_{i}\) that should be regarded as a systematic error, a fixed deviation inherent to the measurement configuration and which does not depend on the energy. 
However, considering only random errors allows us to evaluate their impact on different parts of the energy spectrum. The thin film thickness that will lead to correct values of \(\widehat{n}_{f}\) with this method depends on the error of the first order approximation. If this error is too large, the numerical regression will converge towards an incorrect solution. This point-by-point method makes it possible to explicitly evaluate the accuracy of ellipsometry on the determination of \(d_{f}\), \(n_{f}\) and \(k_{f}\) for the whole spectral range. This information is useful in itself, as it can be used for example to evaluate whether a spectral feature, like a small absorption below the bandgap, has a physical origin or whether it sits in a range of low accuracy and could then be associated with the measurement error. The possibility to analyze the error over the whole spectral range is an asset of this method, as a similar evaluation is hard to do when the analysis is made with a modeled dispersion law such as Tauc-Lorentz oscillators. The algorithm is available in the form of a Python code at [https://doi.org/10.5281/zenodo.7722620](https://doi.org/10.5281/zenodo.7722620)

## III Theoretical example: very thin film of TiO\({}_{2}\) on Si

To illustrate the method, we first evaluate it with the theoretical case of a very thin film of \(d_{f}=1.00\ nm\) of amorphous TiO\({}_{2}\) on a silicon substrate for the energy range of 0.77-6.20 eV, simulated for an incident angle of \(\theta_{i}=65^{\circ}\). TiO\({}_{2}\) is chosen as an example because it has a transparent range in the energy range considered here. Its dispersion law is modeled by a Tauc-Lorentz model (A = 256.08 eV, Br = 1.77 eV, E\({}_{0}\) = 4.00 eV, E\({}_{\text{g}}\) = 3.40 eV, \(\varepsilon_{\infty}=1\)) and the optical properties of the silicon substrate are taken from reference [20]. To evaluate the uncertainty of the measurement, we consider here relatively low but realistic errors of \(\sigma_{\psi}=\sigma_{\Delta}=\sigma_{\theta_{i}}=0.01^{\circ}\), \(\sigma_{\lambda}=0.1\ nm\) and errors of \(\sigma_{n_{sub}}=\sigma_{k_{sub}}=0.001\). In Figure 2(a) the thickness resulting from the McCrakin inversion of this stack is presented together with the calculated thickness from Eq. (9), with their respective errors. As expected, the thickness from the McCrakin inversion shows an energy dependence for values that are above the bandgap of TiO\({}_{2}\) (\(k_{f}\neq 0\)). Indeed, for this part of the spectrum the hypothesis of a transparent film is not valid, and this range can therefore not be used to determine the thickness. This inversion is therefore already providing interesting information on the dielectric constant of the film that can be used to confirm the values of \(\tilde{n}_{f}\): a region with \(k_{f}\neq 0\) should be reflected in a dispersive \(d_{f}\). Below the bandgap, the inversion leads to an exact match with the actual value of the thickness. The error distribution with energy also provides valuable information. It shows that the error on \(d_{f}\) increases exponentially with decreasing energy in the low energy range (\(2.0-0.8\) eV). This is due to the fact that, for decreasing energy, the difference between \(\rho_{sub}\) and \(\rho_{e}\) is decreasing, therefore leading to a higher sensitivity to the measurement parameter errors \(\sigma_{\psi}\), \(\sigma_{\Delta}\), \(\sigma_{\theta_{i}}\), \(\sigma_{\lambda}\) and \(\sigma_{n_{sub}}\) and to errors in \(\rho_{e}\).
The best energy range to accurately determine \(d_{f}\) is therefore, in this case, 2.0 - 3.4 eV. Using Eq. (9) and the error evaluation in Eq. (8), a precise determination of \(d_{f}=1.00\,\pm\,0.03\) nm is achieved. From the thickness value, a first estimation of \(n_{f}\) and \(k_{f}\) is then made from Eq. (10) and presented in Figure 2(b) and (c) (red curves). The initial estimation is a relatively good approximation of the dispersion law of TiO\({}_{2}\). However, we observe a larger difference between the actual and estimated values in the high energy range than in the low energy range, where both values converge.

Figure 2: (a) Thickness dependence on energy for a thin film of 1 nm (black dots) TiO\({}_{2}\) on silicon as determined by a McCrakin inversion method (red line) and calculated from Eq. (9) (blue dashed line). Dispersion law of the refractive index (b) and extinction coefficient (c) of TiO\({}_{2}\) (black dots), initial estimation from Eq. (10) (red line) and after the numerical regression (blue line). The colored areas are the calculated errors on the respective values.

The estimation is based on the first order Taylor expansion, relying on the hypothesis that \(\frac{d_{f}}{\lambda}\ll 1\); hence the error becomes smaller for decreasing energy (increasing wavelength) as the first order expansion becomes a more accurate approximation. Starting from this first estimation, the residual error between the measured and modeled \(\rho\) values is then minimized with Eqs. (13)-(16). The values of \(n_{f}\) and \(k_{f}\) after the numerical regression are presented in Figure 2(b) and (c) (blue curves), with their respective errors represented by the colored areas. After the numerical regression, the dispersion law of TiO\({}_{2}\) can be perfectly recovered. Regarding the errors on \(n_{f}\) and \(k_{f}\), a relatively low error is observed in both cases in the high energy range (> 3.5 eV) while a large one is observed for the low energy range (< 3.5 eV). Indeed, for decreasing energy, \(\delta_{\rho}\) is decreasing, leading to a higher sensitivity to the errors. Around 3 eV, the observed jump in the error is due to the proximity of the two solutions expressed in Eq. (10). Indeed, a small variation of the initial value of the numerical regression will lead the fit to converge towards one solution or the other. Consequently, the error on \(n_{f}\) and \(k_{f}\) is large in the low energy range. We then considered a thin TiO\({}_{2}\) film of \(d_{f}=5.00\ nm\), keeping everything else the same. The results are presented in Figure 3.

Figure 3: (a) Thickness dependence on energy for a thin film of 5.00 nm (black dots) TiO\({}_{2}\) on silicon as determined by a McCrakin inversion method (red line) and calculated from Eq. (9) (blue dashed line). Dispersion law of the refractive index (b) and extinction coefficient (c) of TiO\({}_{2}\) (black dots), initial estimation from Eq. (10) (red line) and after numerical regression (blue line). The colored areas are the calculated errors on the respective values.

Due to the increased thickness, the error on the McCrakin inversion is reduced and an accurate determination can therefore be made in a larger energy range, from 1.5 to 3.4 eV (Figure 3(a)). The calculated thickness from Eq. (9) is \(d_{f}=5.00\,\pm\,0.02\,\mathrm{nm}\).
The error on the thickness is much smaller than in the previous case (\(d_{f}=1.00\) nm) due to a lower dependence on the measurement errors, as 5 nm leads to a larger \(\delta_{\rho}\). The initial estimation of \(n_{f}\) and \(k_{f}\) is, however, quite different from the actual values (Figure 3(b) and (c)). This is expected, as the first-order Taylor expansion leads to a larger error for thicker films. The regression, however, leads to a very accurate determination of \(n_{f}\) and \(k_{f}\), with a very low error over the considered energy range, also due to the higher \(\delta_{\rho}\). With these two examples, we show that \(d_{f}\), \(n_{f}\) and \(k_{f}\) can be unambiguously determined, with the determination of the thickness being made without any assumption on the \(n_{f}\) value. However, in the case of ultrathin films (1 nm), the \(n_{f}\) and \(k_{f}\) values cannot be determined below a given energy (here 3.4 eV) as the error becomes too large. Note that, above a certain film thickness, the method will lead to incorrect values of \(\tilde{n}_{f}\) due to a high error of the first order approximation. The initial values of \(n_{f}\) and \(k_{f}\) would then indeed lead the numerical regression to converge towards one of the incorrect solutions. In this example of a thin film of TiO\({}_{2}\) on Si, a thickness larger than 11 nm leads to an incorrect convergence of \(n_{f}\) and \(k_{f}\).

## IV Experimental example: thin film of HZO on Si

As an experimental example, we applied the method to the study of a thin Hf\({}_{0.5}\)Zr\({}_{0.5}\)O\({}_{2}\) film grown by atomic layer deposition on RCA cleaned silicon (cf. Experimental details). As a native oxide of SiO\({}_{x}\) is present on the surface of Si, the SiO\({}_{x}\)/Si substrate was measured before deposition to determine \(\rho_{sub}\). Then, in order to obtain \(\tilde{n}_{sub}\), the pseudo dielectric constant function was calculated [1]. This method allows us to replace a sample that consists of multiple layers by a pseudo dielectric constant that represents the dielectric property of this stack and can thus be considered as the dielectric constant of a new semi-infinite substrate [21]. The errors \(\sigma_{\psi}\), \(\sigma_{\Delta}\) were determined from five measurements of \(\psi\) and \(\Delta\) on the sample. The error \(\sigma_{\theta_{i}}=0.1^{\circ}\) on the angle offset was estimated from five measurements of a 25 nm SiO\({}_{2}\) reference sample on Si. The error \(\sigma_{\tilde{n}_{sub}}\) on \(\tilde{n}_{sub}\) was calculated from the measurements of five RCA cleaned substrates. The resulting calculated thickness \(d_{f}\) is presented in Figure 4(a). Three regions are observed for the thickness from the McCrakin inversion (Figure 4(a)). First, there is an energy-dependent region at high energy (\(\sim\)5.2 - 6.5 eV). This region corresponds to the non-transparent range of the thin film and cannot be used for the determination of the thickness. Then, a region of constant thickness with low error (\(\sim\)2.5 - 5.2 eV) is observed, from which the thickness can be accurately determined. A third region is observed at low energy (\(\sim\)0.7 - 2.5 eV). In this region, the thickness values show a much larger error and also evolve with energy, with both a higher scattering and an exponential increase for decreasing energy.
As the thickness values are not solely randomly scattered, we can conclude that the observed exponential increase of the thickness comes from systematic errors, such as a constant offset in \(\theta_{i}\), or an error on the dispersion law of \(n_{sub}\). A detailed analysis of this region could be done to exploit this artefact to correct for the systematic errors, for example considering adding an offset of 0.1\({}^{\circ}\) to the angle of incidence to minimize the thickness evolution with energy. This is, however, outside the scope of this paper. Using Eq. (9) and (8) we calculate the thickness of the film and its corresponding error to be \(d_{f}=5.27\,\pm\,0.06\,\mathrm{nm}\). We can therefore reach a very low uncertainty on the measured thickness thanks to the presence of an energy range with high accuracy in the McCrakin inversion (\(\sim\)2.5-5.2eV). Figure 4: (a) Thickness dependence on energy for a \(\sim\)5 nm HZO film deposited on a RCA cleaned Silicon substrate as determined by a McCrakin inversion method (red line) and calculated from Eq.(9) (blue line) - Dispersion curves of (b) the refractive index and (c) extinction coefficient determined from the proposed method (blue lines) together with the curves for a 20 nm HZO film (black dots). The colored areas are the calculated error on the respective values. On Figure 4 (b) and (c) the resulting \(n_{f}\)and \(k_{f}\) values are presented together with the values of a 20 nm thick HZO film. For high energies (> 3.0 eV), we show that both \(n_{f}\) and \(k_{f}\) have dispersion curves similar to their thicker counterpart and that high accuracy (\(\sigma_{\bar{n}_{sub}}\leq 0.01+0.002\)i) is achieved in both cases. The dispersion law is similar to a dispersion modeled by a Tauc-Lorentz function. Such a dispersion law is characteristic of amorphous materials for which the bandgap is present in the measured spectral region [1]. Note that we obtain such a dispersion law here with this point-by-point method without relying on a model of the dispersion law. The refractive index of the very thin film is lower (2.11 at an energy of 4.0 eV) than the one of the 20 nm film reference (2.21 at an energy of 4.0 eV), which is attributed to a lower density [1]. Using a Bruggeman effective medium approximation, assuming the film is composed of HZO and nanometric air inclusions, we calculate that the very thin HZO film exhibits a density of around 85% that of the 20 nm reference [22]. This is understood by the ALD growth mechanism that tends to generate gaps for the first step of the growth due to steric hindrance [23]. Moreover, from the extinction coefficient a shift towards higher energy of the exponential rise is observed for the thinner film compared to the 20 nm reference film. This shift leads to a higher band gap (5.2 eV) compared to the reference sample (4.9 eV). The increased band gap of the very thin film can be explained by a quantum confinement effect. Indeed, if the dimension of a material is of the same magnitude as the de Broglie wavelength of the electron wave function it will generate a quantum confinement effect. This effect has already been observed during the growth of very thin films [24]. For energies below \(\sim 3\) eV the error becomes large, which does not allow to conclude on the optical properties of the HZO thin film. At these energies, similarly to the determination of the thickness, the dispersion laws present errors that are mostly produced from systematic errors of the measurement. 
Therefore, at the present state, the proposed method can be applied to accurately measure \(d_{f}\) and \(\bar{n}_{f}\) of very thin films for energies higher than \(\sim 3\) eV. It should be noted that the measurement of a thicker film will widen the energy range where \(\bar{n}_{f}\) can be retrieved with a high accuracy, as evidenced previously (section III).

## V Conclusion

In this paper we demonstrated a fully automated method to unambiguously determine, with high accuracy, the thickness, refractive index and extinction coefficient of a very thin film for energies typically larger than \(\sim 3\) eV. The method is developed for thin films which present a range of transparency and which are deposited on a substrate with known optical properties in the investigated energy range. The method is decomposed into three steps. First, the thickness is estimated from a McCrakin inversion by carefully looking at the energy range with minimal error and thickness dispersion. Second, a first estimation of the refractive index and extinction coefficient is made based on a first order Taylor expansion of \(\rho\). Finally, from this initial estimation and the calculated thickness, convergence towards the correct solution of \(\vec{n}_{f}\) is ensured with a wavelength-by-wavelength numerical regression. We applied the method to a thin HZO film grown by ALD on a silicon substrate and retrieved its optical properties without any model assumption and with a high precision in the energy range of 3.0 - 6.7 eV. A high precision (\(\leq\)0.5 Å) on the determination of the film thickness was also shown. Calculation of the errors enabled us to discriminate physical features from artefacts due to systematic or random errors, giving additional information on the sensitivity of the measurement over the whole spectral range. Information on the sensitivity of the measurement in various spectral regions cannot be easily obtained with a standard approach, such as the minimization of the MSE by optimization of the parameters of a Cauchy or Tauc-Lorentz model. With these models it is harder to determine if a spectral feature has a physical origin and should be considered in the model. The proposed method in this work presents a clear advantage in this regard. Exploiting the non-physical exponential dispersions of the thickness, refractive index and extinction coefficient in the low energy range could improve the accuracy of the measurement. This is especially true in the low energy range, where the sensitivity to the errors is the highest. The need of a transparency range for the thickness determination can be restrictive; it prevents, for example, studying the first stages of the growth of a metallic compound. However, the constraint in the first step of the method could be overcome by other methods to disambiguate the thickness from the optical properties. One example is the study of the presence of artefacts in the dielectric constant of the substrate [12, 13]. The method proposed in this paper is particularly suited for the study of very thin films of oxides, semiconductors or 2D materials, either _ex situ_ or _in situ_ and in real time, particularly during the first stages of the growth. Disambiguating the determination of the thickness from the dielectric properties can genuinely improve the information that can be retrieved from spectroscopic ellipsometry measurements.
## VI Experimental details

Prior to thin film deposition, the silicon substrate is cleaned following a standard RCA procedure to remove organic and ionic contaminations and to obtain a defined oxygen-terminated SiO\({}_{x}\) surface, with the following steps: SC1: 10 min, 70-80\({}^{\circ}\)C, (5:1:1) H\({}_{2}\)O + NH\({}_{4}\)OH (29% weight) + H\({}_{2}\)O\({}_{2}\) (30% in solution); HF dip: 15 s, HF 1%; SC2: 10 min, 70-80\({}^{\circ}\)C, (6:1:1) H\({}_{2}\)O + HCl (37% weight) + H\({}_{2}\)O\({}_{2}\) (30% in sol.). The samples are rinsed in H\({}_{2}\)O and N\({}_{2}\) blow-dried after each step. This standard RCA cleaning process results in a 1.0 nm (\(\pm\)0.1 nm) chemical oxide SiO\({}_{x}\) layer. Before the deposition, the Si/SiO\({}_{x}\) stack is measured by ellipsometry to determine \(\rho_{sub}\) and define the pseudo-dielectric function that can be used as the substrate dielectric constant \(\vec{n}_{sub}\). The thin film of HZO is deposited on top of the cleaned Si substrate by atomic layer deposition at 250 \({}^{\circ}\)C using tetrakis(ethylmethylamino) hafnium (TEMA-Hf) and tetrakis(ethylmethylamino) zirconium (TEMA-Zr) as precursors and deionized H\({}_{2}\)O as oxidant. The depositions are performed using an "Oxford FlexAI" ALD system. The spectroscopic ellipsometry measurements are made with a Woollam M2000 at an incidence angle of 60\({}^{\circ}\), for a wavelength range of 192-1690 nm corresponding to an energy range of 0.73-6.46 eV. The bulk value of \(\widetilde{n}_{HZO}\) is determined from a Tauc-Lorentz model on a 20 nm thin film deposited under the same conditions.

###### Acknowledgements.

The experimental work was performed in the framework of GraFOx II, a Leibniz ScienceCampus partially funded by the Leibniz Association.

## Author Declarations

### Conflict of interest

The authors have no conflicts to disclose.

### Author contributions

F. Maudet: Conceptualization (lead); Data Curation (lead); Investigation (lead); Formal Analysis (lead); Methodology (lead); Project Administration (lead); Software (lead); Supervision (lead); Validation (lead); Writing - original draft preparation (lead); Writing - review and editing (equal). C. V. Dijck: Investigation (supporting); Writing - review & editing (supporting). M. H. Raza: Investigation (supporting); Writing - review & editing (supporting). C. Dubourdieu: Conceptualization (supporting); Funding acquisition (lead); Writing - review and editing (lead).

## Data Availability

An archive file with the necessary material of the findings of this study is openly available in Zenodo at [https://doi.org/10.5281/zenodo.7722620](https://doi.org/10.5281/zenodo.7722620). The archive contains: a Python code of the algorithm of the proposed method, and two Jupyter notebooks used to generate the data in the paper along with the experimental and reference data used for the analysis.
2305.13725
Conversational Recommendation as Retrieval: A Simple, Strong Baseline
Conversational recommendation systems (CRS) aim to recommend suitable items to users through natural language conversation. However, most CRS approaches do not effectively utilize the signal provided by these conversations. They rely heavily on explicit external knowledge e.g., knowledge graphs to augment the models' understanding of the items and attributes, which is quite hard to scale. To alleviate this, we propose an alternative information retrieval (IR)-styled approach to the CRS item recommendation task, where we represent conversations as queries and items as documents to be retrieved. We expand the document representation used for retrieval with conversations from the training set. With a simple BM25-based retriever, we show that our task formulation compares favorably with much more complex baselines using complex external knowledge on a popular CRS benchmark. We demonstrate further improvements using user-centric modeling and data augmentation to counter the cold start problem for CRSs.
Raghav Gupta, Renat Aksitov, Samrat Phatale, Simral Chaudhary, Harrison Lee, Abhinav Rastogi
2023-05-23T06:21:31Z
http://arxiv.org/abs/2305.13725v1
# Conversational Recommendation as Retrieval: A Simple, Strong Baseline ###### Abstract Conversational recommendation systems (CRS) aim to recommend suitable items to users through natural language conversation. However, most CRS approaches do not effectively utilize the signal provided by these conversations. They rely heavily on explicit external knowledge e.g., knowledge graphs to augment the models' understanding of the items and attributes, which is quite hard to scale. To alleviate this, we propose an alternative information retrieval (IR)-styled approach to the CRS item recommendation task, where we represent conversations as queries and items as documents to be retrieved. We expand the document representation used for retrieval with conversations from the training set. With a simple BM25-based retriever, we show that our task formulation compares favorably with much more complex baselines using complex external knowledge on a popular CRS benchmark. We demonstrate further improvements using user-centric modeling and data augmentation to counter the cold start problem for CRSs. ## 1 Introduction Recommendation systems have become ubiquitous in recent years given the explosion in massive item catalogues across applications. In general, a recommendation system learns user preference from historical user-item interactions, and then recommends items of user's preference. In contrast, CRSs directly extract user preferences from live dialog history to precisely address the users' needs. An example dialogue from the popular ReDial benchmark (Li et al., 2018) for CRSs is shown in Table 1: the CRS' task is to recommend items (in this case, movies) based on the user's indicated preference. Generally, a CRS integrates two modules: a **dialogue module** which generates natural language responses to interact with users, and a **recommendation module** which recommends desirable items to users using the dialog context and external knowledge. We focus on the latter module in this work: we posit that once the correct item to recommend is identified, newer pretrained language models (PLMs) can easily generate fluent agent responses. It is notable that the conversational context provides sufficient signal to make good recommendations (Yang et al., 2021). E.g., in Table 1, attributes about the items to recommend (e.g., genre and cast, in red) provide potentially sufficient information to the model to recommend relevant items. Most approaches to CRS rely heavily on external knowledge sources, such as knowledge graphs (KGs) and reviews (Lu et al., 2021). Such approaches require specific sub-modules to encode information from these sources like graph neural networks (Kipf and Welling, 2016), which are hard to scale with catalog additions. Existing approaches require either re-training the entire system when the KG structure changes (Dettmers et al., 2018) or adding complex architectures on top to adapt (Wu et al., 2022). Newer approaches utilize PLMs (Radford et al., Lewis et al., 2020), but they often encode item information in model parameters, making it hard to scale to new items without retraining. Looking for a fast, more scalable approach, we re-formulate the item recommendation task for \begin{table} \begin{tabular}{|p{42.7pt}|p{284.5pt}|} \hline \hline **Role** & **Message** \\ \hline \hline User & Hello! I am looking for some movies. \\ \hline Agent & What kinds of movie do you like? I like animated movies such as Frozen (2013). \\ \hline Rec. 
item & Frozen (2013) \\ \hline User & I do not like animated films. I would love to see a movie like Pretty Woman (1990) starring Julia Roberts. Know any that are similar? \\ \hline Agent & Pretty Woman (1990) was a good one. If you are in it for Julia Roberts you can try Runaway Bride (1999). \\ \hline Rec. item & Runaway Bride (1999) \\ \hline \end{tabular} \end{table} Table 1: An example dialogue from ReDial. The items to recommend are in blue, with their inferred attributes in red. The ground truth recommended items for agent utterances are also shown. CRSs as an information retrieval (IR) task, with recommendation-seeking conversations as queries and items to recommend as documents. The document content for retrieval is constructed using plain text metadata for the item, paired with conversations where the said item is recommended, in order to enhance semantic overlap with the queries, which are themselves conversations. We apply a standard non-parametric retrieval baseline - BM25 - to this task and show that the resulting model is fast and extensible without requiring complex external knowledge or architectures, while presenting improvements over more complex item recommendation baselines. Our contributions are summarized as follows: * We present an alternate formulation of the CRS recommendation task as a retrieval task. * We apply BM25 to this task, resulting in a simple, strong model with little training time and reduced reliance on external knowledge. * We further improve the model using user-centric modeling, show that the model is extensible to new items without retraining, and demonstrate a simple data augmentation method that alleviates the cold start problem for CRSs.

## 2 Related Work

Conversational recommendation systems constitute an emerging research area, helped by datasets like REDIAL Li et al. (2018), TG-REDIAL Zhou et al. (2020), INSPIRED Hayati et al. (2020), DuRecDial Liu et al. (2020, 2021), and CPCD Chaganty et al. (2023). We next describe the recommender module architectures of CRS baselines. ReDial Li et al. (2018) uses an autoencoder to generate recommendations. CRSs commonly use knowledge graphs (KGs) for better understanding of the item catalog: DBpedia Auer et al. (2007) is a popular choice of KG. KBRD Chen et al. (2019) uses item-oriented KGs, while KGSF Zhou et al. (2020) further incorporates a word-based KG Speer et al. (2017). CR-Walker Ma et al. (2021) performs tree-structured reasoning on the KG, while CRFR Zhou et al. (2021) does reinforcement learning and multi-hop reasoning on the KG. UniCRS Wang et al. (2022) uses knowledge-added prompt tuning with a KG and a fixed PLM. Some methods also incorporate user information: COLA Lin et al. (2022) uses collaborative filtering to build a user-item graph, and Li et al. (2022) aims to find lookalike users for user-aware predictions. Eschewing KGs, MESE Yang et al. (2022) trains an item encoder to convert flat item metadata to embeddings that are then used by a PLM, and TSCR Zou et al. (2022) trains a transformer with a Cloze task modified for recommendations. Most of the above approaches, however, either rely on complex models with KGs and/or need to be retrained for new items, which are very frequent in present-day item catalogs.

## 3 Model

We formally define the item recommendation task, followed by our retrieval framework, details of the BM25 retrieval model used, and finally our user-aware recommendation method on top of BM25.
### Conversational Item Recommendation

A CRS allows the user to retrieve relevant items from an item catalog \(V=\{v_{1},v_{2}\cdots v_{N}\}\) through dialog. In a conversation, let \(a\) be an agent response containing an item (or items) from \(V\) recommended to the user. Let \(d_{t}=\{u_{1},u_{2},\cdots\,u_{t}\}\) be the \(t\) turns of the conversation context preceding \(a\), where each turn can be spoken by the user or the agent. We model the recommendation task as masked item prediction, similar to Zou et al. (2022). For each agent response \(a\) where an item \(v_{i}\in V\) is recommended, we mask the mention of \(v_{i}\) in \(a\), i.e., replace it with the special token [REC], yielding the masked agent response \(a^{\prime}\). We now create training examples with input \(q=d_{t}\oplus a^{\prime}\) and ground truth \(v_{i}\) (\(\oplus\) denotes string concatenation). We define \(Q^{train}\) and \(Q^{test}\) as the sets of all conversational contexts \(q=d_{t}\oplus a^{\prime}\) with an item to predict, from the training and test sets respectively. For each item \(v_{i}\), we also define \(\mathbf{Q^{train}_{v_{i}}}\subset Q^{train}\) as the set of all conversational contexts in \(Q^{train}\) where \(v_{i}\) is the ground truth item to recommend.

### Item Recommendation as Retrieval

Information retrieval (IR) systems are aimed at recommending documents to users based on the relevance of the document's content to the user query. We reformulate masked item prediction as a retrieval task with \(Q^{train}\) or \(Q^{test}\) as the set of queries to calculate relevance to, and \(V\) as the set of items/documents to recommend from. To match a query \(q\in Q^{test}\) to a document/item \(v_{i}\in V\), we define the document's content using two sources: **metadata** in plaintext about item \(v_{i}\), and \(\mathbf{Q^{train}_{v_{i}}}\), i.e., all conversational contexts from the training set where \(v_{i}\) is the recommended item, concatenated together, similar to document expansion (Nogueira et al., 2019). Our motivation for adding \(Q^{train}_{v_{i}}\) to the document representation is that it is easier to match queries (which are conversations) to conversations instead of plain metadata, since conversations can be sparse in meaningful keywords. For an item \(v_{i}\) we create a document as: \[Doc(v_{i})=Metadata(v_{i})\oplus Q^{train}_{v_{i}} \tag{1}\] For test set prediction, we can now apply retrieval to recommend the most relevant document \(Doc(v_{i}),v_{i}\in V\), for each test set query \(q\in Q^{test}\).

### Retrieval Model: BM25

BM25 (Robertson et al., 2009) is a commonly used sparse, bag-of-words ranking function. It produces a similarity score for a given document \(doc\) and a query \(q\) by matching keywords efficiently with an inverted index of the set of documents. Briefly, for each keyword in each document, we compute and store its term frequency (TF) and inverse document frequency (IDF) in an index. For an input query, we compute a match score for each query keyword with each document using a function of TF and IDF, and sum this score over all keywords in the query. This yields a similarity score for the query with each document, which is used to rank the documents for relevance to the query.

### User Selection

Our IR formulation also gives us a simple way to incorporate user information for item recommendation. Let \(U=\{u_{1},u_{2}\ldots u_{J}\}\) be the set of all users in the dataset. Each conversation context in \(Q^{train}\) is associated with a user \(u_{j}\in U\).
We use a simple algorithm for user-aware recommendations: * For each user \(u\in U\), we obtain the set of items they like based on conversations in \(Q^{train}\), and also construct a unique BM25 index for each user \(u_{j}\) using only conversations associated with \(u_{j}\). * For a test set query \(q\in Q^{test}\), we identify movies liked by the seeker in the current \(q\), and use them to find the \(M\) most similar users in the training set. * We now compute and add up similarity scores for the query with all documents based on the per-user BM25 indices for these \(M\) selected users. * Finally, we linearly combine these user-specific similarity scores per document with the similarity scores from the BM25 index in Section 3.3, and use these combined scores to rank all documents.

## 4 Experiments

### Dataset and Evaluation

ReDial (Li et al., 2018) is a popular benchmark of annotated dialogues where a seeker requests movie suggestions from an agent. Table 1 shows an example. It contains 956 users, 51,699 movie mentions, 10,006 dialogues, and 182,150 utterances. For evaluation, we reuse Recall@\(k\) (or R@\(k\)) as our evaluation metric for ReDial, following prior work. It evaluates whether the target human-recommended item appears in the top-\(k\) items produced by the recommendation system. We compare against the baselines introduced in Section 2.

### Training

For movie recommendations, we extract metadata from _IMDb.com_ to populate \(Metadata(v_{i})\) for movies \(v_{i}\in V\), which includes the movie's brief plot and the names of the director and actors. Parameters \(k_{1}\) and \(b\) for BM25 are set to 1.6 and 0.7 respectively. For user selection, we select the \(M=5\) most similar users, and linearly combine the user-specific BM25 scores with the overall BM25 scores with a coefficient of 0.05 on the former. Constructing the BM25 index on the ReDial training set and inference on the test set took ~5 minutes on a CPU (+10 minutes for the user selection method). Alongside BM25 with and without user selection, we also experiment with a BM25 variant without metadata, i.e., using only past conversation contexts as the document content for a movie/item.

\begin{table} \begin{tabular}{l|c|c|c} \hline **Model** & **R@1** & **R@10** & **R@50** \\ \hline ReDial (Li et al., 2018) & 2.3 & 12.9 & 28.7 \\ KBRD* (Chen et al., 2019) & 3.0 & 16.4 & 33.8 \\ KGSF* (Zhou et al., 2020) & 3.9 & 18.3 & 37.8 \\ CR-Walker* (Ma et al., 2021) & 4.0 & 18.7 & 37.6 \\ CRFR* (Zhou et al., 2021) & 4.0 & 20.2 & 39.9 \\ COLA* (Lin et al., 2022) & 4.8 & 22.1 & 42.6 \\ UniCRS* (Wang et al., 2022) & 5.1 & 22.4 & 42.8 \\ MESE\(\dagger\) (Yang et al., 2021) & 5.6 & 25.6 & 45.5 \\ TSCR* (Zou et al., 2022) & 7.2 & 25.7 & 44.7 \\ \hline BM25 w/o Metadata & 4.8 & 19.5 & 37.4 \\ BM25\(\dagger\) & 5.2 & 20.5 & 38.5 \\ BM25 + User Selection\(\dagger\) & 5.3 & 21.1 & 38.7 \\ \hline \end{tabular} \end{table} Table 2: Item recommendation results on the ReDial benchmark. Our BM25-based models outperform many baselines despite being much lighter and not using complex KGs. * denotes models using the DBpedia KG, \(\dagger\) denotes models using plaintext IMDb metadata.

## 5 Results

Table 2 shows \(R@\{1,10,50\}\) on ReDial for the baselines and our models. Our BM25-based models perform strongly, outperforming many baselines which use complex KGs and/or complex model architectures, e.g., tree-structured reasoning and reinforcement learning. Improvement is most visible on \(R@1\) and less so on \(R@50\).
Our fairest comparison would be with **MESE**, which uses the exact same data (plaintext metadata + dialog context): our best model achieves 95% of its \(R@1\) and 85% of its \(R@50\) with a far faster and simpler model. We point out that all baselines except TSCR are jointly optimized for both the item recommendation and response generation tasks. A surprising result is **BM25 w/o Metadata** doing better than many baselines, without using any external knowledge whatsoever, in contrast to all other baselines except **ReDial**. This indicates that prior conversations indeed contain sufficient signal for good conversational item recommendation. Our simple **user selection** raises recall by 1-3% across thresholds, with more potential gains from better user-centric modeling Li et al. (2022).

## 6 Cold Start and Data Augmentation

Conversational recommenders often suffer from the **cold start problem**: it is difficult for a new item, i.e., one not seen during training, to be recommended, since not much is known about it beyond metadata. Our model is not immune to this problem. The red lines in Figure 1 show \(R@10\) values for the BM25 model for different sets of movies in ReDial based on how many times they are seen in the training set: the model never or rarely recommends movies with 10 or fewer occurrences in training. To counteract this, we perform **data augmentation** using few-shot prompting Liu et al. (2023). In particular, we randomly select 6 conversations from ReDial's training set, use them to prompt a PaLM 2-L model Anil et al. (2023), and generate up to 20 dialogues per movie. We do this only for movies seen 10 or fewer times during training, since the model does the worst on these. Figure 1's blue curve shows notably improved \(R@10\) for the movies for which data was augmented, without hurting \(R@10\) for more frequent movies. Overall \(R@10\) also improves by ~8% using just \(\leq\) 20 artificial dialogues per movie. Further combining augmentation with user selection lifts \(\mathbf{R@1}\) to **5.9**, \(\mathbf{R@10}\) to **22.3**, and \(\mathbf{R@50}\) to **40.7**. Figure 2 plots recall for the BM25 model against the number of artificial dialogues added for low-frequency movies. Based on this plot, we opted to generate at most 20 conversations per movie.

## 7 Conclusion

We present a retrieval-based formulation of the item recommendation task, used in building CRSs, by modeling conversations as queries and items to recommend as documents. We augment the item representation with conversations recommending that item; therefore, the retrieval task reduces to matching conversations to conversations, which is feasible and effective. Using BM25-based retrieval with this IR task results in a model that is very fast and inexpensive to train/build (~5 min on CPU) while being flexible to add-ons such as user selection. We also show that new items can be seamlessly added without retraining the entire model, and that simple data augmentation with as few as 20 conversations counters the cold start problem for a new item: far fewer than most neural network finetuning-based methods would need.

Figure 1: Impact of data augmentation on \(R@10\). The shaded area represents the set of movies for which data augmentation was performed.

Figure 2: Recall for the BM25 model with varying amounts of augmented conversations.
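As an illustration of the formulation in Section 3, the sketch below builds the expanded documents of Eq. (1) and ranks them with BM25 for a masked test conversation. It uses the `rank_bm25` package with the k1 = 1.6 and b = 0.7 values from Section 4.2; the package choice, data structures and function names are our assumptions, not the authors' released code.

```python
# Sketch of the retrieval formulation in Section 3 (not the authors' released code):
# build the expanded documents of Eq. (1) and rank them with BM25 for a masked test
# conversation. The rank_bm25 package and the data structures are our assumptions;
# k1 = 1.6 and b = 0.7 follow Section 4.2.
from collections import defaultdict
from rank_bm25 import BM25Okapi

def tokenize(text):
    return text.lower().split()

def build_index(items, metadata, train_contexts):
    """items: list of item ids; metadata: {item: plaintext}; train_contexts: list of
    (conversation_context, ground_truth_item) pairs from the training set."""
    expansions = defaultdict(list)                               # Q^train_{v_i}
    for context, item in train_contexts:
        expansions[item].append(context)
    docs = [metadata[v] + " " + " ".join(expansions[v]) for v in items]   # Eq. (1)
    return BM25Okapi([tokenize(d) for d in docs], k1=1.6, b=0.7)

def recommend(bm25, items, masked_context, top_k=10):
    """masked_context: dialogue history plus the agent response with the item as [REC]."""
    scores = bm25.get_scores(tokenize(masked_context))
    ranked = sorted(zip(items, scores), key=lambda p: p[1], reverse=True)
    return [item for item, _ in ranked[:top_k]]
```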
2306.02378
Encryption by using base-n systems with many characters
It is possible to interpret text as numbers (and vice versa) if one interprets letters and other characters as digits and assumes that they have an inherent immutable ordering. This is demonstrated by the conventional digit set of the hexadecimal system of number coding, where the letters ABCDEF in this exact alphabetic sequence each stand for a digit and thus a numerical value. In this article, we consequently elaborate on this thought and include all symbols and the standard ordering of the Unicode standard for digital character coding. We show how this can be used to form digit sets of different sizes and how subsequent simple conversion between bases can result in encryption mimicking the results of wrong encoding and accidental noise. Unfortunately, because of encoding peculiarities, switching to a higher base does not automatically result in efficient disk space compression.
Armin Hoenen
2023-06-04T15:23:23Z
http://arxiv.org/abs/2306.02378v1
# Encryption by using base-n systems with many characters

###### Abstract

It is possible to interpret text as numbers (and vice versa) if one interprets letters and other characters as digits and assumes that they have an inherent immutable ordering. This is demonstrated by the conventional digit set of the hexadecimal system of number coding, where the letters ABCDEF in this exact alphabetic sequence each stand for a digit and thus a numerical value. In this article, we consequently elaborate on this thought and include all symbols and the standard ordering of the Unicode standard for digital character coding. We show how this can be used to form digit sets of different sizes and how subsequent simple conversion between bases can result in encryption mimicking the results of wrong encoding and accidental noise. Unfortunately, because of encoding peculiarities, switching to a higher base does not automatically result in efficient disk space compression.

Unicode; encryption; compression; base-n systems

## 1 Introduction

One can hide a message within another message. To this end, hiding text by transmitting numbers as ciphertexts corresponding to a non-numerical plaintext, or vice versa, is an age-old concept. Natural languages express words in the written modality through a conventionalized set of characters and conventionalized mappings from the acoustic modality to the visual one - through a writing system with an orthography. Numbers are usually expressed using a likewise conventionalized set of digits with an immutable ordering constituting the mapping to idealized and progressing (+1) integer values starting at 0. One arranges them in a rigorously defined placement system, where each position represents the base, which exactly corresponds to the size of the digit inventory, raised to a certain power, progressing from right to left and starting at 0 as the lowest power. We arrived at such a system with the default inventory of 10 digits including 0 only relatively recently, but it has since become an international standard, especially for scientific interchange, and a largely language-independent form of mathematical expression in daily life. These givens entail a clear parallelism between numbers and texts: we use conventionalized sets of symbols which have an inherent ordering within the set (integer progression for numbers, the alphabet for letters) and arrange them into potentially infinite sequences. There are differences, two of the most important of which are the following. Firstly, the size of the set: the decimal digit set for integers contains only 10 symbols, whereas the alphabet contains 26 letters and is the minimal set for representing language. This entails, among other things, a much larger combinatorial space for letters and a larger possible entropy. Secondly, the rules for arrangement are quite rigorous in math, but less so for letters, where we can arrange them according to our communicative needs in almost any sequence (think about abbreviations, acronyms, all kinds of word- and letterplay, codes etc.). These differences can and must be mitigated if one wants to exploit the parallelism using letter symbols as mathematical digit sets, for instance in order to produce or use interpretational ambiguity. The mathematical inventory thus needs to be enlarged to bigger character sets, and those must obey or map to the stringent positioning rules of mathematics.
This means, that the otherwise not extremely strictly necessary inherent ordering represented by the somehow arbitrary alphabetical sequence must, in case of usage of characters as digits, be absolutely fixed. Both of this can be done easily. In math, we can use a larger inventory by swapping the numerical base of a system and for characters (and digits), we can use the technically fixed and globally adopted standard ordering of symbols from the unicode standard. Apart from this, textual characters forming written expression and digits forming numbers are correlated in many meaningful ways. Nowadays, converting text into numbers and computing its properties is one of the foundations of natural language processing the subdiscipline of computer science closely related to statistics of language and with this only a small step away from both mathematics and cryptoanalysis. ## 2 History Numerical systems with bases other than 10 have been frequent in history. In Mesopotamia, a mixed system with a large importance of the number of 60 (therefore sometimes referred to as _sexagesimal_) was in use, compare [1, p.247]. In ancient China, a base-16 system was used for measurements and other examples are extant. Furthermore, roman numerals, widely in use in Europe until early modern times used letters as digits, which was also quite common in some other writing systems such as Hebrew, Arabic etc. In the latter, letters were used with certain numerical values assigned to them, sometimes combined with positioning or other contextual clues. This already resulted in some encipherment using the ambiguity of interpretation between words and numbers. Consider [2, p.66]: Al-Qalqashandi mentions a procedure in which two Arabic letters correspond to a letter of the plaintext in such a way that the numerical values of the two letters equal the numerical value of the plaintext character However, in the modern age, with the advent of computers the binary and hexadecimal systems became important. Two digits 0 and 1 - simplistically speaking - represent 2 states of electric current. Unfortunately, with a base-2 system even smaller numbers become very long very quickly. The hexadecimal system to the contrary represents numbers with even fewer places than the decimal system and has the advantage to be easily convertible to and from the binary system as it is based on a low power of 2: 2\({}^{4}\). Figure 2 shows how large the difference in length is for some low powers of 10 depending on the system. As one can see, the larger a number gets, the more pronounced is the length difference between lower and higher base-systems for encoding the same number. ## 3 Terminology Terminologically, _number system_ often refers to the kind of numbers we use. This means primarily large sets from number theory, such as \(\mathbb{N}\) standing for _natural numbers_, \(\mathbb{R}\) for real numbers and so forth. A _numeral system_ on the other hand refers to a set of symbols used for expressing numbers. The term itself does include all kinds of differing symbol systems such as Sumerian numerals, Cistercian numerals, Roman numerals and so forth including also our Western Arabic numerals. However, this does not specify anything about a positional use. Roman numerals for instance were arranged according to completely different rules than our modern numerals and the base of the system could not even be determined from the size of the inventory. 
When we use a standard positional system, we may call it a positional numeral system or the standard positional numeral system and within this system, we can distinguish different bases and name subsystems base-n, with n most commonly a positive integer. However, such terminological conventions are not completely strict and one finds the use of the term _number system_ likewise for base-n systems not necessarily entailing their use as actual sets in the sense of number theory, compare for instance [6]. Also, one finds even more terms such as _numeration system_, _counting system_ and others more used interchangeably to refer to number sets, numeral systems or positional numeral systems or yet other entities. The exact meaning of the terms used should always be clear from the context. Within this article, we will use _numeral_ and _digit_ interchangeably for a symbol in a mathematical system, _number system_ for sets from number theory and _numeral system_, _(standard) positional numeral system_,... for our conventional decimal system and/or mentioning the base for base-n systems. Quotes and titles in the references may not adhere to that usage. Figure 1: Lengths in number of required digits/places (y-axis) for some low powers of 10 (x-axis) in the binary, decimal and hexadecimal systems. ## 4 Mathematical wrap-up on numeral systems There has been plenty of research on (positional) numeral systems in mathematics, computer science and cryptography/cryptoanalytics. In mathematics, research continues until today. One important branch is concerned with different qualities of the bases: negative bases [4], complex numbers as bases [5], rational numbers as bases [6] or particular bases, such as \(-\frac{3}{2}\) as in [3]. In cryptography exponentiation methods and their effective implementation play a crucial role, compare [7]. The interested reader may be referred to these examples and the circumspanning body of literature for flanking information. The aim and method of this paper are much simpler as will be explained in the next section. However, whilst systems with larger bases are being discussed in the aforementioned literature, they are mostly expressed as formula, rather than spelled out as actual numbers, since for the sake of the argument this very often is not necessary. Generally, any number \(x\) in our standard positional numeral system can be expressed as: \[x=\sum_{n=0}^{m}a_{n}b^{n} \tag{1}\] where \(a\) is a digit from the ordered symbol inventory of the particular numeral system1 corresponding to a value (for instance \(B\) in the hexadecimal system has the value \(11\)), \(n\) is the position within the number starting to the right indexed as \(0\) and \(b\) is the given base. \(m\) is the leftmost positional index, which, since we start counting in \(0\) equals \(pl(x,b)-1\) with \(pl(x,b)\) the function returning the number of positions or length of a number given a base. At each position, each of the symbols of the digit inventory (numeral) may appear (apart from leading zeroes) and thus \(a\) can take any value from \(0\) to \(b-1\) with the exception of the leftmost position where \(0\) is disallowed for convenience. As an example, the hexadecimal number \(ABC\) can be expressed as \(12\cdot 16^{0}+11\cdot 16^{1}+10\cdot 16^{2}\). The base \(b\) always corresponds to the size of the symbol inventory whilst the largest value expressed per position for \(a\) is \(b-1\). Footnote 1: Here, we limit ourselves to looking at numeral systems with a \(b\) from \(\mathbb{N}^{+}\). 
As one can see, for this notation, one does not need symbols other than those used in the decimal system plus some standard mathematical symbols (\(\cdot\), \(+\) and superscripting). So, we can express any hexadecimal \(x\) as a decimal sum: \(12\cdot 16^{0}+11\cdot 16^{1}+10\cdot 16^{2}\). Extending this to the conventionalized set of mathematical symbols such as \(\sum\), we can express basically any number or entire systems omitting the use of other than the decimal inventories digits. Hexadecimal \(ABC\) could be expressed as: \[\sum_{n=0}^{|A|-1}a_{n}b^{n};b=16;A=\{12,11,10\} \tag{2}\] or even more simplistically use something like \(10-11-12\). This would make such a number readable2 and one would not have to bother with choosing or printing a set with an inherent ordering. In general, spelling out such numbers in other less used numeral systems is rather rare and there is good reason for it. We are not used to write and spell out such numbers apart from some few exceptions such as the binary and hexadecimal systems and furthermore, we would hardly be able to read and decode such information. Finally, a conventionalized sequence for the digit inventories should be pre-established and how to do that and which symbols to use is to the best of the authors knowledge, not agreed upon or standardized in the broader field of mathematics. However, in the field of computer science, this is different as will be seen later. For the representation of numbers in the hexadecimal (and binary) system(s) where the digit inventory is conventionally fixed (so one would not need to print the inventory and sequence for decoding when reading), the need sometimes arises to write the basis, especially because otherwise some ambiguities arise. Each number of the decimal system is also a number in the hexadecimal system and each binary number is also one in the decimal and the hexadecimal systems, since the digit inventories are in ordered subset relations. Whilst 0 remains zero and 1 remains the value one in each of the systems due to being the first digits in the ordered inventory, the sequence 10 already has a different value depending on the system. In the binary system, it is 2, in the hexadecimal system, it is 16 and, interestingly, in any system with a digit inventory that starts in the 10 arabic numeral digits, it has the value of the base of the system. In order to clarify which meaning is intended, we can write \(10_{b},10_{d},10_{h}\) or \((10)_{2},(10)_{10},(10)_{16}\). We will use the notation of parenthesis and subscript in decimal since it would be unclear/overcomplex to verbally convert any n-base into a precoroman number word and then find some abbreviation, or in other words, whilst the {b,d,h}-notation avoids reading ambiguity subscripting a decimal number, it is only really usable for binary, decimal and hexadecimal systems. ## 5 Spelling out base-n The problem of spelling out a system with a base larger than 10 resp. 16 is the first focus of this paper. To this end, the hexadecimal system was one of the first, where the concrete application led scholars and practicioners to experiment and finally converge on a standard for the representation of the digits representing the values/numbers 10 through 15. For the binary system, 0 and 1 as a subset of the digit inventory of the decimal system have been established, but it is of course easier to build subsets from an already existing set than to agree on how to expand a set. 
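Before turning to expanded symbol inventories, the positional rule of Eq. (1) and the \((10)_{b}\) disambiguation above can be made concrete in a few lines. The small illustration below computes the value of a digit string given an ordered digit inventory; the function name and the explicit inventory strings are ours, chosen for illustration only.

```python
# A small illustration of the positional rule of Eq. (1) and of the (10)_b ambiguity:
# the value of a digit string given an ordered digit inventory. Function name and the
# explicit inventory strings are ours, chosen for illustration only.
def to_int(digits: str, inventory: str) -> int:
    base = len(inventory)
    value = 0
    for symbol in digits:                      # left to right: value = value * b + a
        value = value * base + inventory.index(symbol)
    return value

hexadecimal = "0123456789ABCDEF"
print(to_int("ABC", hexadecimal))                       # 2748 = 12*16^0 + 11*16^1 + 10*16^2
print(to_int("10", "01"), to_int("10", hexadecimal))    # (10)_2 = 2, (10)_16 = 16
```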
At first, different groups used different symbols.3 Footnote 3: For reference see [https://en.wikipedia.org/wiki/Hexadecimal](https://en.wikipedia.org/wiki/Hexadecimal), subsection "Other symbols for 10–15 and mostly different symbol sets", version from Jun 4, 2023. There were even scholars inventing new naming/pronunciation conventions, for instance [8]. It would of course greatly improve readability had we special words for digits representing values in other than the decimal system, but it seems that, by and large, for an average human being one numeral system in the natural languages we speak is already enough. The verbal expression of numbers is again a linguistically complex phenomenon with all kinds of exceptions and complexities, such as the (in)famous number 80, expressed as 4 (times) 20 in standard French and e.g. Basque. Adding to this complexity by inventing and teaching new names for digits of the hexadecimal system remained a curiosity. Apart from the hexadecimal and binary systems, other systems have been elaborated on, albeit not as rigorously as the former two, which are used more often also for mathematical argument. The second focus of this paper is how to use such a system in connection with encipherment and compression. With the now quite conventionalized sequence of (uppercase or lowercase, but not mixed case) A-F, coincidentally some hexadecimal numbers can form words of English such as \((BEE)_{16}\), which corresponds to \((3054)_{10}\). Whilst in the hexadecimal system this is a rare coincidence, one can systematically devise a system which interprets all text as digits of a base-n system.

## 6 Method

The easiest base-n system for English would be a base-36 system, where the 10 digits plus the 26 letters of the alphabet in alphabetic sequence would be in use. This system would just append all other letters after F to the set used by the hexadecimal system, exactly in the alphabetic sequence. Let us suppose we take only lowercase letters and no punctuation marks. Each word would then express a numerical value. However, not every possible digit sequence in that system would correspond to an actual word. The result of a conversion can be seen in Figure 6. In itself, such a conversion could already be used as a ciphertext, especially for short messages and when some ad-hoc modifications add idiosyncratic semantics as in step 4. Such a cipher for longer texts could be hard to spot in the context of number code files, but, if spotted, easy to break. For instance, the number of words per sentence is quite obvious, and the statistics of tokens per sentence per language could be a first approach to identifying such a ciphertext as an encoding of natural language and a hint towards the underlying language itself. Token frequency statistics would quickly open the road to decipherment. Another problem of this text is its length. While the original input has 19 characters including spaces, the idiosyncratic encoding has 29. These problems come, among other things, from the fact that 36 is a higher base than 10 and converting to lower bases usually inflates length. A work-around for the first problem would be to enhance the system to encode space as a character, making it a somewhat unhandier base-37 system. Also, the sequential position of the space could be debatable and important for decipherment: would we want to put it at the beginning, before 0, with the funny but not so unwelcome effect of even shifting actual numbers in the plaintext, even 0 and 1?
Or would one put space between the digits and the letters, or at the end? For the sake of exemplification, we put it at the end, as the last symbol in the set. One could also include punctuation. In order to disguise the number and frequency statistics of tokens, one could now chunk the text into equally sized bits. This would also mitigate length issues for very long words, which otherwise lead to extremely long base-10 encodings. So, if we chose 5 as chunk size, we would arrive at the scenario in Figure 3. The new encoding just incidentally has as many numbers as there are words in the plaintext. Obviously, one could also use hex encoding instead of decimal encoding, making the sequence mimic some machine-readable hex sequence of the kind computers routinely produce. One would, however, rely on not being detected. One weakness is the smaller size of the last number, since the last chunk (and only the last chunk) may, but need not, be shorter than 5 or whatever chunk size was chosen. Obviously, one could use padding and fill the last word until it too had 5 positions, (pleXX), and simply ignore the padding in the decipherment.

Figure 2: A simple encryption based on interpreting text as numbers in a base-36 number system, which extends the hexadecimal digit set to the entire alphabet.

The decipherment itself would still be rather simple if one knew about the method. Knowing the chunk size would not even be necessary, only the base of the system; with rather conventionalized sequences for all letters of the English alphabet, and even with space and some punctuation marks added, brute-forcing all possible base systems with their inventories would not pose a problem for modern computers. An obvious further ingredient for encipherment would be to use some sequence determining the chunk sizes; imaginable would be, for instance, any integer sequence of the OEIS4. This could be transmitted as a key. In our example, one could chunk according to the prime numbers, with the key then being 40 (or A000040).5 One could prefix or suffix this number to the ciphertext, or even add it to the first number, and only the intended receiver would know that. Of course, periodic sequences with not too large numbers would fit the purpose best in order to avoid length issues; this, however, could already be a vulnerability. All kinds of additional play could be applied, such as always subtracting one, or submitting a second, equally long sequence of addends which have been added to the numbers of the first sequence, and so forth. An example can be seen in Figure 4. Also, one could chain such encodings.

Footnote 4: [https://oeis.org/](https://oeis.org/)

Footnote 5: [https://oeis.org/A000040](https://oeis.org/A000040)

Whilst such plays could add to the safety of the method, they would not alter the principal vulnerability caused by the rather small size of the original inventory and the concurrently rather limited number of possible conventionalised sequences one would have to try. Even if the code-breaker did not know exactly what size the inventory had and whether punctuation, diacritic marks etc. were included, the number of possible ordered sets would still be small. When considering the possibility of different languages and base alphabets, things become more sophisticated, but the number of writing systems in the world is still limited, and one would usually have an idea of which languages, or at least which writing systems, could be relevant. Again, playful variants such as writing English with Cyrillic letters would exist, but would not fundamentally change the game.
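A minimal added sketch of the chunked variant just described, under assumed details (a base-37 inventory with space as the last symbol, and the primes of OEIS A000040 as the chunk-size key); it is not the paper's implementation.

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz "   # base-37 inventory, space as last symbol
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]                 # chunk-size key: OEIS A000040

def to_number(piece):
    """Read a string as a base-37 number using the fixed inventory above."""
    value = 0
    for ch in piece:
        value = value * len(ALPHABET) + ALPHABET.index(ch)
    return value

def chunk_encode(text, sizes):
    """Cut the text into chunks of the given sizes and encode each chunk as a number."""
    numbers, pos = [], 0
    for size in sizes:
        if pos >= len(text):
            break
        numbers.append(to_number(text[pos:pos + size]))
        pos += size
    return numbers

print(chunk_encode("this is an example", PRIMES))
```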
Also, since hexadecimals are used in various machine codes, one could mimic such files or use channels such as internet packets to transmit them.

Figure 3: A slightly more sophisticated conversion with an additional step: chunking.
**1 - Original text**: this is an example.
**2 - Chunked**: (th), (is ), (is an), ( exampl), (e)
**3 - Conversion**
**4 - Text converted to base-10**: 1130-25714-35202859-3,455775984\(\cdot 10^{12}\)-14
**4a - Text converted to base-16**: 46A-6472-219272B-BB56869BF64DFF0000-E

Some readers may already have been reminded of a similar encoding in use, namely the Base64 standard encoding for 8-bit binary files. Here, the conventionalized sequence includes 26 uppercase letters, 26 lowercase letters, 10 digits and two additional characters, / and +, amounting to 64 (\(2^{6}\)) characters ordered as uppercase alphabet, lowercase alphabet, digits, additional characters; the ordering thus differs from that of the hexadecimal system. The two additional characters were chosen rather arbitrarily and cause some problems, for instance in filenames. They were needed in order to have exactly 64 characters, and they had to come from the ASCII standard base set, which did not provide further letters. Besides Base64, some other bases have been established in the world of computers. RFC 4648 [9] is testimony to the fact that standardization has been attempted. It lists the Base64 alphabet, but also states about set and sequence (p. 5): "There is no universally accepted alphabet that fulfills all the requirements." This RFC, referring also to previous systems, defines the hexadecimal inventory and sequence using uppercase letters. We also find an observation relating to attacks on such encodings (p. 14): "Base encoding adds no entropy to the plaintext, but it does increase the amount of plaintext available and provide a signature for cryptanalysis in the form of a characteristic probability distribution."

Basically, if one were to permute the digits and all letters of the alphabet into an idiosyncratic sequence just for the purpose of encipherment, then brute-forcing could be made much more difficult, since combinatorially one can produce \(n!\) different ordered sequences or permutations; for \(n=37\) symbols that is 37! sequences, a huge number with 44 decimal places. If one also has to decide which exact subset of characters is chosen (the encoder could easily choose characters which do not appear in the plaintext, or restrict himself to those) and how large it is, the number of possibilities becomes too large to handle. However, the sequence and inventory would have to be known to the receiver, which is a new vulnerability, especially without such sophisticated mechanisms as private and public keys to transmit them. One virtue of the non-permuted sets lies in the fact that the sequence is near-conventionalized and can thus be known to the receiver without transmission. If one wanted to leverage the power of combinatorics without permutation, there would be yet another way to increase difficulty: expanding the set to all encodable characters and using the conventionalized sequence of the Unicode standard. This would mean having a base-n system in which all letters of the languages of the world - or at least all which can occur in any document - are included. n must be sufficiently large to include all letters of the relevant alphabets, and those letters must have a fixed sequence allowing them to be mapped to the same digit every time.
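As a rough added illustration of the keyspace argument (assumed details only, not taken from the paper): the following sketch counts the possible orderings of a 37-symbol inventory and shows that a secretly permuted inventory changes the encoding.

```python
import math
import random

BASE_SET = "0123456789abcdefghijklmnopqrstuvwxyz "   # 37 symbols

# Number of possible secret orderings of the inventory.
print(math.factorial(37))            # ~1.4e43
print(len(str(math.factorial(37))))  # 44 decimal places

# A secret, permuted inventory acting as the key.
key = list(BASE_SET)
random.shuffle(key)

def encode(piece, inventory):
    """Read a string as a number in base len(inventory) using the given ordering."""
    value = 0
    for ch in piece:
        value = value * len(inventory) + inventory.index(ch)
    return value

# Almost certainly different values under the permuted inventory.
print(encode("th", key), encode("th", list(BASE_SET)))
```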
The latter requirement is already met by the fact that alphabets usually have an inherent sequence, which amongst other things serves for learning them at an early age (think of the alphabet song).

Figure 4: A yet slightly more sophisticated conversion, here idiosyncratically using a power-of-ten notation for a large number.

When it comes to alphabets, such an inherent ordering is usually given. But already when considering languages other than English, it can become an issue how to order additional letters or diacritic symbols such as accents. If we take German, for instance, the letters ä, ö, ü and ß appear. They do have a conventional sequence which is widely acknowledged, although there is still some variation in how dictionaries order entries, with some letting ä-words follow a-words, whereas mostly they are appended to the alphabet and thus follow z-words. When we now want to include other writing systems in order to convert multilingual text into numbers, the issue of a fixed sequence can at some point no longer be handled by choosing one sequence per writing system; one would also have to choose the sequence of the writing systems themselves. There is a further problem: for some writing systems with very large character inventories, such as Chinese, a conventionally fixed order is not available.6 To solve these ordering problems, one can use the Unicode standard.7 This standard is administered by a consortium and has, since 1991, published _the_ conventionalized set of (all) widely usable digital characters. It covers many scripts, including character-heavy East Asian ones and historical ones. Incidentally, it also provides a sequence for all characters present. Although this sequence includes alphabetic subsequences, it is largely an arbitrary ordering, which is irrelevant for our purpose. It provides a nearly perfect solution to our base-n problem.

Footnote 6: Chinese characters in dictionaries are often ordered either by number of strokes or by graphic elements called _radicals_, which in turn are ordered by stroke number; for symbols with the same stroke number there are conventionalized sequences of the few basic strokes which make up all Chinese characters. Also, the stroke sequence is deterministic for each character.

Footnote 7: [https://unicode.org/](https://unicode.org/)

We are thus furnished with a massive number of fixed-sequence symbols. In fact, there are by now more than 100 000 so-called code points. A code point, as the basic unit of the standard, can be a control character, in which case it has no visual equivalent. Furthermore, there are reserved areas and other special characters, as well as sections defining combined characters. For a functioning conversion, one can exclude problematic characters. A sequence for a base encoding, however, must have each position filled; thus, excluding any character means that the following one moves up and takes the position and value that would have belonged to the excluded character. Ignoring this for a moment, we can in principle choose any n up to 100 000, or the current Unicode size, use this set as inventory with a large base, and convert text into numbers for any of these systems. All we have to do in order to convert/encode our text is the following. We implemented this in the Java programming language and Figure 6 shows a conversion example. For decoding, one simply follows the reverse way, but one must know the encoding and decoding bases.
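The paper's own implementation is in Java and is not reproduced here; the following Python sketch merely illustrates the succession of steps listed in Figure 5 below, with assumed choices (the source base is taken as the highest code point in the text plus one, and the chunk size defaults to 5).

```python
def encode(text, target_base, chunk_size=5):
    """Interpret each chunk of the text as a number in a 'source base' derived
    from its code points, then rewrite that number in the larger target base."""
    source_base = max(ord(ch) for ch in text) + 1            # steps 1-2
    tokens = []
    for start in range(0, len(text), chunk_size):            # steps 6-7
        chunk = text[start:start + chunk_size]
        value = 0
        for ch in chunk:                                      # read the chunk in the source base
            value = value * source_base + ord(ch)
        digits = []                                           # step 8: rewrite in the target base
        while True:
            value, rem = divmod(value, target_base)
            digits.append(chr(rem))
            if value == 0:
                break
        tokens.append("".join(reversed(digits)))
    return source_base, tokens

def decode(tokens, source_base, target_base):
    """Reverse the encoding; both bases must be known to the receiver."""
    text = []
    for token in tokens:
        value = 0
        for ch in token:
            value = value * target_base + ord(ch)
        chars = []
        while value > 0:
            value, rem = divmod(value, source_base)
            chars.append(chr(rem))
        text.append("".join(reversed(chars)))
    return "".join(text)

base, out = encode("this is an example", 50_000)
print(out)
print(decode(out, base, 50_000))
```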
1. extract each code point present in the text
2. choose the highest code point number, or one above it, as the maximum number of our base system
3. choose a preferably larger number as the new base
4. extract the text from the file
5. escape linebreaks to some longer idiosyncratic sequence
6. choose a chunk size
7. chunk the text into slices of the chosen chunk size
8. for each slice, convert from the base system to the new, larger base
9. write to file

Figure 5: Simple succession of steps necessary to encode a text with any Unicode character into a higher base.

Figure 6: Encoding of a simple text from base 200 to base 50 000. Note that, because the encoding includes control characters which determine writing direction, the output appears in lines whose beginnings alternate between the right and the left.

Of course, the algorithm allows for millions of variations and points at which a concealment strategy can be refined.

## 7 Discussion

The majority of characters produced are Chinese characters, due to their ubiquity in the Unicode standard. The ciphertext may still appear similar to files where a wrong encoding has been applied, a wrong escaping, some file type such as a binary interpreted as plaintext, perhaps faulty OCR or broken output of some buggy computer program, or purposefully generated random gibberish, or a combination of these, and so forth. The digital age will continue to produce such or similar sequences naturally, and this algorithm is probably one more potential addition to the noise production currently challenging the cleaning methods of developers of machine learning datasets.

Besides concealment, where we can choose either a higher or a lower base with respect to the original, there is another common aim for which shifts to larger base systems are performed, namely compression. To this end, there exist independently developed similar implementations such as the ones propagated on GitHub, which is not surprising given that such a use of Unicode is an obvious choice and simply extends the hexadecimal set consistently.8 The authors mention compression ratios on their GitHub repository and show that such an encoding can be efficient for non-variable byte-size schemes such as UTF-32; however, details are not available. Their use case is fitting more information into a character-limited Tweet rather than compression as such.

Footnote 8: [https://github.com/qmtm/base65536,https://github.com/qmtm/base2048](https://github.com/qmtm/base65536,https://github.com/qmtm/base2048)

Our own tests suggested that, due to the large disk space requirements of higher code points such as Chinese characters, the compression ratio is not as good as with known algorithms such as zip. The reason is that UTF-8 needs only one byte for a character from the ASCII part of the Unicode standard, whereas high code points require three bytes or more. Exploiting that standard for compression and more efficient information transmission, scholars have developed Base64, see [9]. Similarly, Base85 is used and conventionalized for IPv6, see [10]. A good overview is provided on Wikipedia.9 These encodings became widely used for their technical benefit.

Footnote 9: [https://en.wikipedia.org/wiki/Binary-to-text_encoding](https://en.wikipedia.org/wiki/Binary-to-text_encoding), subsection Encoding standards, last accessed Jun 4, 2023.
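As a quick added check of the UTF-8 point made above (illustrative only, not the measurement referred to in the text): the same number of characters costs one byte each in the ASCII range but three bytes each in, for instance, the CJK range.

```python
ascii_text = "this is an example" * 100
cjk_text = "\u4f60\u597d" * 900   # 1800 characters drawn from the CJK range

print(len(ascii_text), len(ascii_text.encode("utf-8")))  # 1 byte per character
print(len(cjk_text), len(cjk_text.encode("utf-8")))      # 3 bytes per character
```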
There is yet another peculiar base system conversion which seems to have come into being in relation to cryptography. It uses base-256, but no digit or character inventory; instead, it uses a word list, the so-called PGP word list (from Pretty Good Privacy).10 This particular encoding does not aim at compression so much: instead of single characters/digits, words are used in a conventionalized sequence. This is strongly reminiscent of the code books used in the cryptology of previous centuries. Again, however, this list, as the 'key', does not have to be exchanged alongside the ciphertext, since it is conventionally known. Base-256 encoding produces a code which can be pronounced and thus, for instance, converted by a text-to-speech engine into the audio modality and vice versa, since for English pretty good systems exist to date, such as OpenAI's Whisper [11].11 Another special feature of the encoding is that a different word is chosen depending on whether the target word stands at an even or odd position in the ciphertext.

Footnote 10: [http://web.mit.edu/network/pgpfone/manual/index.html#GP000062](http://web.mit.edu/network/pgpfone/manual/index.html#GP000062)

Footnote 11: [https://github.com/openai/whisper](https://github.com/openai/whisper)

These different base systems with larger bases have come into being and enrich, or complicate, the world of conveying messages. It seems, however, that for such a system to establish itself, a strong use case such as compression is needed. To this end, an encoding through the Unicode set

## 8 Conclusion

We presented a method to re-encode a plaintext by interpreting it as a number in a sufficiently large base-n system, including at least all characters of the text, and then chunking it and re-encoding it in a larger base. As a conventionalized ordered inventory, we used the Unicode standard. Established encryption methods, especially private-key architectures, are presumably more sophisticated and more secure than the ideas presented here, even if a random permutation of the available character inventories and some other idiosyncratic properties are chosen, as has been demonstrated and discussed with small use cases. However, at least since the Middle Ages, when Arabic scholars such as Al-Qalqashandi used letters and their assigned numerical values, systems such as the one described here have been used to encrypt messages. The article rethinks this method with the ingredients of the digital age and recontextualises it to a thoroughly digital world with many more types of texts and documents. As an encipherment method, the one presented here is rather a play, since one would not invest in implementing and using an easier-to-crack method than what is already in use and can be installed out of the box. The only advantage could be that the method is relatively unknown and easy to implement, and that its result mimics frequently encountered types of noise.
2305.03536
Shaping Next-Generation RAN Topologies to Meet Future Traffic Demands: A Peak Throughput Study
Millimeter-Wave (mm-Wave) Radio Access Networks (RANs) are a promising solution to tackle the overcrowding of the sub-6 GHz spectrum, offering wider and underutilized bands. However, they are characterized by inherent technical challenges, such as a limited propagation range and blockage losses caused by obstacles. Integrated Access and Backhaul (IAB) and Reconfigurable Intelligent Surfaces (RIS) are two technologies devised to face these challenges. This work analyzes the optimal network layout of RANs equipped with IAB and RIS in real urban scenarios using MILP formulations to derive practical design guidelines. In particular, it shows how optimizing the peak user throughput of such networks improves the achievable peak throughput, compared to the traditional mean-throughput maximization approaches, without actually sacrificing mean throughputs. In addition, it indicates star-like topologies as the best network layout to achieve the highest peak throughputs.
Paolo Fiore, Ilario Filippini, Danilo De Donno
2023-05-05T13:44:42Z
http://arxiv.org/abs/2305.03536v1
# Shaping Next-Generation RAN Topologies to Meet Future Traffic Demands: A Peak Throughput Study ###### Abstract Millimeter-Wave (mm-Wave) Radio Access Networks (RANs) are a promising solution to tackle the overcrowding of the sub-6 GHz spectrum, offering wider and underutilized bands. However, they are characterized by inherent technical challenges, such as a limited propagation range and blockage losses caused by obstacles. Integrated Access and Backhaul (IAB) and Reconfigurable Intelligent Surfaces (RIS) are two technologies devised to face these challenges. This work analyzes the optimal network layout of RANs equipped with IAB and RIS in real urban scenarios using MILP formulations to derive practical design guidelines. In particular, it shows how optimizing the peak user throughput of such networks improves the achievable peak throughput, compared to the traditional mean-throughput maximization approaches, without actually sacrificing mean throughputs. In addition, it indicates star-like topologies as the best network layout to achieve the highest peak throughputs. ## I Introduction Since its introduction in 3GPP Release 15, the millimeter-Wave (mm-Wave) radio spectrum has been sought over as a launchpad to reach previously unachievable transmission rates for end users. With the prospect of bandwidths orders of magnitude larger than the ones possible in LTE and 5G sub-6 GHz, combined with a sparsely used spectrum, this frequency range has become very attractive to operators, vendors, and researchers to answer the pressing issue of spectrum overcrowding in sub-6 GHz and to give traction to innovation and development in mobile radio communications [1, 2]. Moreover, the need for a more suitable frequency range for high-speed/low-latency use cases spawns from numerous forecasts, which consistently predict an ever-increasing volume of traffic consumed by mobile devices. As reported in [3], it is expected that, by 2028, the total mobile traffic will increase three-fold compared to 2023, and all of the data traffic growth will come from 5G New Radio (NR) connections, leaving far behind the usage of previous generations of mobile Radio Access Networks (RAN), such as GSM (2G), UMTS (3G), and LTE (4G). The public expectation on mm-Wave frequencies must face the inherent challenges of transmitting with limited propagation range due to severe path loss and blockage loss caused by obstacles at such a high frequency [4]. Suppose an obstacle interrupts a mm-Wave radio link. In that case, the high reflectivity of materials found in urban areas causes the wave to be deflected in unwanted directions, reducing the probability of being detected by the designated receiver [1]. One way to partially solve the propagation shortcomings of mm-Wave frequencies consists in densifying the Radio Access Network (RAN) by installing a greater number of base stations in the considered area. This solution comes with increased installation costs proportional to the desired level of densification. Integrated Access and Backhaul (IAB), a paradigm standardized yet in Release 16, can mitigate the limitations of this approach. Backhaul links between base stations are relocated from the more expensive underground fiber cabling to the radio spectrum. In this way, all radio links can be shortened by deploying simpler and cheaper devices than full-fledged base stations. 
This process effectively reduces path loss and, at the same time, can be less taxing on the capital expenditure (CAPEX), saving up to 85\(\%\) of installation costs as there is no need for wired connections and trenching [5, 6, 7, 8]. Nevertheless, massive RAN densification still represents a challenging topic spawning the search for other alternative solutions. The recent topic of metasurfaces, particularly Reconfigurable Intelligent Surfaces (RIS), is gaining momentum in the mm-Wave research and industry community [9] to boost throughput [10] and resilience to obstacle outages [11, 12]. RIS are planar surfaces made of small radiating elements and have been proven cheaply mass-producible. RIS can steer the impinging waves to any direction in their Field of View (FoV) in a quasi-passive way (also addressing power consumption concerns), and can exploit alternative radio paths that, as previously mentioned, are mostly unavailable, managing to turn around static obstacles (e.g., buildings) and limiting the impact of sudden obstacles (e.g., vehicles and pedestrians). The technologies above are promising enough to have been an object of study in recent publications [13, 14], and a detailed analysis is warranted to evaluate their impact on the connectivity service from a networking perspective. This has produced Mixed-Integer Linear Programming (MILP) models, whose solutions mathematically describe how and where each device should be installed to deploy a RAN that maximizes a preselected indicator. This allows to unequivocally measure the network performance and assess the contribution of these new technologies. Traditionally, the mean user throughput has been employed as the main parameter for network optimization [15]. However, in [16, 17], it has been recently stated that mobile data traffic of type **video** has increased from \(53.72\) (2021) [16] to \(67.60\%\) (2022) [17] of the total mobile traffic and is set to keep increasing in the following years. Therefore, it is time to evaluate different video traffic-friendly metrics as the focus of optimization, even more so considering the different propagation conditions of mm-Waves compared to commonly used lower frequencies. With such an abundance of bandwidth available in the mm-Wave spectrum, ultra-high throughput transmissions will transfer large data volumes in very-short time periods, which will be then consumed in the idle period before the next large-volume burst. A shorter occupation of the RAN resources leads more users to transmit and receive at the maximum rate available. Thus, peak user throughput becomes a potentially crucial optimization parameter. In addition, speed tests are now an extremely common tool to evaluate the quality of a mobile operator; hence, maximizing the peak user throughput will be very effective. This work, to the best of our knowledge, is the first to analyze how an urban RAN topology that is planned by optimizing the peak user throughput compares against a more traditional mean throughput approach. Under the optimality guarantees of MILP models, we show that deploying a peak-throughput optimal topology yields mean-throughput results similar to traditional methods while significantly enhancing the maximum achievable throughput, all at an equal installation cost. 
The remainder of this paper is structured as follows: Section II focuses on the detailed description of the components of the considered network scenario, and Section III describes the MILP network planning models used; Section IV compares the results obtained by the conventional approach and our proposal; final remarks in Section V conclude the paper. ## II System Model In this section, we provide a description of a RIS-empowered mm-Wave IAB Radio Access Network, shown in Figure 1. We describe in detail the main elements of such networks, their behavior, and how they relate to each other in the system model. An IAB network consists of: User Equipments (UE) that must be served; a single IAB Donor, a full-fledged Base Station (BS) and the only node in the RAN cabled to the rest of the network, and IAB Nodes, simple BS capable of giving access to UEs and wireless relaying data from/to other IAB Nodes. Backhaul links between IAB Nodes and access links serving the UEs operate at the same frequency range (_in-band backhauling_). An IAB network assumes a tree topology, as in 3GPP specifications [18], enabling end-to-end connections between the Donor and the UEs in a multi-hop fashion. Due to space limitations, only downlink traffic flow has been considered in this work. Nevertheless, simple amendments can be added to straightforwardly extend it to consider uplink traffic. All links in the RAN use advanced beamforming to improve propagation at the mm-Wave range: thus, given the narrow nature of the beams, in concert with a half-duplex operation mode of every device and a continuous Time Division Multiplexing (TDM) approach1, the impact of mutual interference is typically minimal. Therefore, we consider interference between different links negligible [19, 20]. Footnote 1: We considered the timesharing resources to be allocated in a continuous temporal frame instead of a real-scenario discrete resource block The use of RISs in an IAB Network is mainly motivated by their ability to impact EM propagation in order to improve channel conditions in case of obstacles. RISs operate as passive beamformers, capable of redirecting an incident radio signal toward a desired direction. UEs can be served by a single direct link from an IAB Node or by a Smart Radio Connection (SRC), a triplet involving a UE and both an IAB Node and a RIS. IAB Nodes are expected to dynamically command and change SRC configurations [9] to activate the reflection through the RIS, within their FoV, to improve throughput during obstacle obstruction, and thus resilience to obstacle blockage. We adopt the mm-Wave channel model provided in [21], which includes the effect of RISs, as well as the physical characterization of the involved devices in the RAN (IAB Donor, IAB Node, RIS, and UE). We consider nomadic obstacles modeled as in [21], where the blockage probability and blockage loss are calculated from Monte-Carlo simulations and then fitted to derive a probability distribution. We also consider the self-blockage zone, modeled as in 3GPP specifications, which consists of a circular sector centered in the UE position where the signal attenuation is increased to model the effect of users' body [22]. We consider static and nomadic obstacles. Static obstacles (i.e., buildings) can be avoided during the network planning phase so that IAB Nodes can be connected only if in Line-of-Sight (LoS) conditions. 
Vice versa, the heights of nomadic obstacles are typically smaller than those of IAB Nodes' and RIS' installation sites, therefore their presence will affect only access links from the access nodes (IAB Nodes and RISs) to the UEs. For these reasons, we assume all backhaul links are always in Line-of-Sight (LoS) conditions, and their capacity (in Mb/s) is constant. On the other hand, access links can be interrupted by nomadic obstacles, the user's body, or both; therefore, their average capacity is weighted by the probability of being in every possible blockage state (including both direct link and SRC links). Moreover, when an SRC is available, only the best link between the direct IAB-UE and the reflected RIS-UE is selected as the access link. Fig. 1: RAN scenario with different types of obstacles. ## III MILP Models We now detail two different MILP formulations to provide optimal IAB network topology: the Mean-Throughput Formulation, a baseline formulation in which the objective function maximizes the mean user throughput, and the Peak-Throughput Formulation, an extension of the baseline, in which the peak user throughput becomes the objective of the maximization. Both formulations share a common notation, defined as follows. Adopting the standard approach used in the literature [23], we define a set \(\mathcal{C}\) of Candidate Sites (CS) where a network device (the IAB Donor, an IAB Node, or a RIS) can be installed over an urban area planned to be covered by a mmWave network. Inside \(\mathcal{C}\), two CSs \(\hat{c}\) and \(\tilde{c}\) are selected for the installation of two fixed special devices: \(\hat{c}\) is always reserved for the IAB Donor. In contrast, \(\tilde{c}\) is a placeholder CS for a "fake" RIS. This method lets the solver decide whether to assign a real SRC (a BS, a UE, and a RIS) or a "fake" SRC (a BS, a UE, and the "fake" RIS), thus a direct connection, to each UE, without the need to introduce additional variables to characterize the two different kinds of access connections. Test Points (TP), centroids of traffic mimicking the geographical distribution of UEs in the area, are represented by set \(\mathcal{T}\). All physical characteristics of an SRC, such as SNR and blockage loss, are encapsulated in the binary activation parameters \(\Delta\) and the achievable rate parameters \(C\in\mathbb{R}^{+}\). Parameter \(\Delta_{t,c,r}^{SRC}\) equals \(1\) when an SRC can be established between TP \(t\in\mathcal{T}\), an IAB device installed in \(c\in\mathcal{C}\), and a RIS in CS \(r\in\mathcal{C}\). Similarly, parameter \(\Delta_{c,d}^{BH}\) indicates the availability of the inband-backhaul link between IAB nodes \(c,d\in\mathcal{C}\). When an SRC can be established, \(C_{t,c,r}^{SRC}\) is the weighted average of capacities calculated in various states of blockage, while \(C_{t,c,r}^{RIS}\in\mathbb{R}^{+}\) is the achievable rate of SRC \((t,c,r)\) when only the reflected path through the RIS is available. Finally, \(C_{c,d}^{BH}\in\mathbb{R}^{+}\) defines the capacity of a backhaul link between two nodes \(c,d\in\mathcal{C}\). A minimum amount of demand \(D\), measured in Mb/s, must be served to each TP. Every RIS has an associated azimuthal angle \(F\), representing its Field of View (FoV), where both an IAB Node and a TP must lie in to be able to use the RIS2. 
To evaluate if two devices fall within the FoV of the RIS, parameters \(\Phi_{r,t}^{\text{A}}\), \(\Phi_{r,c}^{\text{B}}\in[0,2\pi]\) must be defined: the former represents the angle between RIS \(r\in\mathcal{C}\) and TP \(t\in\mathcal{T}\), and the latter the angle between RIS \(r\) and BS \(c\), with \(r,c\in\mathcal{C}\). Footnote 2: The elevation angle of the FoV of the RIS is managed in pre-processing given the fixed height of the involved devices in the planning scenario. Finally, the selection of network devices to deploy is constrained by budget \(B\), and \(P^{\text{IAB}}\) and \(P^{\text{RIS}}\) are the prices for IAB Nodes and RISs, respectively. The network planning model's solution consists in assigning a value to the decision variables in Table I, as per the goal stated by the selected objective function. Said variables determine which devices will be installed and where (\(y_{c}^{\text{DQN}},y_{c}^{\text{IAB}},y_{c}^{\text{RIS}}\)), how they will be connected to each other (\(x_{t,c,r},z_{c,d}\)), how much traffic will flow through each link (\(f_{c,d},g_{t,c,r},w_{c}\)), how the time resources of each BS are employed (\(t_{c}^{\text{TX}},t_{c}^{\text{RX}}\)), and how the RIS will be oriented (\(\phi_{c}\)). ### _Mean-Throughput Formulation (MTF)_ The MTF is defined by the following objective and constraints: \[\max\sum_{t\in\mathcal{T},c,r\in\mathcal{C}}g_{t,c,r}\] (1a) s.l.: \[y_{c}^{\text{Lab}}+y_{c}^{\text{RIS}}\leq 1, \forall c\in\mathcal{C}, \tag{1b}\] \[y_{c}^{\text{DON}}\leq y_{c}^{\text{IAB}}, \forall c\in\mathcal{C},\] (1c) \[\sum_{c\in\mathcal{C}}y_{c}^{\text{DON}}\leq 1,\] (1d) \[y_{c}^{\text{RIS}}\geq 1,\] (1e) \[z_{c,d}\leq\Delta_{c,d}^{\text{BH}}\left(y_{c}^{\text{IAB}}+y_{ d}^{\text{IAB}}\right)/2, \forall c,d\in\mathcal{C},\] (1g) \[x_{t,c,r}\leq\Delta_{t,c,r}^{\text{SRC}}\left(y_{c}^{\text{IAB}}+ y_{r}^{\text{RIS}}\right)/2, \forall t\in\mathcal{T},c,r\in\mathcal{C},\] (1h) \[\sum_{c,r\in C}x_{t,c,r}=1, \forall t\in\mathcal{T},\] (1i) \[\sum_{d\in\mathcal{C}}z_{d,c}\leq 1-y_{c}^{\text{DON}}, \forall c\in\mathcal{C},\] (1j) \[\sum_{c\in\mathcal{C}\setminus\{\tilde{c},\tilde{c}\}}\left(P^{ \text{IAB}}y_{c}^{\text{IAB}}+P^{\text{RIS}}y_{c}^{\text{RIS}}\right)\leq B,\] (1k) \[w_{c}+\sum_{d\in\mathcal{C}}\left(f_{d,c}-f_{c,d}\right)-\sum_{ \begin{subarray}{c}t\in\mathcal{T}\\ r\in\mathcal{C}\end{subarray}}g_{t,c,r}=0, \forall c\in\mathcal{C},\] (1l) \[f_{c,d}\leq C_{d,c}^{\text{BH},d}z_{c,d}, \forall c,d\in\mathcal{C},\] (1m) \[Dx_{t,c,r}\leq g_{t,c,r}\leq C_{t,c,r}^{\text{IAB}}x_{t,c,r}, \forall t\in\mathcal{T},c,r\in\mathcal{C},\] (1n) \[w_{c}\leq M^{\text{MAX}}y_{c}^{\text{DON}}, \forall c\in\mathcal{C},\] (1o) \[t_{c}^{\text{TX}}=\sum_{d\in\mathcal{C}}\frac{f_{c,d}}{C_{c,d}^{ \text{BH}}}+\sum_{\begin{subarray}{c}t\in\mathcal{T}\\ r\in\mathcal{C}\end{subarray}}\frac{g_{t,c,r}}{C_{t,c,r}^{\text{SRC}}}, \forall c\in\mathcal{C},\] (1p) \[t_{c}^{\text{RX}}=\sum_{d\in\mathcal{C}}\frac{f_{d,c}}{C_{d,c}^{ \text{BH}}}, \forall c\in\mathcal{C},\] (1q) \[t_{c}^{\text{TX}}+t_{c}^{\text{RX}}\leq y_{c}^{\text{IAB}}, \forall c\in\mathcal{C},\] (1r) \[\sum_{\begin{subarray}{c}t\in\mathcal{T}\\ c\in\mathcal{C}\end{subarray}}\frac{g_{t,c,r}}{C_{t,c,r}^{\text{RIS}}}\leq y_{r}^{ \text{RIS}}, \forall r\in\mathcal{C}\setminus\tilde{c},\] (1s) \[\phi_{r}\geq\Phi_{r,t}^{\text{A}}-F/2-2\pi(1-x_{t,c,r}), \forall t\in\mathcal{T},c,r\in\mathcal{C}\setminus\tilde{c}, \tag{1l}\] \begin{table} \begin{tabular}{|c|c|} \hline **Variable** & **Description** \\ \hline 
\(y_{c}^{\text{DON}}\in\{0,1\}\) & Installation of IAB Donor in CS \(c\in\mathcal{C}\). \\ \hline \(y_{c}^{\text{IAB}}\in\{0,1\}\) & Installation of IAB Node in CS \(c\in\mathcal{C}\). \\ \hline \(y_{c}^{\text{RIS}}\in\{0,1\}\) & Installation of RIS in CS \(c\in\mathcal{C}\). \\ \hline \(x_{t,c,r}\in\{0,1\}\) & Activation of SRC (\(t,c,r\)), \(t\in\mathcal{T},c,r\in\mathcal{C}\). \\ \hline \(z_{c,d}\in\{0,1\}\) & Activation of backhaul link (\(c,d\)), \(c,d\in\mathcal{C}\). \\ \hline \(f_{c,d}\in\mathbb{R}^{+}\) & Traffic on backhaul link (\(c,d\)), \(c,d\in\mathcal{C}\). \\ \hline \(g_{t,c,r}\in\mathbb{R}^{+}\) & Traffic on SRC (\(t,c,r\)), \(t\in\mathcal{T},c,r\in\mathcal{C}\). \\ \hline \(w_{c}\in\mathbb{R}^{+}\) & Total traffic to the IAB Donor, \(c\in\mathcal{C}\). \\ \hline \(t_{c}^{\text{TX}}\in[0,1]\) & Transmission time ratio of BS installed in CS \(c\in\mathcal{C}\). \\ \hline \(t_{c}^{\text{RX}}\in[0,1]\) & Reception time ratio of BS installed in CS \(c\in\mathcal{C}\). \\ \hline \(\phi_{c}\in[0,2\pi]\) & Azimuthal orientation of RIS installed in CS \(c\in\mathcal{C}\). \\ \hline \multicolumn{2}{|c|}{**PTF flow variables**} \\ \hline \(f_{t,c,d}^{\text{X}}\in\mathbb{R}^{+}\) & Extra traffic on backhaul link (\(c,d\)), \(c,d\in\mathcal{C}\). \\ \hline \(g_{t,c,r}^{\text{X}}\in\mathbb{R}^{+}\) & Extra traffic on SRC (\(t,c,r\)), \(t\in\mathcal{T},c,r\in\mathcal{C}\). \\ \hline \(w_{t,c}^{\text{X}}\in\mathbb{R}^{+}\) & Total extra traffic to the IAB Donor, \(c\in\mathcal{C}\). \\ \hline \end{tabular} \end{table} Table I: Decision variables of the two formulations. \[\phi_{r} \leq\Phi_{r,t}^{\text{A}}+F/2+2\pi(1-x_{t,c,r}), \forall t\in\mathcal{T},c,r\in\mathcal{C}\setminus\tilde{c}, \tag{1u}\] \[\phi_{r} \geq\Phi_{r,c}^{\text{B}}-F/2-2\pi(1-x_{t,c,r}), \forall t\in\mathcal{T},c,r\in\mathcal{C}\setminus\tilde{c},\] (1v) \[\phi_{r} \leq\Phi_{r,c}^{\text{B}}+F/2+2\pi(1-x_{t,c,r}), \forall t\in\mathcal{T},c,r\in\mathcal{C}\setminus\tilde{c}. \tag{1w}\] The MTF objective function (1a) maximizes the sum-throughput of all UEs3. Footnote 3: The value of the objective function is divided by \(|\mathcal{T}|\) in post-processing to obtain the mean user throughput. _Deployment constraints (1b-1f):_ Constr. (1b) guarantees mutual exclusivity between IAB Nodes and RISs in a specific CS \(c\in\mathcal{C}\), constr. (1c) enables the possibility of an IAB Node being promoted to a Donor, constr. (1d) guarantees at most a single Donor, while constr. (1e) and constr. (1f) install the "fake" RIS and the Donor in \(\tilde{c},\hat{c}\in\mathcal{C}\), respectively. _Link-Activation constraints (1g-1j):_ Constr. (1g) activates a backhaul link between two CSs \(c,d\in\mathcal{C}\) if both of them have an IAB Node installed and the physical characteristics of the potential link are favorable (\(\Delta_{c,d}^{BH}=1\)). In the same way, in (1h), an SRC is activated if a BS is installed in \(c\) and a RIS in \(r\), and the binary parameter \(\Delta_{t,c,r}^{SRC}\) is equal to 1, \(c,r\in\mathcal{C}\). Constr. (1i) forces each TP to be served by a single SRC, and constr. (1j) guarantees the deployment of a tree topology. _Budget constraint:_ Constr. (1k) limits the acquisition of devices to the budget \(B\).4 _Flow constraints (1l-1o):_ Constr. (1l) guarantees flow balance at any BS in the tree, constr. (1m) upper bounds backhaul link flows by the link capacities, and constr. (1n) imposes both the minimum demand \(D\) and the maximum SRC capacity. Finally, constr. (1o) limits the traffic entering the RAN from the Core Network, under the assumption that this quantity cannot exceed the capacity of the best-performing link coming out of the Donor, indicated by \(M^{\text{MAX}}\).
Note that, although not strictly necessary, these types of constraints help reduce the solution time by tightly shaping the solution space. _Resource-sharing constraints (1p-1s):_ Constr. (1p) defines the timeshare (assuming \(1\) to be \(100\)% of the available time) dedicated to transmission for any BS (in both the access and backhaul phases). Similarly, constr. (1q) defines the reception timeshare. Constr. (1r) enforces half-duplex operation mode considering both transmission and reception timeshares, while constr. (1s) manages the timeshare of a RIS installed in CS \(r\in\mathcal{C}\) among different SRCs \(\{(t_{1},c_{1},r),\cdots,(t_{n},c_{n},r)\},t_{1}\cdots t_{n}\in\mathcal{T},c_{1 }\cdots c_{n},r\in\mathcal{C}\). _RIS-orientation constr. (1t-1w):_ These constraints set the value of the orientation variable \(\phi_{c}\) dependent according to the angles between the involved devices and force angles of reflection links to lie within the FoV of the RIS, if any.5 Footnote 5: This constraint makes sure not to consider in the budget the IAB Donor (which is a fixed cost and thus not included in the variable budget) and the “fake” RIS (which is not an actual device but rather a way to keep the formulation compact and easy to manage). ### _Peak-Throughput Formulation (PTF)_ The Peak-Throughput Formulation extends the Mean-Throughput formulation by adding some variables and constraints. The other parts of the model are inherited from the previous formulation. As mentioned in Section I, when users manage to establish a peak-throughput connection, the event is akin to a traffic burst, characterized by short duration and no correlation with other users' traffic. For this reason, the allocation of the extra traffic enabled by the peak-throughput is fundamentally different from the average traffic demand \(g_{t,c,r}\), and must be modeled with a distinct approach. While being routed on different links from the IAB Donor to the final UE, the average demand \(g_{t,c,r}\) for SRC (\(t,c,r\)) shares the same time resources with all the other SRCs in its path. The extra traffic \(g_{t,c,r}^{\text{X}}\), on the other hand, does not share the resources with the other SRCs, but is allocated as if any considered UE is the only one in the network and can reserve all the capacity that is not used by the guaranteed traffic \(g_{t,c,r}\). The extra traffic of an SRC (\(t,c,r\)) is defined by the **spare capacities** of the links in its route from the IAB Donor to the UE; its value is the one of the BS with the least resources available (_bottleneck BS_). The timeshares which are already reserved for the mean-throughput traffic of all UEs are still guaranteed by constraints (1p-1s) and thus remain untouched. Therefore, we add further flow variables to capture peak-throughput traffic. They are listed in Table I. This model does not need to maximize the mean user throughput; all instances of variable \(g_{t,c,r}\) in MTF (in constraints 1l,1n,1p, and 1s) can be replaced with \(Dx_{t,c,r}\), resulting in a leaner formulation. 
Objective function and constraints characterizing PTF are: \[\max\sum_{t\in\mathcal{T},c,r\in\mathcal{C}}g_{t,c,r}^{\text{X}}\] (2a) s.t.: \[w_{t,c}^{\text{X}}+\sum_{d\in\mathcal{C}}\left(f_{t,d,c}^{\text {X}}-f_{t,c,d}^{\text{X}}\right)-\,\sum_{r\in\mathcal{C}}g_{t,c,r}^{\text{X}}=0, \forall t\in\mathcal{T},c\in\mathcal{C}, \tag{2b}\] \[f_{t,c,d}^{\text{X}}\leq C_{d,\text{AC}}^{\text{EH}}c_{d,d}, \forall t\in\mathcal{T},c,d\in\mathcal{C},\] (2c) \[g_{t,c,r}^{\text{X}}\leq C_{t,c,r}^{\text{SRC}}x_{t,c,r}, \forall t\in\mathcal{T},c,r\in\mathcal{C},\] (2d) \[w_{t,c}^{\text{X}}\leq M^{\text{MAX}}y_{t}^{\text{DON}}, \forall t\in\mathcal{T},c\in\mathcal{C},\] (2e) \[\sum_{r\in\mathcal{C}}\frac{g_{t,c,r}^{\text{X}}}{C_{t,c,r}^{\text {SRC}}}+\sum_{d\in\mathcal{C}}\frac{f_{t,c,c}^{\text{X}}}{C_{d,c}^{\text{BH}}} \leq y_{c}^{\text{LAB}}-t_{c}^{\text{TX}}-t_{c}^{\text{RX}}, \forall c\in\mathcal{C},t\in\mathcal{T},\] (2f) \[\sum_{d\in\mathcal{C}}(\frac{f_{t,d,c}^{\text{X}}}{C_{d,c}^{\text{ BH}}}+\frac{f_{t,c,d}^{\text{X}}}{C_{d,d}^{\text{BH}}})\leq y_{c}^{\text{LAB}}-t_{c}^{ \text{TX}}-t_{c}^{\text{RX}}, \forall c\in\mathcal{C},t\in\mathcal{T},\] (2g) \[\sum_{c\in\mathcal{C}}\frac{g_{t,c,r}^{\text{X}}}{C_{t,c,r}^{\text {X}}}\leq y_{r}^{\text{RIS}}-\sum_{\begin{subarray}{c}\in\mathcal{T}\\ \in\mathcal{C}\end{subarray}}\frac{Dx_{r,c,r}}{C_{r,c}^{\text{RES}}}, \forall r\in\mathcal{C}\setminus\tilde{c},t\in\mathcal{T}. \tag{2h}\] The objective function (2a) maximizes the sum of the users' peak throughputs instead of the mean throughput of the previous formulation. Constr. (2b), similar to constr. (1l), imposes the flow balance of the extra traffic routing through the tree; constr. (2c-2e) are the counterparts of capacity-related constr. (1m-1o), but involving the peak-throughput traffic. Finally, constr. (2f-2h) respectively model the BS' timeshare assigned to peak-throughput traffic for a BS involved in an SRC (2f), for a BS only involved in peak-throughput traffic backhauling (2g), and for a RIS (2h). ### _Peak-Throughput Heuristics_ The additional complexity introduced by peak-throughput traffic resulted in a formulation that was too hard to be solved in a reasonable amount of time (e.g., a single instance reached an unsatisfactory optimality gap of \(30\%\) in one hour); therefore, we have developed a heuristic approach to speed up solving time while keeping a high level of quality of solutions. The technique consists in reducing the \(M^{\text{MAX}}\) parameter, representing the highest-capacity backhaul link of the IAB Donor, to a fraction of its actual value, a sort of _forced bottleneck_ approach. This adjustment significantly reduces the solution space, speeding up the _branch-and-cut_ phase of the solution search. This heuristic approach has been validated by comparing its results to those of the exact formulation in a set of smaller scenarios. Due to space limitations, we do not report the complete performance analysis. Nevertheless, the two approaches produced very similar performance trends; the heuristic approach reduces the solution time to an average of 6 seconds within a gap smaller than \(5\%\). ## IV Results In this section, we will compare the performances of the network scenarios optimized according to the mean-throughput formulation (MTF) against the ones obtained by applying the peak-throughput formulation (PTF), solved using the heuristic approach in Section III-C. 
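Purely as an added illustration of the forced-bottleneck idea of Section III-C (a toy three-node flow model with made-up capacities, written against the open-source PuLP interface rather than the CPLEX setup used for the actual experiments): capping the Donor injection \(M^{\text{MAX}}\) shrinks the feasible region while, as long as the cap stays above what the rest of the network can carry, the optimum is unchanged.

```python
import pulp

CAP = {("donor", "n1"): 2000, ("n1", "n2"): 1200}   # backhaul capacities (Mb/s, made up)
ACCESS = {"n1": 900, "n2": 700}                      # access capacities to one UE per node
M_MAX = max(CAP.values())                            # best link out of the Donor

def solve(m_max):
    prob = pulp.LpProblem("toy_iab", pulp.LpMaximize)
    f = {e: pulp.LpVariable(f"f_{e[0]}_{e[1]}", lowBound=0, upBound=c)
         for e, c in CAP.items()}
    g = {n: pulp.LpVariable(f"g_{n}", lowBound=0, upBound=c)
         for n, c in ACCESS.items()}
    w = pulp.LpVariable("w", lowBound=0, upBound=m_max)   # traffic injected at the Donor
    # Flow balance along the simple Donor -> n1 -> n2 chain.
    prob += w == f[("donor", "n1")]
    prob += f[("donor", "n1")] == g["n1"] + f[("n1", "n2")]
    prob += f[("n1", "n2")] == g["n2"]
    prob += pulp.lpSum(g.values())                        # maximize served traffic
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

print(solve(M_MAX))          # unconstrained Donor injection
print(solve(0.85 * M_MAX))   # forced bottleneck: same optimum here, smaller feasible region
```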
We first investigate mean and peak throughputs achieved by both formulations, then we compare topological aspects to obtain a comprehensive performance analysis of the strengths and weaknesses of each model. The solutions are also iterated over progressively increasing values of the available budget to observe how much CAPEX will impact the quality of the solutions found. The instances subject to network planning optimization are defined by a \(150m\) radius hexagonal cell deployment area centered around a randomly chosen point within a full 3D representation of the metropolitan area of Milan [24], which includes buildings as opaque static obstacles of an actual urban scenario. The installation of the IAB Donor is fixed to the leftmost vertex of the hexagon; 25 CSs and 15 TPs are then randomly placed in the area according to surrounding buildings. The IAB Donor and the IAB Nodes are composed of three \(120^{\circ}\) sectors with \(16x12\) element panel arrays (\(12x8\) for the IAB Node) and a \(58dBm\) EIRP (\(51dBm\) for the IAB Node), a carrier frequency of \(28GHz\) and a bandwidth of \(200MHz\). The UEs are modeled as a \(2\)x\(2\) antenna array. RISs are made of \(100\)x\(100\) passive elements in a rectangular array, with a FoV of \(120^{\circ}\). All devices have \(\frac{\lambda}{2}\) spacing in both directions between elements. The IAB Donor is set at a \(25m\) height, the IAB Nodes at \(6m\), the RISs at \(3m\), and the UEs at \(1.5m\). The price of IAB Nodes is normalized at \(1\) unit of cost, while the expected inexpensive production process of RISs compared to IAB Nodes is encapsulated in a price ten times smaller at \(0.1\). The total available budget spans from \(6\) to \(20\) units, with steps of \(0.2\). Minimum UE demand is fixed at \(150Mbps\). All the link capacities are computed according to Shannon's capacity. All the results are averaged over \(80\) random deployment instances generated through MATLAB and solved with IBM ILOG CPLEX. The maximum optimality gap between the feasible solutions found and their respective continuous relaxation upper bound is fixed at \(5\%\). ### _Performance Comparison_ Figure 1(a) shows how RIS usage is more prominent in the MTF planning (dashed blue curve), where, on average, four RIS are installed in each cell; in PTF (dashed yellow curve), instead, RIS reduces to about 2 per scenario, indicating that they are not suitable devices to increase peak throughput. As for IAB Nodes, the two models perform in the same way up to around 10 units of budget. Beyond that point, MTF (solid blue curve) tends to buy more IAB Nodes compared to PTF (solid yellow curve). In Figure 1(b), the metrics driving the whole optimization are shown: the solid lines represent the mean throughput achievable in cells planned with MTF (blue) and PTF (yellow), while the dashed lines indicate the related peak throughput. It is clear from these results that even if PTF's results derive from a heuristic approach, peak throughputs in PTF-planned layouts gain more than \(150Mbps\) compared to those of the same scenarios planned with MTF, while at the same time obtaining a similar mean throughput. Remarkable is the fact that there is no need to increase the available budget significantly above 12 units since the mean throughput is almost constant throughout the plot, and the peak throughput does not show meaningful improvement, further increasing the budget. 
These results show that the PTF-planned networks obtain a quasi-optimal mean throughput (i.e., like MTF planning) while notably improving the peak throughput for the users, so they can fully exploit the entire capacity of the RAN.

Fig. 2: Device installation and throughput sensitivity to budget variation.

### _Topology Features_ In this subsection, we investigate how the different throughput performance is reflected in the different network topologies generated by the two formulations. Since the IAB network is intrinsically connected to the concept of multi-hop forwarding, it is important to check the average path depth of the tree from the IAB Donor to a generic UE, measured in number of hops. Together with the depth, another relevant aspect of a tree topology is its degree, that is, how many subtrees spawn from each node. In particular, the degree of the root node (the IAB Donor) can be used as a metric to evaluate how close to a star topology the RAN is. In Fig. 3a, PTF obtains a consistently smaller average number of hops per user than MTF, indicating that its trees must be shallower than those generated by MTF. The degree of the Donor is shown in Fig. 3b. PTF not only assigns the Donor more incident links than MTF, but the two curves have opposite trends as the available budget increases: PTF gets closer to a star topology, while MTF departs from it. As a proof of concept, we report in Fig. 4 an instance with a significant difference between the peak throughput obtained by the two models; it is evident how PTF (Fig. 4b) selects star-like topologies compared to MTF (Fig. 4a). For completeness, we include the same instance, shown from a three-dimensional isometric view, in Fig. 5. ## V Conclusion In this paper, we analyzed the potential of mm-Wave IAB RANs equipped with RISs to provide peak and mean throughput traffic. Our analysis has demonstrated that optimizing the network layout of mm-Wave RANs integrated with IAB and RIS considering the peak user throughput can significantly improve the achievable peak throughput in realistic urban scenarios compared to the traditional mean-throughput approach. In addition, these layouts can guarantee a mean throughput comparable with the one achievable via traditional planning approaches. Network layouts must move towards shallow, star-like topologies to pursue peak throughput maximization. This has important practical implications for the development and deployment of next-generation networks, particularly in urban environments, where the demand for ultra-high-speed connectivity is rapidly increasing. ## Acknowledgment The research in this paper has been carried out in the framework of the Huawei-Politecnico di Milano Joint Research Lab. The Authors acknowledge the Huawei Milan research center for the collaboration.
2310.19900
New spinorial mass-quasilocal angular momentum inequality for initial data with marginally future trapped surface
We prove a new geometric inequality that relates the Arnowitt-Deser-Misner mass of initial data to a quasilocal angular momentum of a marginally future trapped surface inner boundary. The inequality is expressed in terms of a 1-spinor, which satisfies an intrinsic first-order Dirac-type equation. Furthermore, we show that if the initial data is axisymmetric, then the divergence-free vector used to define the quasilocal angular momentum cannot be a Killing field of the generic boundary.
Jarosław Kopiński, Alberto Soria, Juan A. Valiente Kroon
2023-10-30T18:10:24Z
http://arxiv.org/abs/2310.19900v2
New Spinorial Mass-Quasilocal Angular Momentum Inequality for Initial Data with marginally Outer Trapped Surface ###### Abstract. We prove a new geometric inequality that relates the Arnowitt-Deser-Misner (ADM) mass of initial data to a quasilocal angular momentum of a marginally outer trapped surface (MOTS) inner boundary. The inequality is expressed in terms of a 1-spinor, which satisfies an intrinsic first-order Dirac-type equation. Furthermore, we show that if the initial data is axisymmetric, then the divergence-free vector used to define the quasilocal angular momentum cannot be a Killing field of the generic boundary. ## 1. Introduction Geometric inequalities arise naturally in General Relativity (GR) as relations involving quantities characterizing black holes, like mass, angular momentum, and horizon area. One of the most significant example of a geometric inequality is the positive (ADM) mass theorem. It was first proven by Schoen and Yau in [21] for manifolds of dimension \(d\leq 7\), and later by Witten with the use of spinor formalism for arbitraty \(d\)[27]. The latter proof was extended to the case of initial data with trapped surfaces by Gibbons, Hawking, Horowitz, and Perry in [10]. The spinorial approach was further adapted by Ludvigsen and Vickers in the context of the Bondi mass in [17]. A refined version of the positivity of the ADM mass has been formulated by Penrose in the form of a lower bound on this quantity in terms of the horizon area of a black hole. If true, it would provide further evidence in favor of the weak cosmic censorship conjecture [19]. There exists a stronger version of the Penrose inequality involving the angular momentum of the initial data, namely \[m\geq\left(\frac{|S|}{16\pi}+\frac{4\pi J^{2}}{|S|}\right)^{\frac{1}{2}}, \tag{1}\] where \(m\), \(J\) and \(|S|\) are the ADM mass, angular momentum and the outermost apparent horizon area respectively--see e.g. [8, 9, 18] for more details. It admits a rigidity case, where equality exclusively occurs for the Kerr black hole. A quasilocal version of this relation states that \[m\geq\left(\frac{|S|}{16\pi}+\frac{4\pi J_{BH}^{2}}{|S|}\right)^{\frac{1}{2}},\] where \(J_{BH}\) is the quasilocal angular momentum of the horizon. Geometric inequalities for black holes remain a very active area of research with new interesting results being obtained. Among them is a bound on the ADM energy in terms of horizon area, angular momentum, and charge obtained by Jaracz and Khuri in [11]. In [2, 3] a different approach was considered by Anglada, who used the monotonic properties of the Geroch and Hawking energy along the inverse mean curvature flow in order to prove a Penrose-like inequality with angular momentum. The first author and Tafel considered perturbations of Schwarzschild data and showed that (1) holds in this setting [14, 15]. Another refinement to the Penrose inequality with angular momentum has been proven by Alaee and Kunduri for 4-dimensional biaxially symmetric maximal initial data [1]. Additionally, recent numerical results such as the ones obtained in [16] give support to the validity of (1) in its full generality. The examples presented above are far from exhaustive, providing a glimpse into contemporary research in geometric inequalities in GR. We refer the reader to [9, 18] for further references. 
In the present work a spinorial approach is used to obtain a geometric inequality involving the ADM mass of the initial data and a quasilocal angular momentum (a la Szabados [24, 25]) of the MOTS inner boundary. It generalises the result presented in [13] to the case of non-vanishing connection 1-form on the normal bundle of the boundary. The solvability of the boundary value problem for the so-called _approximate twistor equation_ is still an essential ingredient for deriving the main result. The existence of solution is used to obtain a basic mass inequality \[4\pi m\geq\sqrt{2}\oint_{\partial\mathcal{S}}\widehat{\phi}^{A}\gamma_{A}{}^{B }\mathcal{D}_{BC}\phi^{C}dS, \tag{2}\] where \(\mathcal{D}_{AB}\) and \(\gamma_{AB}\) are the 2-dimensional Sen connection and the complex metric on the boundary respectively (see below for details), while \(\phi_{A}\) is a valence 1 spinor on \(\partial\mathcal{S}\). The right-hand side of (2) can be rewritten in terms of the inner null expansion \(\theta^{-}\) of the boundary and the aforementioned angular momentum, provided that \(\phi_{A}\) satisfies a certain first-order Dirac-type equation. Ultimately, \[4\pi m\geq\sqrt{2}\oint_{\mathbb{S}^{2}}\rho^{\prime}|\widetilde{\phi}_{0}|^{ 2}\Omega d\mathbb{S}^{2}+\frac{\kappa}{\sqrt{2}}O\left[\widetilde{\phi},U \right], \tag{3}\] where \(\rho^{\prime}=-\frac{\theta^{-}}{2}\), \(\widetilde{\phi}_{A}\) is a Dirac eigenspinor on \(\mathbb{S}^{2}\), \(\Omega\) is a conformal factor relating the metrics of \(\partial\mathcal{S}\) and \(\mathbb{S}^{2}\) and \(O\left[\widetilde{\phi},U\right]\) a quasilocal angular momentum depending on \(\widetilde{\phi}_{A}\) and a rotation potential \(U\) defined below. It should be noted that the integrals in (3) are now taken with respect to the 2-sphere volume element. A natural symmetry associated with the angular momentum is the existence of axial Killing vector. Therefore, with such assumption we analyze a scenario where the quasilocal angular momentum is generated by such vector (on top of arising from a spinor \(\phi_{A}\)) and show that it is in fact impossible for a generic MOTS inner boundary \(\partial\mathcal{S}\). The article is structured as follows: Section 2 provides a discussion of our main mathematical tools, in particular a new formalism for the \(1+1+2\) decomposition of spinors. Section 3 is an adaptation of the result of [13] to the case of non-vanishing connection 1-form on the normal bundle of \(\partial\mathcal{S}\). In Section 4 we present the main result of this work, a new mass-quasilocal angular momentum inequality for the initial data with a MOTS. In the last section we particularise our analysis to the axisymmetric setting and show that the divergence-free vector generating the quasilocal angular momentum cannot arise simultaneously from a first-order Dirac-type equation and be a Killing vector of the boundary. In the following, 4-dimensional metrics are considered to have the signature \((+---)\). As a result, Riemannian 3- and 2-dimensional metrics will be negative definite. Whenever appropriate, we will expand spinorial expressions using either the Geroch-Held-Penrose (GHP) or Newman-Penrose (NP) formalism, following the conventions outlined in [20]. Throughout this paper, we employ abstract index notation, with lowercase letters representing tensorial indices and uppercase letters representing spinorial indices. Bold font will be used to denote components in a basis. ## 2. 
Preliminaries ### Basic setting An initial data set \((\mathcal{S},h_{ab},K_{ab})\) for the vacuum Einstein field equations is said to be _asymptotically Schwarzschildean_ if the metric \(h_{ab}\) and the second fundamental form \(K_{ab}\) satisfy the decay conditions \[h_{ab}=-\left(1+\frac{2m}{r}\right)\delta_{ab}+o_{\infty}(r^{-3/2}), \tag{4a}\] \[K_{ab}=o_{\infty}(r^{-5/2}), \tag{4b}\] with \(r^{2}\equiv(x^{1})^{2}+(x^{2})^{2}+(x^{3})^{2}\), \((x^{\alpha})=(x^{1},x^{2},x^{3})\) being asymptotically Cartesian coordinates and \(m\) the ADM mass. In this work we assume that \(\mathcal{S}\) has an inner boundary \(\partial\mathcal{S}\) which is a topological 2-sphere and is equipped with a metric \(\sigma_{ab}\). We consider a \(1+1+2\) spinor formalism, first proposed in [23] by Szabados, and based on the use of \(SL(2,\mathbb{C})\) spinors. Maintaining the same philosophy as in [13], the so-called \(SU(2,\mathbb{C})\) spinors (or space spinors) introduced in [22] will be essential for our purposes since they allow one to work efficiently on spacelike hypersurfaces. For more information on the spinor formalism, we refer the reader to [4, 26]. Let \(\tau^{AA^{\prime}}\) be a spinorial counterpart of the orthogonal future vector \(\tau^{a}\) to \(\mathcal{S}\) such that \(\tau_{AA^{\prime}}\tau^{AA^{\prime}}=2\). Likewise, we will denote a spinorial counterpart of the normal vector \(\rho^{a}\) to \(\partial\mathcal{S}\) on \(\mathcal{S}\) as \(\rho^{AA^{\prime}}\) and assume that \(\rho_{AA^{\prime}}\rho^{AA^{\prime}}=-2\). The spinors \(\tau_{AA^{\prime}}\) and \(\rho_{AA^{\prime}}\) are orthogonal --i.e. \(\tau_{AA^{\prime}}\rho^{AA^{\prime}}=0\). We consider dyads \(\{o^{A},\iota^{A}\}\) such that \[\tau_{AA^{\prime}} =o_{A}\overline{o}_{A^{\prime}}+\iota_{A}\overline{\iota}_{A^{\prime}},\] \[\rho_{AA^{\prime}} =o_{A}\overline{o}_{A^{\prime}}-\iota_{A}\overline{\iota}_{A^{\prime}}.\] The spinor \(\tau_{AA^{\prime}}\) is used to construct a space-spinor version of a given spinor. In particular, \[\gamma_{AB}\equiv\tau_{(B}{}^{A^{\prime}}\rho_{A)A^{\prime}}\] is the space-spinor version of \(\rho_{AA^{\prime}}\), also called the _complex metric_. It satisfies \(\gamma_{A}{}^{B}\gamma_{B}{}^{C}=\delta_{A}{}^{C}\) and can be expressed as \[\gamma_{AB}=o_{A}\iota_{B}+o_{B}\iota_{A}\] with the use of the spin dyad. The spinorial counterpart of the projection operator \(\Pi_{a}{}^{b}\) onto the 2-dimensional surface \(\partial\mathcal{S}\) can now be defined as \[\Pi_{AA^{\prime}}{}^{BB^{\prime}}\equiv\delta_{A}{}^{B}\delta_{A^{\prime}}{}^{B^{\prime}}-\tfrac{1}{2}\tau_{AA^{\prime}}\tau^{BB^{\prime}}+\tfrac{1}{2}\rho_{AA^{\prime}}\rho^{BB^{\prime}}=\tfrac{1}{2}\left(\delta_{A}{}^{B}\delta_{A^{\prime}}{}^{B^{\prime}}-\gamma_{A}{}^{B}\overline{\gamma}_{A^{\prime}}{}^{B^{\prime}}\right). \tag{5}\] Similarly, the spinorial counterpart of the projector \(T_{AA^{\prime}BB^{\prime}}\) onto \(\mathcal{S}\) reads \[T_{AA^{\prime}}{}^{BB^{\prime}}\equiv\tfrac{1}{2}\left(\delta_{A}{}^{B}\delta_{A^{\prime}}{}^{B^{\prime}}-\tfrac{1}{2}\tau_{AA^{\prime}}\tau^{BB^{\prime}}\right).\] Let \(\nabla_{AA^{\prime}}\) be the spinorial counterpart of the spacetime covariant derivative \(\nabla_{a}\).
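As a quick consistency check of the dyad decomposition above (a minimal worked computation, assuming the standard normalisation \(o_{A}\iota^{A}=1\), so that \(\iota_{A}o^{A}=-1\), \(o_{A}o^{A}=\iota_{A}\iota^{A}=0\), and analogously for the primed conjugates), the stated normalisations are recovered directly:
\[\tau_{AA^{\prime}}\tau^{AA^{\prime}}=(o_{A}\iota^{A})(\overline{o}_{A^{\prime}}\overline{\iota}^{A^{\prime}})+(\iota_{A}o^{A})(\overline{\iota}_{A^{\prime}}\overline{o}^{A^{\prime}})=(1)(1)+(-1)(-1)=2,\]
\[\rho_{AA^{\prime}}\rho^{AA^{\prime}}=-(o_{A}\iota^{A})(\overline{o}_{A^{\prime}}\overline{\iota}^{A^{\prime}})-(\iota_{A}o^{A})(\overline{\iota}_{A^{\prime}}\overline{o}^{A^{\prime}})=-(1)(1)-(-1)(-1)=-2,\]
\[\tau_{AA^{\prime}}\rho^{AA^{\prime}}=-(o_{A}\iota^{A})(\overline{o}_{A^{\prime}}\overline{\iota}^{A^{\prime}})+(\iota_{A}o^{A})(\overline{\iota}_{A^{\prime}}\overline{o}^{A^{\prime}})=-(1)(1)+(-1)(-1)=0.\]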
The \(T_{AA^{\prime}}{}^{BB^{\prime}}\) projector allows to define the 3-dimensional Sen connection \(\mathcal{D}_{AA^{\prime}}\) associated to \(\nabla_{AA^{\prime}}\) as \[\mathcal{D}_{AA^{\prime}}\pi_{C}\equiv T_{AA^{\prime}}{}^{BB^{\prime}}\nabla_{ BB^{\prime}}\pi_{C}.\] As mentioned above, the \(SU(2,\mathbb{C})\) (i.e. space-spinor version) of \(\mathcal{D}_{AA^{\prime}}\) can be constructed by means of \(\tau^{AA^{\prime}}\) as \[\mathcal{D}_{AB}=\tau_{(B}{}^{A^{\prime}}\mathcal{D}_{A)A^{\prime}}.\] The space-spinor version of the 3-dimensional Levi-Civita connection on \(\mathcal{S}\) can be recovered form \(\mathcal{D}_{AB}\) via \[\nabla_{AB}\pi_{C}=\mathcal{D}_{AB}\pi_{C}-\tfrac{1}{2}K_{ABC}{}^{Q}\pi_{Q},\] where \(K_{ABCD}\equiv\tau_{D}{}^{C^{\prime}}\mathcal{D}_{AB}\tau_{CC^{\prime}}\) is the Weingarten spinor. It decomposes as \[K_{ABCD}=\Omega_{ABCD}-\tfrac{1}{3}K\epsilon_{A(C}\epsilon_{D)B},\] where \(\Omega_{ABCD}\equiv K_{(ABCD)}\) is its fully symmetrized part, and \(K\equiv K_{AB}{}^{AB}\) is the mean curvature of \(\mathcal{S}\). The 3-dimensional Levi-Civita operator satisfies \(\nabla_{AB}\epsilon_{CD}=0\). Given a spinor \(\pi_{A_{1}\ldots A_{K}}\), its _Hermitian conjugate_ is defined as follows, \[\widehat{\pi}_{A_{1}\ldots A_{K}}\equiv\tau_{A_{1}}{}^{A^{\prime}_{1}}\ldots \tau_{A_{k}}{}^{A^{\prime}_{k}}\,\overline{\pi}_{A^{\prime}_{1}\ldots A^{ \prime}_{k}}.\] A spinor \(\pi_{A_{1}\ldots A_{K}}\) is said to be real if \[\widehat{\pi}_{A_{1}B_{1}\ldots A_{k}B_{k}}{}^{C_{1}D_{1}\ldots C_{m}D_{m}}= (-1)^{(k+m)}\pi_{A_{1}B_{1}\ldots A_{k}B_{k}}{}^{C_{1}D_{1}\ldots C_{m}D_{m}}.\] The space counterpart of the Levi-Civita connection \(\nabla_{AB}\) is real in the sense that \(\widehat{\nabla_{AB}\pi_{C}}=-\nabla_{AB}\widehat{\pi}_{C}\), while \[\widehat{\mathcal{D}_{AB}\pi_{C}}=-\mathcal{D}_{AB}\widehat{\pi}_{C}+K_{ABC}{ }^{D}\widehat{\pi}_{D}.\] ### On the inner boundary A 2-dimensional Sen connection \(\not{\mathcal{D}}_{AA^{\prime}}\) on \(\partial\mathcal{S}\) arises as a \(\Pi\)-projection of \(\nabla_{AA^{\prime}}\), i.e. \[\not{\mathcal{D}}_{AA^{\prime}}\equiv\Pi_{AA^{\prime}}{}^{BB^{\prime}}\nabla_ {BB^{\prime}}, \tag{6}\] and its associated \(SU(2,\mathbb{C})\) version is given by \(\not{\mathcal{D}}_{AB}\equiv\tau_{(B}{}^{A^{\prime}}\not{\mathcal{D}}_{A)A^{ \prime}}\). It can be promoted to the 2-dimensional Levi-Civita connection \(\not{\nabla}_{AA^{\prime}}\) with the use of the transition spinor \(Q_{AA^{\prime}BC}\), \[\not{\nabla}_{AA^{\prime}}v_{BB^{\prime}}=\not{\mathcal{D}}_{AA^{\prime}}v_{ BB^{\prime}}-Q_{AA^{\prime}B}{}^{C}v_{CB^{\prime}}-\overline{Q}_{AA^{\prime}B^{ \prime}}{}^{C^{\prime}}v_{BC^{\prime}}, \tag{7}\] where \[Q_{AA^{\prime}BC}\equiv-\tfrac{1}{2}\gamma_{C}{}^{D}\not{\mathcal{D}}_{AA^{ \prime}}\gamma_{BD}. \tag{8}\] The \(\not{\nabla}_{AA^{\prime}}\) connection is torsion-free by definition, i.e. 
\(\left(\not{\nabla}_{AA^{\prime}}\not{\nabla}_{BB^{\prime}}-\not{\nabla}_{BB^{\prime}}\not{\nabla}_{AA^{\prime}}\right)\phi=0\), and its curvature spinor \(\not{\nabla}_{CC^{\prime}DD^{\prime}AA^{\prime}BB^{\prime}}\) can be defined with the use of the following relation \[\left(\not{\nabla}_{AA^{\prime}}\not{\nabla}_{BB^{\prime}}-\not{\nabla}_{BB^{\prime}}\not{\nabla}_{AA^{\prime}}\right)\pi^{C}=\not{\tau}{}^{C}{}_{QAA^{\prime}BB^{\prime}}\pi^{Q}=\left(m_{a}\overline{m}_{b}-\overline{m}_{a}m_{b}\right)\left(\rho\rho^{\prime}-\sigma\sigma^{\prime}+\Psi_{2}\right)\gamma^{CD}\pi_{D}, \tag{9}\] where \(m^{AA^{\prime}}\equiv o^{A}\overline{\iota}^{A^{\prime}}\), \(\overline{m}^{AA^{\prime}}\equiv\iota^{A}\overline{o}^{A^{\prime}}\), and \(\Psi_{2}\equiv\Psi_{ABCD}o^{A}o^{B}\iota^{C}\iota^{D}\) is a component of the Weyl spinor \(\Psi_{ABCD}\). Another 2-dimensional connection \((\not{D}_{AB})\) can be obtained by considering a space-spinor counterpart of \(\not{\nabla}_{AA^{\prime}}\), i.e. \[\not{D}_{AB}\equiv\tau_{(B}{}^{A^{\prime}}\not{\nabla}_{A)A^{\prime}}. \tag{10}\] It is particularly useful in some calculations and can be related to \(\not{\mathcal{D}}_{AB}\) via \[\not{D}_{AB}\pi_{C}=\not{\mathcal{D}}_{AB}\pi_{C}-Q_{AB}{}^{Q}{}_{C}\pi_{Q}, \tag{11}\] where the transition spinor is now given by \[Q_{AB}{}^{C}{}_{D}\equiv-\tfrac{1}{2}\gamma_{D}{}^{Q}\not{\mathcal{D}}_{AB}\gamma_{Q}{}^{C}=\sigma^{\prime}o_{A}o_{B}o_{C}o_{D}+\sigma\iota_{A}\iota_{B}\iota_{C}\iota_{D}-\rho o_{A}o_{B}\iota_{C}\iota_{D}-\rho^{\prime}\iota_{A}\iota_{B}o_{C}o_{D}. \tag{12}\] A natural choice for the ingoing and outgoing null vectors \(k^{a}\) and \(l^{a}\) spanning the normal bundle to \(\partial\mathcal{S}\) is given by \[k^{a}=\tfrac{1}{2}(\tau^{a}-\rho^{a}),\qquad l^{a}=\tfrac{1}{2}(\tau^{a}+\rho^{a}).\] The nature of a trapped surface is determined by the causal character and orientation of its mean curvature vector, or equivalently, by the signs of the associated inner and outer null expansions, \[\theta^{-}=\sigma^{ab}\nabla_{a}k_{b},\qquad\theta^{+}=\sigma^{ab}\nabla_{a}l_{b}.\] Making use of Proposition 4.14.2 in [20] we can express \(\theta^{-}\) and \(\theta^{+}\) in terms of GHP spin coefficients, \[\theta^{-}=-2\rho^{\prime},\qquad\theta^{+}=-2\rho.\] We are now ready to define a MOTS. **Definition 1**.: _The boundary \(\partial\mathcal{S}\) is said to be a marginally outer trapped surface (MOTS) if \(\theta^{+}=0\) and \(\theta^{-}\leq 0\), i.e. if \(\rho=0\) and \(\rho^{\prime}\geq 0\) on \(\partial\mathcal{S}\)._ The 2-dimensional connection \(\not{D}_{AB}\) annihilates \(\epsilon_{AB}\) and \(\gamma_{AB}\): \[\not{D}_{AB}\epsilon_{CD}=0,\quad\not{D}_{AB}\gamma_{CD}=0.\] However, \(\not{D}_{AB}\) is not a Levi-Civita connection on \(\partial\mathcal{S}\) as it has a non-vanishing torsion, \[\not{D}_{AB}\not{D}_{CD}\phi-\not{D}_{CD}\not{D}_{AB}\phi=\tfrac{1}{2}\left(A_{AB}\gamma_{C}{}^{X}\delta_{D}{}^{Y}-A_{CD}\gamma_{A}{}^{X}\delta_{B}{}^{Y}\right)\not{D}_{XY}\phi, \tag{13}\] where \(A_{AB}\) is defined as \[A_{AB}\equiv\tau^{CC^{\prime}}\not{D}_{AB}\rho_{CC^{\prime}}=2\left(\alpha+\overline{\beta}\right)o_{A}o_{B}-2\left(\overline{\alpha}+\beta\right)\iota_{A}\iota_{B}. \tag{14}\] This spinor is real, \(\widehat{A}_{AB}=-A_{AB}\), and satisfies \(\gamma^{AB}A_{AB}=0\). We can use it to recover a space-spinor version of the Levi-Civita connection \(\not{\nabla}_{AB}\), \[\not{\nabla}_{AB}\pi_{C}\equiv\not{D}_{AB}\pi_{C}-\tfrac{1}{4}A_{AB}\gamma_{C}{}^{D}\pi_{D}.
\tag{15}\] Indeed, \[\not{\nabla}_{AB}\epsilon_{CD}=\not{\nabla}_{AB}\gamma_{CD}=0,\quad\gamma^{AB}\not{\nabla}_{AB}\pi_{C}=0, \tag{16}\] and \(\not{\nabla}_{AB}\) has vanishing torsion. Moreover, \[\left(\not{\nabla}_{AC}\not{\nabla}_{B}{}^{C}-\not{\nabla}_{B}{}^{C}\not{\nabla}_{AC}\right)\pi^{B}=\tfrac{1}{2}\left(\Psi_{2}+\rho\rho^{\prime}-\sigma\sigma^{\prime}+\text{c.c.}\right)\pi_{A},\] where c.c. denotes the complex conjugation of the expression in the brackets above. Ultimately, the 2-dimensional Sen and Levi-Civita connections on the boundary are related in the following way, \[\not{D}_{AB}\phi_{C}=\not{\nabla}_{AB}\phi_{C}+\tfrac{1}{4}A_{AB}\gamma_{C}{}^{D}\phi_{D}+Q_{AB}{}^{L}{}_{C}\phi_{L}. \tag{17}\] The connection \(\not{\nabla}_{AB}\) is real, i.e. \[\widehat{\not{\nabla}_{AB}\pi_{C}}=-\not{\nabla}_{AB}\widehat{\pi}_{C},\] while, \[\widehat{\not{\mathcal{D}}_{AB}\pi_{C}}=-\not{\mathcal{D}}_{AB}\widehat{\pi}_{C}+\left(Q_{ABC}{}^{D}+\widehat{Q}_{ABC}{}^{D}\right)\widehat{\pi}_{D}+\tfrac{1}{2}\gamma_{C}{}^{D}\widehat{\pi}_{D}A_{AB}.\] In the following we will also require an expression for the Hermitian conjugate of the connection \(\not{D}_{AB}\). A direct computation yields \[\widehat{\not{D}_{AB}\pi_{C}}=-\not{D}_{AB}\widehat{\pi}_{C}+\tfrac{1}{2}\gamma_{C}{}^{D}\widehat{\pi}_{D}A_{AB}. \tag{18}\] In [24], Szabados proposed the following definition of quasilocal angular momentum associated with \(\partial\mathcal{S}\), \[O[N]\equiv-\frac{1}{2\kappa}\oint_{\partial\mathcal{S}}N^{c}A_{c}dS, \tag{19}\] where \(N^{a}\) is a divergence-free vector on \(\partial\mathcal{S}\), \(A_{c}=\rho_{a}\Pi^{f}{}_{c}\nabla_{f}\tau^{a}\) is the connection 1-form on the normal bundle of \(\partial\mathcal{S}\) and \(\kappa=8\pi G\) the gravitational coupling constant. After inspecting the definition (14) of the spinor \(A_{AB}\) we immediately see that it is in fact a space-spinor counterpart of \(A_{b}\) in the expression above. In the sequel we will also make use of a Hodge decomposition on \(\partial\mathcal{S}\). Specifically, given any 1-form \(V_{a}\) on \(\partial\mathcal{S}\) there exist two functions \(f\) and \(f^{\prime}\) such that \[V_{a}=\epsilon_{a}{}^{b}\not{\nabla}_{b}f+\not{\nabla}_{a}f^{\prime}.\] ## 3. Approximate twistor equation ### Setup Let \(\mathfrak{S}_{1}\), \(\mathfrak{S}_{3}\) be the spaces of symmetric valence \(1\) and \(3\) spinors over the hypersurface \(\mathcal{S}\).
The (overdetermined) spatial twistor operator can be defined as follows, \[\mathbf{T}:\mathfrak{S}_{1}\rightarrow\mathfrak{S}_{3},\qquad\mathbf{T}(\kappa)_{ABC}\equiv\mathcal{D}_{(AB}\kappa_{C)},\] and is a space-spinor counterpart of the twistor operator \(\nabla_{A^{\prime}(A}\kappa_{B)}\) (see [5] for more details). The formal adjoint of \(\mathbf{T}\) is given by \[\mathbf{T}^{*}:\mathfrak{S}_{3}\rightarrow\mathfrak{S}_{1},\qquad\mathbf{T}^{*}(\zeta)_{A}\equiv\mathcal{D}^{BC}\zeta_{ABC}-{\Omega_{A}}^{BCD}\zeta_{BCD},\] and allows one to define the _approximate twistor operator_ \(\mathbf{L}\equiv\mathbf{T}^{*}\circ\mathbf{T}:\mathfrak{S}_{1}\rightarrow\mathfrak{S}_{1}\), \[\mathbf{L}(\kappa)_{A}\equiv\mathcal{D}^{BC}\mathcal{D}_{(AB}\kappa_{C)}-{\Omega_{A}}^{BCD}\mathcal{D}_{BC}\kappa_{D}, \tag{23}\] which is formally self-adjoint --i.e. \(\mathbf{L}^{*}=\mathbf{L}\). Let \(\kappa_{A}\) be a solution of the approximate twistor equation \(\mathbf{L}(\kappa)_{A}=0\). The spinors \[\xi_{A}\equiv\tfrac{2}{3}{\mathcal{D}_{A}}^{Q}\kappa_{Q},\quad\xi_{ABC}\equiv\mathcal{D}_{(AB}\kappa_{C)}\] encode independent components of \(\mathcal{D}_{AB}\kappa_{C}\). Moreover, one has that \[\mathbf{L}(\widehat{\xi})_{A}=0.\] Given the set of asymptotically Cartesian coordinates \((x^{\alpha})\) on \(\mathcal{S}\), the position spinor can be defined as follows, \[x_{\mathbf{AB}}\equiv\frac{1}{\sqrt{2}}\left(\begin{array}{cc}x^{1}+\mathrm{i}x^{2}&-x^{3}\\ -x^{3}&-x^{1}+\mathrm{i}x^{2}\end{array}\right).\] We will consider a solution of the approximate twistor equation with an asymptotic behaviour of the form \[\kappa_{\mathbf{A}}=\bigg{(}1+\frac{m}{r}\bigg{)}x_{\mathbf{AB}}o^{\mathbf{B}}+o_{\infty}(r^{-1/2}). \tag{24}\] A direct computation shows that \[\xi_{\mathbf{A}}=\bigg{(}1-\frac{m}{r}\bigg{)}o_{\mathbf{A}}+o_{\infty}(r^{-3/2}), \tag{25a}\] \[\xi_{\mathbf{ABC}}=-\frac{3m}{2r^{3}}x_{(\mathbf{AB}}o_{\mathbf{C})}+o_{\infty}(r^{-5/2}). \tag{25b}\] As a consequence of the above asymptotic expansion of \(\kappa_{A}\), one arrives at the following inequality relating the ADM mass of \(\mathcal{S}\) and an integral of concomitants of the spinor \(\kappa_{A}\), provided that the inner boundary \(\partial\mathcal{S}\) is a MOTS [13], \[4\pi m\geq\oint_{\partial\mathcal{S}}n_{AB}\zeta_{C}\widehat{\mathcal{D}^{(AB}\zeta^{C)}}\mathrm{d}S, \tag{26}\] where \(n_{AB}\) is the outer directed (i.e. towards \(r=\infty\)) unit normal on \(\partial\mathcal{S}\) as a surface of \(\mathcal{S}\) and \(\zeta_{A}\equiv\widehat{\xi}_{A}\). In the sequel we will use a boundary condition for \(\kappa_{A}\) to refine the inequality (26). ### A boundary value problem for the approximate twistor equation Let \[\mathcal{D}_{A}{}^{Q}\kappa_{Q}=-\frac{3}{2}\widehat{\phi}_{A}\quad\text{on}\quad\partial\mathcal{S}, \tag{27}\] where \(\phi_{A}\) is a smooth spinorial field. The approximate twistor equation together with (27) satisfy the _Lopatinskij-Shapiro_ compatibility conditions (see e.g. [7, 28]). This implies that the associated boundary value problem is elliptic. Moreover, the decay conditions (4a) and (4b) for the first and second fundamental forms of the initial data make the approximate twistor operator \(\mathbf{L}\) asymptotically homogeneous.
In the sequel we will make use of an operator \(\mathbf{B}\), defined in the following way, \[\mathbf{B}:\mathfrak{S}_{1}\to\mathfrak{S}_{1},\qquad\mathbf{B}(\kappa)_{A}\equiv-\sqrt{2}\gamma_{A}{}^{P}\xi_{P}=-\frac{2\sqrt{2}}{3}\gamma_{A}{}^{P}\mathcal{D}^{Q}{}_{P}\kappa_{Q}.\] The equation (27) now becomes \[\mathbf{B}(\kappa)_{A}|_{\partial\mathcal{S}}=\sqrt{2}\gamma_{A}{}^{P}\widehat{\phi}_{P},\] and the associated boundary value problem is \[\mathbf{L}(\kappa)_{A}=0,\quad\mathbf{B}(\kappa)_{A}|_{\partial\mathcal{S}}=\sqrt{2}\gamma_{A}{}^{P}\widehat{\phi}_{P}. \tag{28}\] To discuss the solvability of (28) one has to look at the adjoint operators \(\mathbf{L}^{*}\) and \(\mathbf{B}^{*}\). A similar computation as in [13] (in this case the extrinsic geometry of the boundary is non-trivial) leads to the following: **Proposition 1**.: _If \(\partial\mathcal{S}\) is a MOTS, then the boundary value problem_ \[\mathbf{L}(\kappa)_{A}=0,\qquad\mathbf{B}(\kappa)_{A}|_{\partial\mathcal{S}}=\sqrt{2}\gamma_{A}{}^{P}\widehat{\phi}_{P},\] _with a smooth spinorial field \(\phi_{A}\) over \(\partial\mathcal{S}\) admits a unique solution of the form_ \[\kappa_{A}=\hat{\kappa}_{A}+\theta_{A},\qquad\theta_{A}\in H^{2}_{-1/2}, \tag{29}\] _with \(\hat{\kappa}_{A}\) given by the leading term in (24) and where \(H^{s}_{\beta}\) with \(s\in\mathbb{Z}^{+}\) and \(\beta\in\mathbb{R}\) denotes the weighted \(L^{2}\) Sobolev spaces._ ### Inequality with the connection 1-form on the normal bundle of the boundary The boundary condition (27) allows one to simplify the inequality (26) to the following form \[4\pi m\geq\sqrt{2}\oint_{\partial\mathcal{S}}\widehat{\phi}^{A}\gamma_{A}{}^{B}\not{\mathcal{D}}_{BC}\phi^{C}dS, \tag{30}\] where \(\phi_{A}\) is free data in the boundary value problem (28). All quantities in the integral are now intrinsic to the boundary. The relation (17) between the 2-dimensional Sen and Levi-Civita connections implies that \[\widehat{\phi}^{A}\gamma_{A}{}^{B}\not{\mathcal{D}}_{BC}\phi^{C}=\widehat{\phi}^{A}\gamma_{A}{}^{B}(\not{\nabla}_{BC}\phi^{C})+\tfrac{1}{4}\widehat{\phi}^{A}\gamma_{A}{}^{B}A_{BC}\,\gamma^{CL}\phi_{L}+\widehat{\phi}^{A}\gamma_{A}{}^{B}Q_{BC}{}^{LC}\phi_{L}.\] Using (12), the GHP expression for \(Q_{ABCD}\), it is easy to see that \(Q_{AB}{}^{CB}=\rho^{\prime}\iota_{A}o^{C}\) on a MOTS. This can be combined with the fact that \(\gamma_{A}{}^{B}A_{B}{}^{C}\gamma_{C}{}^{D}=-A_{A}{}^{D}\) to yield \[4\pi m\geq -\sqrt{2}\oint_{\partial\mathcal{S}}\widehat{\phi}^{A}\gamma_{A}{}^{B}(\not{\nabla}_{B}{}^{C}\phi_{C})\,dS+\sqrt{2}\oint_{\partial\mathcal{S}}\rho^{\prime}|\phi_{0}|^{2}dS-\frac{1}{2\sqrt{2}}\oint_{\partial\mathcal{S}}A_{AB}\widehat{\phi}^{A}\phi^{B}dS. \tag{31}\] It should be noted that since \(A_{AB}\) is intrinsic to \(\partial\mathcal{S}\) the last term can be expressed as \[-\frac{1}{2\sqrt{2}}\oint_{\partial\mathcal{S}}\widehat{\phi}^{A}\phi^{B}\sigma_{AB}{}^{CD}A_{CD}dS, \tag{32}\] where \(\sigma_{ABCD}=\frac{1}{2}(\epsilon_{AC}\epsilon_{BD}\,+\gamma_{AC}\gamma_{BD})\) is the spinorial counterpart of the 2-dimensional metric. ## 4. Mass-quasilocal angular momentum inequality In this section we present the main result of this article - the mass-quasilocal angular momentum inequality for the initial data \((\mathcal{S},h_{ab},K_{ab})\). It is based on a simplification of (31) under a suitable choice of the boundary spinor \(\phi_{A}\).
A natural condition for \(\phi_{A}\) arises after inspecting the first term on the right-hand side of (31) - its \(2\)-dimensional Dirac derivative should be controlled. Indeed, we will proceed with the following choice, \[\not{\nabla}_{A}{}^{B}\phi_{B}=i\frac{\lambda}{\Omega}\phi_{A}, \tag{33}\] where \(\Omega\) is a conformal factor relating a metric \(\sigma_{ab}\) on \(\partial\mathcal{S}\) with that on a round sphere \(\mathbb{S}^{2}\). It can be shown that with a suitable choice of the conformal rescaling of the spin basis the equation (33) corresponds to a Dirac eigenproblem on \(\mathbb{S}^{2}\) (see Subsection 2.3 for more details). The inequality (31) can now be simplified with the use of (33) to the following form, \[4\pi m\geq\sqrt{2}\oint_{\partial\mathcal{S}}\rho^{\prime}|\phi_{0}|^{2}dS-\frac{1}{2\sqrt{2}}\oint_{\partial\mathcal{S}}\widehat{\phi}^{A}\phi^{B}\sigma_{AB}{}^{CD}A_{CD}\,dS, \tag{34}\] where the reality of the ADM mass \(m\) has been used to eliminate a (purely imaginary) term with the eigenvalue \(\lambda\), i.e. \[\lambda\oint_{\partial\mathcal{S}}\left(|\phi_{0}|^{2}-|\phi_{1}|^{2}\right)\Omega^{-1}dS=0. \tag{35}\] To make a connection between the second term on the right-hand side of (34) and the quasilocal angular momentum (19) we will introduce a spinor \(N^{AB}\), defined as follows, \[N^{AB}\equiv\sigma^{ABCD}\phi_{C}\widehat{\phi}_{D}=\phi_{0}\overline{\phi}_{1}\iota^{A}\iota^{B}-\phi_{1}\overline{\phi}_{0}o^{A}o^{B}. \tag{36}\] One can verify that \(N^{AB}\) is real, i.e. \(\widehat{N}^{AB}=-N^{AB}\), so it corresponds to a real \(3\)-vector. Moreover, \(\gamma^{AB}N_{AB}=0\) and \[\not{\nabla}_{a}N^{a} = \not{\nabla}_{AB}\left(\sigma^{ABCD}\phi_{C}\widehat{\phi}_{D}\right)=\not{\nabla}_{AB}\left(\phi^{A}\widehat{\phi}^{B}\right)\] \[= -(\not{\nabla}_{B}{}^{A}\phi_{A})\widehat{\phi}^{B}+\phi^{A}(\not{\nabla}_{A}{}^{B}\widehat{\phi}_{B})=0, \tag{37}\] where (33) has been used in the last equality. Hence, \(N^{a}\) is intrinsic to the boundary \(\partial\mathcal{S}\) and \(\not{\nabla}\)-divergence-free, so we can identify it with the vector \(N^{a}\) generating the quasilocal angular momentum (19). With this choice the inequality (34) yields \[4\pi m\geq\sqrt{2}\oint_{\partial\mathcal{S}}\rho^{\prime}|\phi_{0}|^{2}dS+\frac{\kappa}{\sqrt{2}}O\left[\sigma^{ABCD}\phi_{C}\widehat{\phi}_{D}\right]. \tag{38}\] In the remainder of this section we will simplify (38) and express it in terms of integrals over a round sphere \(\mathbb{S}^{2}\) and the eigenspinor of the \(\mathbb{S}^{2}\)-Dirac operator. The Hodge decomposition can be applied to the connection \(1\)-form on the normal bundle of the boundary \(A_{b}\) to yield \(A_{b}=\epsilon_{b}{}^{c}\not{\nabla}_{c}U+\not{\nabla}_{b}U^{\prime}\), where \(U\) is a rotation potential. This allows us to simplify the quasilocal angular momentum term from (38), i.e. \[\oint_{\partial\mathcal{S}}N^{a}A_{a}dS=\oint_{\partial\mathcal{S}}U\epsilon^{ab}\not{\nabla}_{a}N_{b}dS. \tag{39}\] The spinorial counterpart of the volume element \(\epsilon_{ab}\) of \(\partial\mathcal{S}\) is \(\epsilon_{ABCD}=\frac{i}{2}\left(\epsilon_{AC}\gamma_{BD}+\epsilon_{BD}\gamma_{AC}\right)\), and \[\epsilon^{ab}\not{\nabla}_{a}N_{b}=\epsilon^{ABCD}\not{\nabla}_{AB}N_{CD}=2\lambda\Omega\gamma^{AB}\phi_{A}\widehat{\phi}_{B}=2\lambda\Omega(|\phi_{0}|^{2}-|\phi_{1}|^{2}), \tag{40}\] where the definition (36) has been taken into account.
Inserting this expression into (39) and using \(dS=\Omega^{2}d\mathbb{S}^{2}\) yields \[\oint_{\partial\mathcal{S}}N^{a}A_{a}dS=2\lambda\oint_{\mathbb{S}^{2}}U\left(| \widetilde{\phi}_{0}|^{2}-|\widetilde{\phi}_{1}|^{2}\right)d\mathbb{S}^{2}, \tag{41}\] where \(\widetilde{\phi}_{\boldsymbol{A}}=\sqrt{\Omega}\,\phi_{\boldsymbol{A}}\). The relation between the volume elements of \(\partial\mathcal{S}\) and \(\mathbb{S}^{2}\) can also be utilized to write the first term on the right-hand side of (38) in terms of an integral over \(\mathbb{S}^{2}\). Ultimately, \[4\pi m\geq\sqrt{2}\oint_{\mathbb{S}^{2}}\rho^{\prime}|\widetilde{\phi}_{0}|^{2 }\Omega d\mathbb{S}^{2}+\frac{\kappa}{\sqrt{2}}O\left[\widetilde{\phi},U\right], \tag{42}\] where \[O\left[\widetilde{\phi},U\right]\equiv-\frac{\lambda}{\kappa}\oint_{\mathbb{S }^{2}}U\left(|\widetilde{\phi}_{0}|^{2}-|\widetilde{\phi}_{1}|^{2}\right)d \mathbb{S}^{2},\] i.e. the quasilocal angular momentum term can now be written only in terms of the geometry of \(\mathbb{S}^{2}\) and the rotation potential \(U\). ## 5. Axisymmetric inner boundary and the Dirac-Killing system. A natural assumption associated with the existence of a well-defined angular momentum is that the initial data is axisymmetric, i.e. there exists 1-form \(v_{a}\) such that \[\nabla_{(a}v_{b)}=0\quad\text{on}\quad\mathcal{S}.\] If the inner boundary \(\partial\mathcal{S}\) is invariant under the action of the 1-parameter group of isometries generated by \(v_{a}\), then \(v_{a}=\Pi_{a}{}^{b}v_{b}\) (\(v_{a}\) is intrinsic to \(\partial\mathcal{S}\)) and the projection of the Killing equation gives \[\not{\nabla}_{(a}v_{b)}=0\implies\not{\nabla}_{a}v^{a}=0.\] This suggests that a natural choice for the vector \(N^{a}\) defining the quasilocal angular momentum (19) is that it arises as a solution to the boundary Killing equation, i.e. \(\not{\nabla}_{(a}N_{b)}=0\). However, \(N^{a}\) has already been constructed from a spinor \(\phi_{A}\) satisfying a first-order Dirac-type equation (33) on \(\partial\mathcal{S}\). Hence, a natural question arises --can such \(N_{a}\) be also a Killing vector of the boundary? We will show that this cannot be the case on a generic \(\partial\mathcal{S}\). In the sequel we will use an adapted system of coordinates \((\psi,\varphi)\) on the boundary, such that its metric \(\sigma_{ab}\) can be written in the following form, \[\sigma=-R^{2}\left(\tfrac{1}{F^{2}}d\psi\otimes d\psi+F^{2}d\varphi\otimes d \varphi\right),\quad\psi\in[\psi_{0},\psi_{1}],\quad\varphi\in[0,2\pi],\] where \(F=F(\psi)\), \(R\) is a constant and the axisymmetric Killing vector is now proportional to \(\partial_{\varphi}\). To avoid the conical singularities on the poles we will assume that \(F(\psi_{0})=F(\psi_{1})=0\). The NP operators \(\delta\) and \(\overline{\delta}\) reduce to \[\delta=\tfrac{1}{\sqrt{2R}}\left(F\partial_{\psi}+\tfrac{i}{F}\partial_{ \varphi}\right),\qquad\overline{\delta}=\tfrac{1}{\sqrt{2R}}\left(F\partial_{ \psi}-\tfrac{i}{F}\partial_{\varphi}\right),\] in this setting. Moreover, \(\alpha-\overline{\beta}=\tfrac{1}{\sqrt{2R}}\partial_{\psi}F\) (see [6] for details). 
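For orientation, a simple worked example of this gauge (an illustration, not a restriction of the setting): the round sphere of radius \(R\) is obtained with \(\psi=-\cos\theta\) and \(F(\psi)=\sqrt{1-\psi^{2}}\), \(\psi\in[-1,1]\), since then \[\tfrac{1}{F^{2}}d\psi\otimes d\psi+F^{2}d\varphi\otimes d\varphi=d\theta\otimes d\theta+\sin^{2}\theta\,d\varphi\otimes d\varphi,\] and \(F\) indeed vanishes at the poles \(\psi_{0}=-1\), \(\psi_{1}=1\), as required by the regularity condition above.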
A straightforward computation yields \[\not{\nabla}_{AB}N_{CD}+\not{\nabla}_{CD}N_{AB} = 2\eth^{\prime}\left(\overline{\phi}_{0}\phi_{1}\right)o_{A}o_{B}o_{C}o_{D}-\left(\eth\left(\overline{\phi}_{0}\phi_{1}\right)+\eth^{\prime}\left(\phi_{0}\overline{\phi}_{1}\right)\right)o_{A}o_{B}\iota_{C}\iota_{D}\] \[-\left(\eth\left(\overline{\phi}_{0}\phi_{1}\right)+\eth^{\prime}\left(\phi_{0}\overline{\phi}_{1}\right)\right)\iota_{A}\iota_{B}o_{C}o_{D}+2\eth\left(\phi_{0}\overline{\phi}_{1}\right)\iota_{A}\iota_{B}\iota_{C}\iota_{D},\] where we have used the decomposition of the vector \(N^{a}\) in terms of the spinor \(\phi_{A}\) in accordance with (36). Ultimately, the condition \(\not{\nabla}_{(a}N_{b)}=0\) implies that \[\phi_{0}\overline{\phi}_{1}=icF,\quad c\in\mathbb{R}. \tag{43}\] Additionally, the spinor \(\phi_{A}\) satisfies a first-order Dirac-type equation (33), which can now be written as \[\begin{split} F\partial_{\psi}\phi_{1}+\tfrac{\phi_{1}}{2}\partial_{\psi}F-\tfrac{i\sqrt{2}\lambda R}{\Omega}\phi_{0}&=0,\\ F\partial_{\psi}\phi_{0}+\tfrac{\phi_{0}}{2}\partial_{\psi}F-\tfrac{i\sqrt{2}\lambda R}{\Omega}\phi_{1}&=0.\end{split} \tag{44}\] After multiplying the first equation by \(\overline{\phi}_{1}\) and the second by \(\overline{\phi}_{0}\) and performing some manipulations one arrives at \[\begin{split}\partial_{\psi}\left(F|\phi_{1}|^{2}\right)+\tfrac{2\sqrt{2}\lambda RcF}{\Omega}&=0,\\ \partial_{\psi}\left(F|\phi_{0}|^{2}\right)-\tfrac{2\sqrt{2}\lambda RcF}{\Omega}&=0,\end{split}\] where (43) has been used. Hence \[|\phi_{0}|^{2}=\frac{c_{0}}{F}+\frac{2\sqrt{2}\lambda Rc}{F}\int\limits_{\psi_{0}}^{\psi}\frac{F}{\Omega}dz,\quad|\phi_{1}|^{2}=\frac{c_{1}}{F}-\frac{2\sqrt{2}\lambda Rc}{F}\int\limits_{\psi_{0}}^{\psi}\frac{F}{\Omega}dz, \tag{45}\] for some \(c_{0},c_{1}\in\mathbb{R}\). On the other hand, one can apply \(F\partial_{\psi}\) to both sides of (43) and use (44) to get \[\sqrt{2}\lambda R\left(|\phi_{1}|^{2}-|\phi_{0}|^{2}\right)=2c\Omega F\partial_{\psi}F.\] After using (45) we obtain the following compatibility condition for \(F\), \[\sqrt{2}\lambda R\bigg{(}\frac{c_{1}-c_{0}}{F}-\frac{4\sqrt{2}\lambda Rc}{F}\int\limits_{\psi_{0}}^{\psi}\frac{F}{\Omega}dz\bigg{)}=2c\Omega F\partial_{\psi}F. \tag{46}\] Multiplying the above relation by \(F\) and differentiating with respect to \(\psi\) we arrive at \[c\left[\partial_{\psi}\left(\Omega F^{2}\partial_{\psi}F\right)+4R^{2}\lambda^{2}\Omega^{-1}F\right]=0, \tag{47}\] where \(F(\psi_{0})=0\) has been used. This equation implies that the metric of the inner boundary (via the function \(F\)) depends on the choice of the eigenvalue \(\lambda\). This cannot be the case, as the former arises as part of the fixed geometric data associated with the initial hypersurface and the latter from the first-order Dirac-type equation, which is an auxiliary condition used to simplify the mass inequality. Hence, the only way to solve (47) is to assume that \(c=0\). In this case \(\phi_{A}=0\) (by (45), regularity at the poles, where \(F\) vanishes, forces \(c_{0}=c_{1}=0\)) and the right-hand side of the mass-quasilocal angular momentum inequality (38) vanishes. ## 6. Conclusions We have obtained a new bound for the ADM mass of _asymptotically Schwarzschild_ initial data in terms of the future inner null expansion of the inner MOTS boundary and its quasilocal angular momentum. Our approach bears similarities to the one presented in [13], but we extend it here to allow for boundaries with nontrivial extrinsic geometry.
An expression for quasilocal angular momentum (in the sense of Szabados [24]) has been recovered in the bound for the ADM mass by assuming a specific type of boundary condition for the approximate twistor equation: a spinor \(\phi_{A}\) solving a first-order Dirac-type equation. The strategy developed in this work could also be applied to obtain Penrose-type inequalities with different types of asymptotics (e.g. asymptotically hyperbolic) --as long as the concept of quasilocal angular momentum is well-defined. ### Acknowledgments This project began during JK's visit to the School of Mathematical Sciences at Queen Mary University of London. He would like to express his gratitude to the school for its hospitality and acknowledge the support provided by the Polish National Science Centre through the MINIATURA project No. 2021/05/X/ST2/01151.
2307.02051
Flowchase: a Mobile Application for Pronunciation Training
In this paper, we present a solution for providing personalized and instant feedback to English learners through a mobile application, called Flowchase, that is connected to a speech technology able to segment and analyze segmental and supra-segmental speech features. The speech processing pipeline receives linguistic information corresponding to an utterance to analyze along with a speech sample. After validation of the speech sample, a joint forced-alignment and phonetic recognition is performed thanks to a combination of machine learning models based on speech representation learning that provides the information necessary for designing feedback on a series of segmental and supra-segmental pronunciation aspects.
Noé Tits, Zoé Broisson
2023-07-05T06:32:42Z
http://arxiv.org/abs/2307.02051v1
# Flowchase: a Mobile Application for Pronunciation Training ###### Abstract In this paper, we present a solution for providing personalized and instant feedback to English learners through a mobile application, called Flowchase, that is connected to a speech technology able to segment and analyze segmental and supra-segmental speech features. The speech processing pipeline receives linguistic information corresponding to an utterance to analyze along with a speech sample. After validation of the speech sample, a joint forced-alignment and phonetic recognition is performed thanks to a combination of machine learning models based on speech representation learning that provides the information necessary for designing feedback on a series of segmental and supra-segmental pronunciation aspects. **Index Terms**: pronunciation training, language learning, speech analysis, machine learning, transfer learning, human-computer interaction ## 1 Introduction In the field of Computer-Assisted Language Learning (CALL), there are nowadays still very few solutions focusing on oral skills, and specifically on pronunciation. Computer-Assisted Pronunciation Training (CAPT) is an important research discipline, but there is a lack of concrete applications, although explicit focus on pronunciation, when combined with the use of technologies, has a significant impact on L2 learners' pronunciation [11, 2]. A reason for this situation is the gap in complexity between developing feedback on written, reading or listening skills compared to spoken skills. Indeed, for the first three skill sets, implementing simple heuristics based on multiple answer exercises, or matching a user answer to a gold standard, is straightforward. On the contrary, providing feedback on spoken skills is not. A speech technology tailored to analyzing segmental and supra-segmental patterns is necessary. The techniques for mispronunciation detection have closely followed developments in the speech recognition area, from HMM-GMM [3], to DNN-HMM [4] and more recently, transformers [5]. Indeed, the tasks share a strong common characteristic: extracting from audio a representation of human speech, be it text or phonetics. Transfer Learning [6] is today a widely used technique in Deep Learning for leveraging models trained on related tasks for which abundant datasets exist towards tasks for which only little data is available. This principle has been applied successfully to speech technology applications [7] with little available data, such as speech recognition for low resource languages, emotion recognition in speech [8], emotional or expressive speech synthesis [9, 10] or voice conversion [11], and also to pronunciation assessment [12]. A specific form of Transfer Learning that was shown to be very efficient is self-supervised learning, where a model is trained to learn representations of input data without the need for explicit supervision. In this paper, we present a complete system able to provide pronunciation training based on a speech technology built on top of a wav2vec2 [13] model adapted for mispronunciation detection, integrated into a mobile application. Although the application contains a mix of tutorials, listening activities and speaking activities, we focus here on the description of the speaking activities that involve the speech processing pipeline for analyzing English learners' pronunciation and providing feedback. ## 2 System Figure 1 describes the main steps of the user experience inside a speaking exercise of a learning program.
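Before walking through the figure, the round trip of a speaking activity can be sketched in code-like form. The snippet below is a minimal illustrative sketch only: the function names, data fields and thresholds are hypothetical and do not correspond to the actual Flowchase implementation or API.

```python
# Illustrative sketch of a speaking-activity round trip (hypothetical names/thresholds).
from dataclasses import dataclass

@dataclass
class PhonemeSegment:
    expected: str      # expected phoneme
    predicted: str     # phoneme recognized by the model
    start: float       # start time (s)
    end: float         # end time (s)
    posterior: float   # model posterior probability

def validate_sample(duration_s: float, voiced_ratio: float, phonetic_overlap: float) -> bool:
    """Toy stand-in for the validation step: plausible speech rate, voiced content,
    and rough agreement with the expected phonetic content."""
    return 0.5 <= duration_s <= 20.0 and voiced_ratio > 0.2 and phonetic_overlap > 0.5

def feedback_cards(segments: list[PhonemeSegment]) -> list[str]:
    """Turn segment-level analysis results into simple feedback messages."""
    cards = []
    for seg in segments:
        if seg.expected != seg.predicted and seg.posterior > 0.6:
            cards.append(f"Expected /{seg.expected}/ but heard /{seg.predicted}/ "
                         f"around {seg.start:.2f}-{seg.end:.2f}s.")
    return cards or ["Well done!"]

if __name__ == "__main__":
    segs = [PhonemeSegment("i:", "I", 0.10, 0.25, 0.85),
            PhonemeSegment("t", "t", 0.25, 0.32, 0.97)]
    if validate_sample(duration_s=1.2, voiced_ratio=0.6, phonetic_overlap=0.9):
        print("\n".join(feedback_cards(segs)))
```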
First, the exercise data is shown to the user. Specifically, it shows an English utterance that the user is expected to say, with a pronunciation guide to help him understand how it has to be pronounced. The pronunciation of the sentence can also be heard thanks to a set of different actor recordings with different English variations. On this screen, the user can record himself. Then the audio recording is sent to the speech technology backend along with the exercise information in order to perform segmentation and analysis of the speech sample. From this analysis, several pieces of information are extracted depending on the pronunciation aspect analyzed. In the second screen, feedback cards are shown to the user in order to communicate the result of the analysis, and advice in order to improve.

Figure 1: Sequence of steps in a Speaking Activity

Figure 2 details the processing steps happening in the second step explained above. The speech analysis takes as inputs the exercise information, such as the phonetic content, and the English learner's speech sample. The user recording has first to be validated thanks to a series of tests on the audio that check whether it is a valid speech sample, including:

* the duration of the audio is plausible for a human speech rate given the expected utterance
* the speech sample contains voiced content
* the phonetic content detected in the speech is sufficiently close to the expected phonetic content

If the speech sample is validated, a combination of machine learning models based on speech representation learning is used for performing a forced-alignment between the speech sample and the phonetic transcription in order to extract the start and end timings of each phoneme of the sequence. The machine learning model also analyzes the phonetic content of the audio and allows us to extract information related to a set of different pronunciation aspects such as analysis of vowels or consonants, and specifically analyzing minimal pairs, as shown in Figure 3, analysis of intonation such as word stress or sentence stress, and other supra-segmental aspects like an analysis of pauses between breath groups in an utterance. An example of analysis results on a word from a sentence is shown in Figure 3. Expected phonemes and predicted phonemes are extracted along with the start and end timings, as well as the respective posterior probabilities according to the statistical model. ## 3 Conclusions In this paper, we presented Flowchase, a mobile application for personalized pronunciation training that utilizes a speech technology pipeline for analyzing English learners' pronunciation and providing instant feedback. We employed transfer learning and self-supervised learning techniques to build a speech technology model for detecting mispronunciations based on the wav2vec2 architecture. The system provides feedback on both segmental and supra-segmental aspects of pronunciation. Our solution addresses the gap in current computer-assisted language learning applications, which mostly focus on written, reading, or listening skills. Flowchase provides a tool for improving oral language skills, particularly pronunciation, which is crucial for effective communication. Future work includes testing the effectiveness of the application and the speech technology pipeline in real-world settings and extending the system to support other languages.
## 4 Acknowledgements This work is part of the project _REDCALL_ that is partially funded by a FIRST Entreprise Docteur program from SPW Recherche1 Footnote 1: [https://recherche.wallonie.be/](https://recherche.wallonie.be/)
2307.01558
Scalable variable selection for two-view learning tasks with projection operators
In this paper we propose a novel variable selection method for two-view settings, or for vector-valued supervised learning problems. Our framework is able to handle extremely large-scale selection tasks, where the number of data samples can reach millions. In a nutshell, our method performs variable selection by iteratively selecting variables that are highly correlated with the output variables, but which are not correlated with the previously chosen variables. To measure the correlation, our method uses the concept of projection operators and their algebra. With the projection operators the relationship (correlation) between sets of input and output variables can also be expressed by kernel functions, thus nonlinear correlation models can be exploited as well. We experimentally validate our approach, showing on both synthetic and real data its scalability and the relevance of the selected features. Keywords: Supervised variable selection, vector-valued learning, projection-valued measure, reproducing kernel Hilbert space
Sandor Szedmak, Riikka Huusari, Tat Hong Duong Le, Juho Rousu
2023-07-04T08:22:05Z
http://arxiv.org/abs/2307.01558v1
# Scalable variable selection for two-view learning tasks with projection operators ###### Abstract In this paper we propose a novel variable selection method for two-view settings, or for vector-valued supervised learning problems. Our framework is able to handle extremely large-scale selection tasks, where the number of data samples can reach millions. In a nutshell, our method performs variable selection by iteratively selecting variables that are highly correlated with the output variables, but which are not correlated with the previously chosen variables. To measure the correlation, our method uses the concept of projection operators and their algebra. With the projection operators the relationship (correlation) between sets of input and output variables can also be expressed by kernel functions, thus nonlinear correlation models can be exploited as well. We experimentally validate our approach, showing on both synthetic and real data its scalability and the relevance of the selected features. **Keywords:** Supervised variable selection, vector-valued learning, projection-valued measure, reproducing kernel Hilbert space ## 1 Introduction Vector-valued, or more generally structured output learning tasks arising from various domains have attracted much research attention in recent years (Micchelli and Pontil, 2005; Deshwal et al., 2019; Brogat-Motte et al., 2022). For both supervised but also unsupervised learning approaches, multi-view data has been of interest (Hotelling, 1936; Xu et al., 2013; Minh et al., 2016a). Despite many successful approaches for various multi-view and vector-valued learning settings, incorporating interpretability into these models has received less attention. While there are various feature selection and dimensionality reduction methods either for scalar-valued learning tasks, or unsupervised methods for data represented in a single view (Zebari et al., 2020; Li et al., 2017; Anette and Nokto, 2018; Bommert et al., 2020), there is a scarcity of methods suitable for cases where data is represented in two views, or arises from a vector-valued learning task. From the point of view of interpretability, especially feature selection methods are advantageous over dimensionality reduction since the relevant features are directly obtained as a result and not given only in (linear) combinations. Recently, some feature selection methods have been proposed for structured output learning tasks. (Brouard et al., 2022) proposed a kernel-based non-linear feature selection model relying on sparsity regularization. Another supervised feature selection approach based on kernel methods is introduced in (Song et al., 2012), this one relying instead on forward- and backward-selection ideology. In addition, (Jordan et al., 2021) discusses feature selection in conjunction with kernel-based models, obtaining sparsity implicitly via the loss function without an explicit regularization term. An alternative, spline-based approach to non-linear feature selection is proposed by (Boyd et al., 2018). These methods, relying on the kernel evaluations between data samples for both inputs and outputs, tend not to scale very well to large sample sizes. In this paper, we introduce a novel variable selection approach for vector-valued, or two-view learning tasks, including CCA. Our method is based on efficient iterative computation of projections of the input variables onto the intersection of the space spanned by the output variables and the orthogonal complement of the previously selected input variables.
In this space, the input variables are then selected by a correlation-based criterion. Going one step further, we also exploit a kernel-based representation of the variables, allowing us to capture complex, non-linear relationships. Here, we consider the kernelised representation of the variables instead of data samples - in essence, we model the co-variance on the features in the Hilbert space induced by the kernel. Notably, both input and output features are captured with the same kernelisation. This is in stark contrast to other proposed kernel-based feature selection approaches in the literature, where separate kernels are used for data samples in input and output spaces (Brouard et al., 2022; Song et al., 2012; Jordan et al., 2021). We can more readily draw comparisons to canonical correlation analysis (CCA) and its kernelized version, where the correlations are computed between two sets of variables instead of pairs of individual ones (Bie et al., 2005). For many approaches, scalability in feature selection can be a challenge when the data dimensionality is extremely large. Some supervised linear feature selection models adapted to this setting are proposed in (Fan et al., 2009; Aghazadeh et al., 2018; Valcarci et al., 2022). We note that all these methods are for the supervised setting, but with scalar-valued output variables. While scalability w.r.t. the feature dimensionality is often considered due to motivations arising from fat data, the scalability to large sample sizes is less focused on. Traditionally, kernelized algorithms, while powerful, are very poorly scalable due to the dependence on the kernel matrix, especially if its inverse is required. Contrary to the usual situation, by leveraging the recursive formulation of our algorithm and a trick with singular value decomposition on the variable representation, our approach is extremely scalable to large sample sizes - which we also demonstrate experimentally in Section 5. To summarize, our main contributions in this paper are as follows:

* we propose the projective selection (ProjSe) algorithm, a novel approach for variable selection for vector-valued or two-view learning problems that is based on projection operators. In ProjSe the result of the feature selection only depends on the subspace spanned by the outputs, not on the specific values (invariance).
* our proposed iterative method offers high scalability even for the kernelised formulation capturing non-linearities in the data, due to a trick with singular value decomposition applied to the feature representation.
* we experimentally validate the proposed approach, showing both relevance of the selected features and the efficiency of the algorithm.

\begin{table} \begin{tabular}{l l} \(\mathcal{H}\) & is a Hilbert space; unless otherwise noted, it has finite dimension \(d\). \\ \(\oplus\) & denotes the direct sum of subspaces \\ \(\mathbf{I}\) & is the identity operator acting on \(\mathcal{H}\). \\ \(\mathcal{L}\) & is an arbitrary subspace of \(\mathcal{H}\). \\ \(\mathcal{L}_{\mathbf{X}}\) & is a subspace of \(\mathcal{H}\), spanned by the columns of matrix \(\mathbf{X}\). \\ \(\mathcal{L}^{\perp}\) & is a subspace of \(\mathcal{H}\), the orthogonal complement of \(\mathcal{L}\). \\ \(\mathbf{P}_{\mathcal{L}}\) & is an orthogonal projection operator into subspace \(\mathcal{L}\), \(\mathbf{P}_{\mathcal{L}}:\mathcal{H}\rightarrow\mathcal{L}\) \\ \(\mathcal{L}_{\mathbf{P}}\) & is the subspace corresponding to the projection operator \(\mathbf{P}\).
\\ \(\mathbf{P}_{\mathcal{L}^{\perp}}\) & is an orthogonal projection operator into the orthogonal complement of subspace \(\mathcal{L}\). \\ \(\mathbf{P}_{\mathbf{X}}\) & is an orthogonal projection operator into the subspace of \(\mathcal{H}\), spanned by the columns of matrix \(\mathbf{X}\). \\ \(\mathbf{P}_{\mathbf{X}^{\perp}}\) & is an orthogonal projection operator into the subspace of \(\mathcal{H}\) orthogonal to the subspace spanned by the columns of matrix \(\mathbf{X}\). It is the same as \(\mathbf{P}_{\mathcal{L}^{\perp}_{\overline{X}}}\). \\ \(\mathbf{A}^{+}\) & denotes the Moore-Penrose inverse of matrix \(\mathbf{A}\). \\ \([n]\) & is a shorthand notation for the set \(\{1,\dots,n\}\). \\ \(\mathbf{A}\circ\mathbf{B}\) & denotes the pointwise (Hadamard) product of matrices \(\mathbf{A}\) and \(\mathbf{B}\). \\ \(\mathbf{A}^{\circ n}\) & is the pointwise power of matrix \(\mathbf{A}\). \\ \(\mathbf{A}[:,\mathcal{I}]\) & selects the subset of columns of matrix \(\mathbf{A}\) with indices in set \(\mathcal{I}\). \\ \end{tabular} \end{table} Table 1: Some of the frequently used notation in this paper. The paper is organised as follows. In the next section we give an overview of the method, before moving to more rigorous treatment in Section 3. There we give a brief introduction to projection operators and their matrix representation, and discuss the key detail of our approach, expressing the projector onto the intersection. We then move on to describing our large-scale kernelized adaptation of the algorithm in Section 4. We validate our approach experimentally in Section 5 before concluding. ## 2 Method overview Our algorithm is designed to perform variable selection when there are multiple dependent variables of interest. We denote the matrix containing the data from which the variables are selected as \(\mathbf{X}\in\mathbb{R}^{m\times n_{x}}\), and the reference data as \(\mathbf{Y}\in\mathbb{R}^{m\times n_{y}}\) - the sample size is \(m\), and the number of features/variables are \(n_{x}\) and \(n_{y}\) (see other frequently used notation in Table 1). Here \(\mathbf{X}\) and \(\mathbf{Y}\) could also correspond to vector-valued inputs and outputs of some supervised learning task. Our method is based on defining correlation via projection operators: we define the correlation between a variable vector \(\mathbf{x}\in\mathbb{R}^{m}\) (a column vector from \(\mathbf{X}\) containing the values of a single input variable for all data points) and a set of variables in columns of matrix \(\mathbf{Y}\), as \[\text{corr}(\mathbf{x},\mathbf{Y})=\left\|\mathbf{P}_{\mathcal{L}_{Y}}\frac{\mathbf{x}}{||\mathbf{x}||}\right\|=\left\langle\mathbf{P}_{\mathcal{L}_{Y}}\frac{\mathbf{x}}{||\mathbf{x}||},\mathbf{P}_{\mathcal{L}_{Y}}\frac{\mathbf{x}}{||\mathbf{x}||}\right\rangle^{\frac{1}{2}}=\left\langle\frac{\mathbf{x}}{||\mathbf{x}||},\mathbf{P}_{\mathcal{L}_{Y}}\frac{\mathbf{x}}{||\mathbf{x}||}\right\rangle^{\frac{1}{2}} \tag{1}\] where \(\mathbf{P}_{\mathcal{L}_{Y}}\) (or \(\mathbf{P}_{Y}\) in shorthand) is the orthogonal projection operator into a subspace \(\mathcal{L}_{Y}\) spanned by the columns of \(\mathbf{Y}\). This definition is motivated by the concept of _Projection-Valued Measure_ which plays a significant role in quantum mechanics theory (see for example [11]). Our approach selects variables from input data \(\mathbf{X}\) iteratively, such that the correlation between the selected variable and the outputs is high, while the correlation to the previously selected variables is low.
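To make this iteration concrete, the following is a minimal numpy sketch of such a selection loop (an illustration under our own naming; the paper's actual recursive implementation, discussed in Section 3.2, may differ in details). It evaluates the correlation of eq. (1) through an orthonormal basis of \(\mathcal{L}_{Y}\), and after each pick it deflates that basis by the direction of the projected selected variable, so that subsequent scores are measured in the intersection with the orthogonal complement of the already chosen variables.

```python
import numpy as np

def projse_sketch(X, Y, k, tol=1e-10):
    """Greedy selection of k columns of X: high correlation with span(Y),
    low correlation with previously chosen columns (illustrative sketch)."""
    # Orthonormal basis U of the subspace spanned by the columns of Y
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    U = U[:, s > tol]
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # unit-norm columns
    selected = []
    for _ in range(k):
        if U.shape[1] == 0:
            break
        # corr(x, Y) = ||P_Y x|| / ||x||, computed as ||U^T x|| for unit-norm x
        scores = np.linalg.norm(U.T @ Xn, axis=0)
        scores[selected] = -np.inf                       # never re-select
        j = int(np.argmax(scores))
        selected.append(j)
        # Deflate: remove the direction of P_Y x_j from the basis, so the next
        # score corresponds to projecting onto L_Y intersected with the
        # orthogonal complement of the selected variables
        q = U @ (U.T @ Xn[:, j])
        nq = np.linalg.norm(q)
        if nq > tol:
            q /= nq
            U = U - np.outer(q, q @ U)
            U, s, _ = np.linalg.svd(U, full_matrices=False)
            U = U[:, s > tol]
    return selected

# toy usage: Y depends mostly on columns 3 and 7 of X
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
Y = X[:, [3, 7]] @ rng.standard_normal((2, 4)) + 0.01 * rng.standard_normal((200, 4))
print(projse_sketch(X, Y, k=3))
```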
**Remark 1**.: _For sake of simplicity, we assume that for all \(\mathbf{x}\in\mathbb{R}^{m}\), \(\|\mathbf{x}\|=1\)._ Our variable selection algorithm, ProjSe, is illustrated in Figure 1. The first set of variables is chosen simply to maximize the projection onto the subspace spanned by columns of \(\mathbf{Y}\), \(\mathcal{L}_{Y}\). This is illustrated with \(\mathbf{x}_{1}\), which is projected with \(\mathbf{P}_{Y}\) as \(\mathbf{P}_{Y}\mathbf{x}_{1}\). The second set of features chosen, \(\mathbf{x}_{2}\) in the figure, is projected into the intersection of \(\mathcal{L}_{Y}\), and the orthogonal complement of the chosen feature \(\mathbf{x}_{1}\), \(\mathcal{L}_{\mathbf{x}_{1}\perp}\). At this step, the correlation is measured with the projection operator \(\mathbf{P}_{\mathcal{L}_{Y}\cap\mathcal{L}_{\mathbf{x}_{1}\perp}}\). Interestingly, it turns out that this projected feature, \(\mathbf{P}_{\mathcal{L}_{Y}\cap\mathcal{L}_{\mathbf{x}_{1}\perp}}\mathbf{x}_{2}\), lies also in the intersection of \(\mathcal{L}_{Y}\) and \(\mathcal{L}_{(\mathbf{P}_{Y}\mathbf{x}_{1})\perp}\). This observation paves the way for building our efficient, recursive algorithm for the feature selection with projection operators. The pseudo-code of the basic form of our proposed variable selection by projection (ProjSe) algorithm is displayed in Figure 2. The approach is fully deterministic without randomness, and thus practical to apply.

Figure 1: Illustration of the main steps of the algorithm

Similarly to CCA, our variable selection algorithm in a sense joins the variable spaces of the inputs and outputs - both of them are considered in the same space. At the same time, in order for our selection approach to work, \(\mathcal{L}_{\mathbf{X}}\) should not be fully orthogonal to \(\mathcal{L}_{\mathbf{Y}}\). Additionally, due to the properties of the projection operators, our approach promotes invariance: the selected explanatory variables (input features) depend only on the subspace spanned by the response variables (output features), and are independent of any transformation on the response variables that would span the same subspace. These transformations can be singular or even nonlinear, as long as they are automorphisms of the output space. In this basic form the algorithm is scalable to medium-scale data, as it is limited by the memory required to store the projection matrix. In the following sections we present techniques that allow scaling to very large datasets, e.g. \(m>1000000\) and \(m\gg n_{x},n_{y}\). A recursive representation of the projection operators (see Section 3.2), and especially the singular vector based form (eq. (9)), significantly reduces the demand for resources, both for memory and for computation time. ## 3 Projection operators This section first introduces relevant background on projection operators and their algebra. Then, two key points for our algorithm are discussed: the matrix representation of the projectors, and how the projection into the intersection can be expressed. ### Projection operators, projectors We now briefly introduce the mathematical framework describing the projection operators of a Hilbert space. The proofs of the statements mentioned, as well as further details, are presented for example by [11]. Let \(\mathbf{T}\) be a linear operator \(\mathbf{T}:\mathcal{H}\rightarrow\mathcal{H}\).
Its adjoint \(\mathbf{T}^{*}:\mathcal{H}\rightarrow\mathcal{H}\) is defined by \(\left\langle\mathbf{y},\mathbf{T}^{*}\mathbf{x}\right\rangle=\left\langle\mathbf{T}\mathbf{y},\mathbf{x}\right\rangle\) for all \(\mathbf{x},\mathbf{y}\in\mathcal{H}\). A linear operator \(\mathbf{T}\) is self-adjoint, or Hermitian if \(\mathbf{T}=\mathbf{T}^{*}\), unitary if \(\mathbf{T}^{*}=\mathbf{T}^{-1}\) and normal if \(\mathbf{T}\mathbf{T}^{*}=\mathbf{T}^{*}\mathbf{T}\). On the set of self-adjoint operators of \(\mathcal{H}\) one can define a partial order \(\preceq\) by \[\mathbf{T}_{1}\preceq\mathbf{T}_{2}\Leftrightarrow\left\langle\mathbf{T}_{1}\mathbf{x},\mathbf{x}\right\rangle\leq\left\langle\mathbf{T}_{2}\mathbf{x},\mathbf{x}\right\rangle, \tag{2}\] for all \(\mathbf{x}\in\mathcal{H}\). An operator \(\mathbf{T}\) is positive if \[\mathbf{0}\preceq\mathbf{T}\Leftrightarrow 0\leq\left\langle\mathbf{T}\mathbf{x},\mathbf{x}\right\rangle, \tag{3}\] for all \(\mathbf{x}\in\mathcal{H}\). As a consequence we have \[\mathbf{T}_{1}\preceq\mathbf{T}_{2}\Leftrightarrow\mathbf{0}\preceq\mathbf{T}_{2}-\mathbf{T}_{1}. \tag{4}\] Let \(\mathcal{L}\) be a subspace of \(\mathcal{H}\); the orthogonal complement of \(\mathcal{L}\) is given by \(\mathcal{L}^{\perp}=\{\mathbf{x}|\mathbf{x}\perp\mathbf{z},\forall\mathbf{z}\in\mathcal{L},\mathbf{x}\in\mathcal{H}\}\). **Theorem 2**.: _For any subspace \(\mathcal{L}\subseteq\mathcal{H}\), \(\mathcal{H}=\mathcal{L}\oplus\mathcal{L}^{\perp}\)._

Figure 2: The generic algorithm of supervised variable selection by projection

A linear operator \(\mathbf{P}\) is a projection operator if \(\mathbf{P}:\mathcal{H}\rightarrow\mathcal{L}\) for a subspace \(\mathcal{L}\) of \(\mathcal{H}\). To highlight the connection between the subspace and projection, they can also be denoted as \(\mathcal{L}_{P}\) and \(\mathbf{P}_{L}\). An operator \(\mathbf{P}\) is _idempotent_ if \(\mathbf{P}\mathbf{P}\mathbf{x}=\mathbf{P}\mathbf{x},\ \text{or}\ \mathbf{P}\mathbf{P}=\mathbf{P}\) holds for any \(\mathbf{x}\in\mathcal{H}\). The projection operators can be characterized by the following statements. **Theorem 3**.: _A linear operator \(\mathbf{P}:\mathcal{H}\rightarrow\mathcal{H}\) is a projection if and only if it is self-adjoint, \(\mathbf{P}=\mathbf{P}^{*}\), and idempotent \(\mathbf{P}\mathbf{P}=\mathbf{P}\)._ **Proposition 4**.: _The map connecting the set of closed subspaces1 of \(\mathcal{H}\) and the set of the corresponding orthogonal projections is bijective._ Footnote 1: In a finite dimensional Hilbert space all subspaces are closed. As a consequence of the idempotent and self-adjoint properties we have that the range \(\mathcal{R}(\mathbf{P})\) and the null space \(\mathcal{N}(\mathbf{P})\) of \(\mathbf{P}\) are orthogonal, namely for any \(x,y\in\mathcal{H}\) \[\langle\mathbf{P}x,y-\mathbf{P}y\rangle=\langle\mathbf{P}^{2}x,y-\mathbf{P}y\rangle=\langle\mathbf{P}x,(\mathbf{P}-\mathbf{P}^{2})y\rangle=0. \tag{5}\] The following theorems describe some algebraic properties of projection operators we are going to exploit. **Theorem 5**.: _(Product of projections) Let \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) be projections on \(\mathcal{H}\). \(\mathbf{P}=\mathbf{P}_{1}\mathbf{P}_{2}\) is a projection if and only if \(\mathbf{P}_{1}\mathbf{P}_{2}=\mathbf{P}_{2}\mathbf{P}_{1}\). Then \(\mathbf{P}:\mathcal{H}\rightarrow\mathcal{L}_{P_{1}}\cap\mathcal{L}_{P_{2}}\)._ **Theorem 6**.: _(Sum of projections) Let \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) be projections on \(\mathcal{H}\).
\(\mathbf{P}=\mathbf{P}_{1}+\mathbf{P}_{2}\) is a projection if and only if \(\mathcal{L}_{P_{1}}\perp\mathcal{L}_{P_{2}}\). Then \(\mathbf{P}:\mathcal{H}\rightarrow\mathcal{L}_{P_{1}}\oplus\mathcal{L}_{P_{2}}\)._ **Theorem 7**.: _(Partial order) Let \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) be projections on \(\mathcal{H}\), and \(\mathcal{N}(\mathbf{P}_{1})\) and \(\mathcal{N}(\mathbf{P}_{2})\) the corresponding null spaces. Then the following statements are equivalent._ \[\begin{array}{ll}\mathbf{P}_{1}\mathbf{P}_{2}&=\mathbf{P}_{2}\mathbf{P}_{1}=\mathbf{P}_{1},\\ \mathcal{L}_{P_{1}}&\subseteq\mathcal{L}_{P_{2}},\\ \mathcal{N}(\mathbf{P}_{1})&\supseteq\mathcal{N}(\mathbf{P}_{2}),\\ ||\mathbf{P}_{1}\mathbf{x}||&\leq||\mathbf{P}_{2}\mathbf{x}||,\\ \mathbf{P}_{1}&\preceq\mathbf{P}_{2}.\end{array} \tag{6}\] **Theorem 8**.: _(Difference of projections) Let \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) be projections on \(\mathcal{H}\). \(\mathbf{P}=\mathbf{P}_{2}-\mathbf{P}_{1}\) is a projection if and only if \(\mathcal{L}_{P_{1}}\subseteq\mathcal{L}_{P_{2}}\). Then \(\mathbf{P}:\mathcal{H}\rightarrow\mathcal{L}_{P}\), where \(\mathcal{L}_{P_{2}}=\mathcal{L}_{P_{1}}\oplus\mathcal{L}_{P}\), namely \(\mathcal{L}_{P}\) is the orthogonal complement of \(\mathcal{L}_{P_{1}}\) in \(\mathcal{L}_{P_{2}}\)._ From the theorems above we can derive a simple corollary: if \(\mathcal{L}\) is a subspace, then the projection into its complement is equal to \(\mathbf{P}_{\mathcal{L}^{\perp}}=\mathbf{I}-\mathbf{P}_{\mathcal{L}}\). **Theorem 9**.: _(Monotone increasing sequence) Let \((\mathbf{P}_{n})\) be a monotone increasing sequence of projections defined on \(\mathcal{H}\). Then:_ 1. \((\mathbf{P}_{n})\) _is strongly operator convergent, and the limit_ \(\mathbf{P}\)_,_ \(\mathbf{P}_{n}\rightarrow\mathbf{P}\)_, is a projection._ 2. \(\mathbf{P}:\mathcal{H}\rightarrow\cup_{n=1}^{\infty}\mathcal{L}_{\mathbf{P}_{n}}\)_._ 3. \(\mathcal{N}(\mathbf{P})=\cap_{n=1}^{\infty}\mathcal{N}(\mathbf{P}_{n})\)_._ If \(\mathbf{S}\) is a self-adjoint operator and \(\mathbf{P}\) is a projection onto the range of \(\mathbf{S}\), then \(\mathbf{S}\mathbf{P}=\mathbf{P}\mathbf{S}\); see (Conway, 1997) for further details. Let \(\mathbf{I}\) be the projection onto the entire space and \(\mathbf{0}\) its complement. If \(\mathbf{0}\preceq\mathbf{S}\preceq\mathbf{I}\) and \(\mathbf{T}\succeq\mathbf{0}\) are operators, and \(\mathbf{P}\) is a projection onto the range of \(\mathbf{S}+\mathbf{T}\), then \(\mathbf{P}\) commutes with both \(\mathbf{S}\) and \(\mathbf{T}\); see (Hayashi and Nagaoka, 2002). ### Matrix representation of projectors If a basis of the Hilbert space \(\mathcal{H}\) is fixed, then every linear operator acting on \(\mathcal{H}\) can be represented by a matrix. Let the subspace \(\mathcal{L}\) of \(\mathcal{H}\) be spanned by the vectors \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\) of \(\mathcal{H}\). Let us construct a matrix \(\mathbf{A}\) whose columns are equal to the vectors \(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\). Here the linear independence of those vectors is not assumed. The corresponding subspace is denoted by \(\mathcal{L}_{A}\). The matrix representing the orthogonal projection operator onto the subspace \(\mathcal{L}_{A}\) can be expressed by a well-known minimization problem (Golub and Loan, 2013), \[\arg\min_{\mathbf{w}}\|\mathbf{x}-\mathbf{A}\mathbf{w}\|^{2}=\mathbf{w}^{*}=(\mathbf{A}^{T}\mathbf{A})^{+}\mathbf{A}^{T}\mathbf{x}, \tag{7}\] where \({}^{+}\) denotes the Moore-Penrose pseudo-inverse. Based on eq.
(5) the vector \(\mathbf{A}^{T}\mathbf{w}^{*}\) is the orthogonal projection of \(\mathbf{x}\) into \(\mathcal{L}\). The orthogonal projection of \(\mathbf{x}\) is equal to \(\mathbf{P}_{A}\mathbf{x}=\mathbf{A}(\mathbf{A}^{T}\mathbf{A})^{+}\mathbf{A}^{T }\mathbf{x}\). Since this is true for any \(\mathbf{x}\in\mathcal{H}\), the matrix representation of the orthogonal projection operator \(\mathbf{P}_{A}\) is given by \[\mathbf{P}_{A}=\mathbf{A}(\mathbf{A}^{T}\mathbf{A})^{+}\mathbf{A}^{T}. \tag{8}\] This formula can be simplified by exploiting the properties of the Moore-Penrose pseudo-inverse, see for example [Ben-Israel and Greville, 2003], via the singular value decomposition \(\mathbf{U}_{A}\mathbf{S}_{A}\mathbf{V}_{A}^{T}\) of the matrix \(\mathbf{A}\). Here we assume that the matrix \(\mathbf{A}\in\mathbb{R}^{m\times n_{A}}\), \(m>n_{A}\), and \(\mathbf{V}_{A}\) is a square matrix, but \(\mathbf{U}_{A}\) contains only those left singular vectors where the corresponding singular values are not equal to zero. We have \[\boxed{\mathbf{P}_{A}}=\mathbf{A}(\mathbf{A}^{T}\mathbf{A})^{+}\mathbf{A}^{T }=\mathbf{A}\mathbf{A}^{+}=\mathbf{U}_{A}\mathbf{S}_{A}\mathbf{V}_{A}^{T} \mathbf{V}_{A}\mathbf{S}_{A}^{+}\mathbf{U}_{A}^{T}=\boxed{\mathbf{U}_{A} \mathbf{U}_{A}^{T}}. \tag{9}\] This representation of the projection operator plays a central role in our variable selection algorithm. The following proposition ensures that the projection operator does not depend on its representation. **Proposition 10**.: _Assume that two different matrices \(\mathbf{A}\) and \(\mathbf{B}\) span the same subspace \(\mathcal{L}\) of dimension \(k\). Then the two representations \(\mathbf{P}_{A}=\mathbf{U}_{A}\mathbf{U}_{A}^{T}\) and \(\mathbf{P}_{B}=\mathbf{U}_{B}\mathbf{U}_{B}^{T}\) yield the same projection operator._ Proof.: Since the columns of \(\mathbf{U}_{B}\) as linear combinations of \(\mathbf{B}\) are in the \(\mathcal{L}\), thus \(\mathbf{P}_{A}\mathbf{U}_{B}=\mathbf{U}_{B}\). Multiplying both sides with \(\mathbf{U}_{B}^{T}\) we obtain that \(\mathbf{P}_{A}\mathbf{U}_{B}\mathbf{U}_{B}^{T}=\mathbf{U}_{B}\mathbf{U}_{B}^{T}\) which is \(\mathbf{P}_{A}\mathbf{P}_{B}=\mathbf{P}_{B}\). Because the right hand side, \(\mathbf{P}_{B}\), is a projection, the left hand side \(\mathbf{P}_{A}\mathbf{P}_{B}\) is also one. Following the same line we have \(\mathbf{P}_{B}\mathbf{P}_{A}=\mathbf{P}_{A}\) as well. From Theorem 5 we know that if the product of projections is a projection, then the product of projections is commutative, \(\mathbf{P}_{B}\mathbf{P}_{A}=\mathbf{P}_{A}\mathbf{P}_{B}\). Finally we can conclude that \[\mathbf{P}_{A}=\mathbf{P}_{B}\mathbf{P}_{A}=\mathbf{P}_{A}\mathbf{P}_{B}= \mathbf{P}_{B}.\] We also exploited that if \(\mathcal{H}\) is finite dimensional and the corresponding field is \(\mathbb{R}\) then the adjoint of \(\mathbf{P}^{*}\) is represented by the transpose \(\mathbf{P}^{T}\) of the matrix \(\mathbf{P}\). #### 3.2.1 Projection onto the intersection of subspaces - General view Our algorithm hinges on the orthogonal projector of the intersection of a set of subspaces \(\{\mathcal{L}_{1},\mathcal{L}_{2},\ldots,\mathcal{L}_{n_{L}}\}\). To introduce this concept, here we mainly follow the line presented by [Ben-Israel, 2015]. 
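As a quick numerical illustration of eqs. (8)-(9) and Proposition 10 above, the following is a minimal NumPy sketch (our own illustration; the matrix \(\mathbf{A}\) is arbitrary test data, not from the paper):

```python
import numpy as np

# Check that A (A^T A)^+ A^T equals U_A U_A^T (eq. (9)) and is a projector.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 6))  # rank-deficient columns

P_pinv = A @ np.linalg.pinv(A.T @ A) @ A.T                      # eq. (8)
U, s, _ = np.linalg.svd(A, full_matrices=False)
U = U[:, s > 1e-10]              # left singular vectors with nonzero singular values
P_svd = U @ U.T                  # eq. (9)

print(np.allclose(P_pinv, P_svd))          # same operator
print(np.allclose(P_svd @ P_svd, P_svd))   # idempotent
print(np.allclose(P_svd, P_svd.T))         # self-adjoint
```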
We start with some classical results. First we recall [von Neumann, 1950], who derived a solution in the case of two subspaces as a limit: \[\mathbf{P}_{\mathcal{L}_{1}\cap\mathcal{L}_{2}}=\lim_{n\to\infty}(\mathbf{P}_{\mathcal{L}_{1}}\mathbf{P}_{\mathcal{L}_{2}})^{n}. \tag{10}\] That result has been extended to arbitrary finite sets of subspaces by [Halperin, 1962]: \[\mathbf{P}_{\mathcal{L}_{1}\cap\cdots\cap\mathcal{L}_{n_{L}}}=\lim_{n\to\infty}(\mathbf{P}_{\mathcal{L}_{1}}\ldots\mathbf{P}_{\mathcal{L}_{n_{L}}})^{n}. \tag{11}\] [Anderson and Duffin, 1969] gave an explicit formula for the case of two subspaces by \[\mathbf{P}_{\mathcal{L}_{1}\cap\mathcal{L}_{2}}=2\mathbf{P}_{\mathcal{L}_{1}}(\mathbf{P}_{\mathcal{L}_{1}}+\mathbf{P}_{\mathcal{L}_{2}})^{\dagger}\mathbf{P}_{\mathcal{L}_{2}}. \tag{12}\] [Ben-Israel, 2015] provides an alternative way to compute \(\mathbf{P}_{\mathcal{L}_{1}\cap\cdots\cap\mathcal{L}_{n_{L}}}\). Here we rely on Lemma 4.1 and Corollary 4.2 of his work: **Proposition 11**.: _For \(i=1,\ldots,n_{L}\), let \(\mathcal{L}_{i}\) be subspaces of \(\mathcal{H}\), \(\mathbf{P}_{i}\) be the corresponding projectors, \(\mathbf{P}_{i}^{\perp}=\mathbf{I}-\mathbf{P}_{i}\), and \(\lambda_{i}>0\). Define \(\mathbf{Q}:=\sum_{i=1}^{n_{L}}\lambda_{i}\mathbf{P}_{i}^{\perp}\). Then we have \(\mathbf{P}_{\mathcal{L}_{1}\cap\cdots\cap\mathcal{L}_{n_{L}}}=\mathbf{I}-\mathbf{Q}^{\dagger}\mathbf{Q}\). With the particular choice \(\sum_{i=1}^{n_{L}}\lambda_{i}=1\), \(\mathbf{Q}\) might be written as \(\mathbf{Q}:=\mathbf{I}-\sum_{i=1}^{n_{L}}\lambda_{i}\mathbf{P}_{i}\), eliminating all the complements of the projectors._ By exploiting that for any projector \(\mathbf{P}\) we have \(\mathbf{P}^{\perp}=\mathbf{I}-\mathbf{P}\), the \(\mathbf{Q}_{t}\) corresponding to \(\mathbf{P}_{\mathcal{L}_{V}\cap\mathcal{L}_{\bar{\mathcal{X}}_{t}^{\perp}}}\) can be written as \[\mathbf{Q}_{t}=\lambda_{V}(\mathbf{I}-\mathbf{P}_{\mathcal{L}_{V}})+\sum_{\mathbf{x}\in\bar{\mathcal{X}}_{t}}\lambda_{x}\mathbf{P}_{L_{\{\mathbf{x}\}}}. \tag{13}\] The critical point is the computation of the Moore-Penrose inverse of \(\mathbf{Q}\). ### Expressing the projector into intersection To implement the proposed variable selection algorithm (Figure 2), the projection into the intersection of an arbitrary subspace \(\mathcal{L}_{\mathbf{P}}\) and the complement of an arbitrary vector \(\mathbf{x}\), \(\mathbf{P}_{\mathcal{L}\cap\mathbf{x}^{\perp}}\), has to be computed. The projector \(\mathbf{P}_{\mathcal{L}^{\perp}}\) onto the complement of a subspace \(\mathcal{L}\) can be expressed as \(\mathbf{I}-\mathbf{P}_{\mathcal{L}}\), hence the projector \(\mathbf{P}_{\mathbf{x}^{\perp}}\) is given by \(\mathbf{I}-\frac{\mathbf{x}\mathbf{x}^{T}}{||\mathbf{x}||^{2}}\). Since \(\mathcal{L}\) is arbitrary we use \(\mathbf{P}\) instead of \(\mathbf{P}_{\mathcal{L}}\) for the sake of simplicity. While we have these two projectors, their product, according to Theorem 5, is in general not a projection, as the factors do not commute: \[\mathbf{P}\left(\mathbf{I}-\frac{\mathbf{x}\mathbf{x}^{T}}{||\mathbf{x}||^{2}}\right)=\mathbf{P}-\frac{\mathbf{P}\mathbf{x}\mathbf{x}^{T}}{||\mathbf{x}||^{2}}\neq\left(\mathbf{I}-\frac{\mathbf{x}\mathbf{x}^{T}}{||\mathbf{x}||^{2}}\right)\mathbf{P}=\mathbf{P}-\frac{\mathbf{x}\mathbf{x}^{T}\mathbf{P}}{||\mathbf{x}||^{2}}, \tag{14}\] because in the general case \(\mathbf{P}\mathbf{x}\mathbf{x}^{T}\neq\mathbf{x}\mathbf{x}^{T}\mathbf{P}\).
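These statements are easy to check numerically; the following is a small sketch with random test subspaces (our own illustration, not data from the paper):

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column space of A (eq. (9))."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > 1e-10] @ U[:, s > 1e-10].T

rng = np.random.default_rng(1)
C = rng.standard_normal((20, 2))                        # shared 2-dimensional part
P1 = proj(np.hstack([C, rng.standard_normal((20, 3))]))
P2 = proj(np.hstack([C, rng.standard_normal((20, 3))]))

# Proposition 11 with lambda_1 = lambda_2 = 1/2 ...
Q = 0.5 * (np.eye(20) - P1) + 0.5 * (np.eye(20) - P2)
P_cap = np.eye(20) - np.linalg.pinv(Q) @ Q
# ... agrees with the Anderson-Duffin formula (12)
P_ad = 2 * P1 @ np.linalg.pinv(P1 + P2) @ P2
print(np.allclose(P_cap, P_ad), np.linalg.matrix_rank(P_cap))   # True 2

# The naive product of P1 with a rank-one complement (eq. (14)) is in
# general not idempotent, hence not a projector.
x = rng.standard_normal(20)
x /= np.linalg.norm(x)
M = P1 @ (np.eye(20) - np.outer(x, x))
print(np.allclose(M @ M, M))                                     # False
```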
To overcome this problem we can recognize that the intersection \(\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{\mathbf{x}^{\perp}}\) can be expressed after a simple transformation. **Lemma 12**.: _Let \(\mathbf{P}\) be a projector and \(\mathbf{x}\) be any vector, then the intersections \(\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{\mathbf{x}^{\perp}}\) and \(\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{(\mathbf{P}\mathbf{x})^{\perp}}\) are the same subspaces of \(\mathcal{L}_{\mathbf{P}}\)._ Proof.: Any vector \(\mathbf{u}\) is in \(\mathcal{L}_{\mathbf{P}}\) if \(\mathbf{Pu}=\mathbf{u}\), \(\mathbf{u}\) is in \(\mathcal{L}_{\mathbf{x}^{\perp}}\) if \(\langle\mathbf{x},\mathbf{u}\rangle=0\), and \(\mathbf{u}\) is in \(\mathcal{L}_{(\mathbf{P}\mathbf{x})^{\perp}}\) if \(\langle\mathbf{P}\mathbf{x},\mathbf{u}\rangle=0\). Since \(\mathbf{Pu}=\mathbf{u}\), therefore \(\langle\mathbf{x},\mathbf{u}\rangle=\langle\mathbf{P}\mathbf{x},\mathbf{u} \rangle=0\). By projecting \(\mathbf{x}\) into \(\mathcal{L}\) first, and then computing the corresponding intersection, we can compute the projector into \(\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{\mathbf{x}^{\perp}}\) in a simple way. **Proposition 13**.: _Let \(\|\mathbf{x}\|=1\) and \(\alpha=\|\mathbf{P}\mathbf{x}\|^{2}=\mathbf{x}^{T}\mathbf{P}\mathbf{P}\mathbf{ x}=\mathbf{x}^{T}\mathbf{P}\mathbf{x}\). If \(\alpha>0\) then_ \[\boxed{\mathbf{P}_{\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{\mathbf{x}^{ \perp}}}}=\mathbf{P}_{\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{(\mathbf{P} \mathbf{x})^{\perp}}}=\mathbf{P}\left(\mathbf{I}-\frac{1}{\alpha}\mathbf{P} \mathbf{x}\mathbf{x}^{T}\mathbf{P}\right)=\boxed{\mathbf{P}-\frac{1}{\alpha} \mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}}. \tag{15}\] _When \(\alpha=0\), which means \(\mathbf{x}\) orthogonal to \(\mathcal{L}_{\mathbf{P}}\), then we have \(\mathbf{P}_{\mathcal{L}_{\mathbf{P}}\cap\mathcal{L}_{\mathbf{x}^{\perp}}}= \mathbf{P}\)._ We can check that \(\mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}\) is a real projector. It is idempotent, since \[\left(\mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P} \right)^{2}=\mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T} \mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}+\frac {\alpha}{\alpha^{2}}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}=\mathbf{P}- \frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}.\] This agrees with Theorem 5 which states that the product of projections is a projection, idempotent and adjoint, and it is map into the intersection of the corresponding subspaces. Furthermore the orthogonality of \(\mathbf{x}\) and the projection of any \(\mathbf{u}\in\mathcal{H}\) by \(\mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x}\mathbf{x}^{T}\mathbf{P}\) can also be verified, as \[\left\langle\mathbf{x},\left(\mathbf{P}-\frac{1}{\alpha}\mathbf{P}\mathbf{x} \mathbf{x}^{T}\mathbf{P}\right)\mathbf{u}\right\rangle=\mathbf{x}^{T} \mathbf{Pu}-\frac{1}{\alpha}\mathbf{x}^{T}\mathbf{P}\mathbf{x}\mathbf{x}^{T} \mathbf{Pu}=\mathbf{x}^{T}\mathbf{Pu}-\frac{1}{\alpha}\alpha\mathbf{x}^{T} \mathbf{P}\mathbf{u}=0.\] **Definition 14**.: _([19]) Two subspaces \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are orthogonal if for any vectors, \(\mathbf{x}_{1}\in\mathcal{L}_{1}\) and \(\mathbf{x}_{2}\in\mathcal{L}_{2}\)\(\langle\mathbf{x}_{1},\mathbf{x}_{2}\rangle=0\) holds. 
Two subspaces \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are geometrically orthogonal if \(\mathcal{L}_{1}=(\mathcal{L}_{1}\cap\mathcal{L}_{2})\oplus\mathcal{C}_{1}\) and \(\mathcal{L}_{2}=(\mathcal{L}_{1}\cap\mathcal{L}_{2})\oplus\mathcal{C}_{2}\), where \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are orthogonal._ **Lemma 15**.: _([19]) Two subspaces \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are geometrically orthogonal if and only if \(\mathbf{P}_{\mathcal{L}_{1}}\mathbf{P}_{\mathcal{L}_{2}}=\mathbf{P}_{\mathcal{L}_{2}}\mathbf{P}_{\mathcal{L}_{1}}\) and \(\mathcal{L}_{\mathbf{P}_{\mathcal{L}_{1}}\mathbf{P}_{\mathcal{L}_{2}}}=\mathcal{L}_{1}\cap\mathcal{L}_{2}\)._ **Proposition 16**.: _The subspaces \(\mathcal{L}_{\mathbf{P}}\) and \(\mathcal{L}_{(\mathbf{P}\mathbf{x})^{\perp}}\) are geometrically orthogonal._ Proof.: It is a simple corollary of Proposition 13. ## 4 Selecting variables in RKHS In order to take into account non-linear correlations in the data, we propose a kernelized adaptation of the problem. Kernel methods are a group of varied machine learning models, taking advantage of a symmetric and positive semi-definite kernel function comparing data samples (sets of features) \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\). The usage of a kernel function allows including non-linearity in the models implicitly via a feature map \(\varphi:\mathcal{X}\rightarrow\mathcal{F}_{k}\): a kernel evaluated with two samples corresponds to an inner product in this so-called feature space (more specifically, a reproducing kernel Hilbert space, RKHS): \(k(x,z)=\langle\varphi(x),\varphi(z)\rangle_{\mathcal{F}_{k}}\). For a more thorough introduction to traditional kernel methods, we refer the reader e.g. to [11]. We here propose to kernelize the variable representation. We consider \(\phi:\mathbb{R}^{m}\rightarrow\mathcal{H}\), where \(\mathbb{R}^{m}\) is the vector space containing all columns of \(\mathbf{Y}\in\mathbb{R}^{m\times n_{y}}\) and \(\mathbf{X}\in\mathbb{R}^{m\times n_{x}}\), and \(\mathcal{H}\) is an RKHS. In essence, this corresponds to defining a kernel on the variable vectors, \(\kappa:\mathbb{R}^{m}\times\mathbb{R}^{m}\to\mathbb{R}\) - in fact, we assume that \(\phi\) is only given implicitly via \(\kappa\). In a mathematical sense, the matrix built from inner products between the variables can equally well be considered a kernel matrix, since the distinction between rows and columns is by convention only. Usually, however, this matrix is referred to as a covariance operator. Covariance operators have also been extended to RKHSs, with various applications in machine learning tasks (Muandet et al., 2017; Minh et al., 2016). Contrary to our approach, there the feature map and kernel are defined on the data space \(\mathcal{X}\) instead of the variable space \(\mathbb{R}^{m}\). We also mention Gaussian process regression (Rasmussen and Williams, 2005), where kernels are likewise used to model the covariance matrix, thus connecting the variables via inner products. We highlight that as the kernel is defined on variables, we can easily evaluate \(\kappa(\mathbf{x}_{i},\mathbf{y}_{j})\).
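For instance, with the linear kernel \(\kappa(\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{v}\) this amounts to comparing columns rather than rows of the data matrices; a small sketch with placeholder data (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_x, n_y = 1000, 8, 3
X = rng.standard_normal((m, n_x))
Y = rng.standard_normal((m, n_y))
X /= np.linalg.norm(X, axis=0)      # unit-norm variables (Remark 1)
Y /= np.linalg.norm(Y, axis=0)

# kappa evaluated between input and output *variables* (columns), not samples
K_yx = Y.T @ X                      # kappa(y_j, x_i) for all pairs, shape (n_y, n_x)
K_y = Y.T @ Y                       # kappa(Y, Y), the n_y x n_y variable kernel
print(K_yx.shape, K_y.shape)
```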
We use the following shorthands for feature and kernel evaluations on the available training data: \(\phi(\mathbf{Y})=[\phi(\mathbf{y}_{1}),\dots,\phi(\mathbf{y}_{n_{y}})]\) with \(\phi(\mathbf{y}_{i}),i\in[n_{y}]\) a column vector, and \(\kappa(\mathbf{Y},\mathbf{x})=[\kappa(\mathbf{y}_{1},\mathbf{x}),\dots, \kappa(\mathbf{y}_{n_{y}},\mathbf{x})]^{\top}\) a column vector of kernel evaluations (similarly for \(\phi(\mathbf{X})\)). Note that \(\kappa(\mathbf{Y},\mathbf{Y})=\phi(\mathbf{Y})^{\top}\phi(\mathbf{Y})\) with this notation. We further denote \(\mathbf{K}_{\mathbf{y}}=\kappa(\mathbf{Y},\mathbf{Y})\). We assume that \(\|\phi(\mathbf{x})\|=1\). ### Expressing the Projection operator in RKHS Based on Section 3.2, Equation (9), the projection \(\mathbf{P}_{Y}\) is represented with the left singular vectors of \(\mathbf{P}_{Y}\), \(\mathbf{U}_{Y}\). This representation is also needed for the kernelized algorithm. However calculating directly the singular value decomposition on \(\phi(\mathbf{Y}),\phi(\mathbf{Y})=\mathbf{U}_{Y}\mathbf{S}_{\mathbf{Y}}\mathbf{ V}_{Y}^{T}\), might not be feasible if the dimensionality of the feature space is large. Assuming that \(\mathcal{H}\) is finite dimensional2 with dimension \(d\), we have \(\phi(\mathbf{Y}),\mathbf{U}_{Y}\in\mathbb{R}^{d\times n_{y}}\), and \(\mathbf{S}_{Y},\mathbf{V}_{Y}\in\mathbb{R}^{n_{y}\times n_{y}}\). Therefore we can write Footnote 2: For clarity, we restrict the discussion to finite dimensions and \(\mathcal{H}=\mathbb{R}^{d}\) with \(d<\infty\). We note that the approach is equally valid also with infinite dimensions. \[\mathbf{S}_{Y}=\left[\begin{array}{c}\mathbf{D}_{Y}\\ \varnothing\end{array}\right],\mathbf{D}_{Y}\in\mathbb{R}^{n_{y}\times n_{y}},\varnothing\in[0]^{m-n_{y},n_{y}}, \tag{16}\] and \(\mathbf{D}_{Y}\) diagonal with nonnegative elements of singular values, thus \(\phi(\mathbf{Y})=\mathbf{U}_{Y}\mathbf{D}_{Y}\mathbf{V}_{Y}^{T}\). Again, this decomposition can not be computed directly, however we can go on the following line of computation. To express the \(\mathbf{U}_{Y}\) we can apply a similar approach to what is exploited in the computation of the kernel principal component analysis (Scholkopf et al., 1998). Recall that the kernel matrix on columns of \(\phi(\mathbf{Y})\) is \(\mathbf{K}_{Y}=\phi(\mathbf{Y})^{T}\phi(\mathbf{Y})\). From the singular value decomposition we can derive that \(\mathbf{K}_{Y}=\mathbf{V}_{Y}\mathbf{S}_{Y}^{2}\mathbf{V}^{T}\). This kernel has a reasonably small size, \(n_{y}\times n_{y}\), thus its eigenvalue decomposition can be computed, which yields \(\mathbf{V}_{Y}\) and the squares of the singular values of the diagonal elements of \(\mathbf{S}_{Y}\). By combining these expressions we have \[\phi(\mathbf{Y})\mathbf{V}_{Y}\mathbf{S}_{Y}=\mathbf{U}_{Y}\mathbf{S}_{Y}^{2} \ \Rightarrow\boxed{\mathbf{U}_{Y}=\phi(\mathbf{Y})\mathbf{V}_{Y}\mathbf{S}_{Y}^{ +}} \tag{17}\] with the help of the Moore-Penrose generalized inverse. Our algorithm hinges on evaluating products between projectors and the variable vectors. We can now write the products of the \(\mathbf{U}_{Y}^{T}\) with an arbitrary vector represented in \(\mathcal{H}\) as \[\boxed{\mathbf{U}_{Y}^{T}\phi(\mathbf{x})}=\mathbf{S}_{Y}^{-1}\mathbf{V}_{Y}^ {T}\phi(\mathbf{Y})^{T}\phi(\mathbf{x})=\boxed{\mathbf{S}_{Y}^{-1}\mathbf{V}_{ Y}^{T}\kappa(\mathbf{Y},\mathbf{x})}. 
\tag{18}\] Thus the product can be expressed with the help of the kernel on the variables with complexity \(O(n_{y}^{2})\) if the \(\mathbf{K}_{Y}\), \(\mathbf{V}_{Y}\) and \(\mathbf{S}_{Y}^{-1}\) are precomputed. ### The recursive selection procedure To calculate the projection operator efficiently in each iteration we can exploit the structure of \(\mathbf{P}_{\mathcal{L}_{Y}\cap\mathcal{L}_{\tilde{X}_{i}^{\pm}}}\phi(\mathbf{ x})\) introduced in Proposition 13. To this end, we define an intermediate operator, projection into the complement subspace of vector \(\mathbf{q}\in\mathbb{R}^{n}\) as: \[\mathbf{Q}(\mathbf{q})=\mathbf{I}-\frac{\mathbf{q}\mathbf{q}^{T}}{||\mathbf{q }||^{2}}. \tag{19}\] Since \(\mathbf{Q}(\mathbf{q})\) is a projection, we have \(\mathbf{Q}(\mathbf{q})=\mathbf{Q}(\mathbf{q})\mathbf{Q}(\mathbf{q})\) and \(\mathbf{Q}(\mathbf{q})=\mathbf{Q}(\mathbf{q})^{T}\). It can also be seen that multiplying \(\mathbf{Q}\) with a matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), \[\mathbf{Q}(\mathbf{q})\mathbf{A}=\left(\mathbf{I}-\frac{\mathbf{q}\mathbf{q}^ {T}}{||\mathbf{q}||^{2}}\right)\mathbf{A}=\mathbf{A}-\frac{\mathbf{q}(\mathbf{q }^{T}\mathbf{A})}{||\mathbf{q}||^{2}}, \tag{20}\] has the complexity of only \(O(n^{2})\) since only matrix-vector and outer product are needed. We are also going to use the following recurrent matrix products for a fixed \(t\) \[\tilde{\mathbf{U}}_{t}=\mathbf{U}_{Y}\prod_{s=1}^{t-1}\mathbf{Q}(\mathbf{q}_{s}) =\tilde{\mathbf{U}}_{t-1}\mathbf{Q}(\mathbf{q}_{t-1}). \tag{21}\] Now we can write up the sequence of projections corresponding to the Algorithm (2): \begin{tabular}{|l l|} \hline Let & \(\mathbf{U}_{0}=\mathbf{U}_{Y}\), \(\mathcal{I}_{0}=\emptyset\), \\ \hline \(\overline{\mathbf{P}_{0}}\) & \(=\mathbf{P}_{\phi(Y)}=\left[\overline{\mathbf{U}_{0}\mathbf{U}_{0}^{T}}\right]\) \\ \hline \(\mathbf{q}_{1}\) & \(=\mathbf{U}_{0}^{T}\phi(\mathbf{x}_{k_{1-}})\), \(k_{1*}=\arg\max_{k\in[n_{s}]\backslash\mathcal{I}_{0}}||\mathbf{P}_{0}\phi( \mathbf{x}_{k})||^{2}\), \(\mathcal{I}_{1}=\mathcal{I}_{0}\cup k_{1^{*}}\), \\ \hline \(\overline{\mathbf{P}_{1}}\) & \(=\mathbf{U}_{0}\mathbf{U}_{0}^{T}-\frac{\mathbf{U}_{0}\mathbf{q}_{1}\mathbf{ q}_{1}^{T}\mathbf{U}_{0}^{T}}{||\mathbf{q}_{1}||^{2}}=\mathbf{U}_{0}\mathbf{Q}( \mathbf{q}_{1})\mathbf{U}_{0}^{T}=\mathbf{U}_{0}\mathbf{Q}(\mathbf{q}_{1}) \mathbf{U}_{0}^{T}=\overline{\mathbf{U}_{1}\mathbf{U}_{1}^{T}}\) \\ \(\mathbf{q}_{2}\) & \(=\mathbf{U}_{1}^{T}\phi(\mathbf{x}_{k_{2*}})\), \(k_{2*}=\arg\max_{k\in[n_{s}]\backslash\mathcal{I}_{1}}||\mathbf{P}_{1}\phi( \mathbf{x}_{k})||^{2}\), \(\mathcal{I}_{2}=\mathcal{I}_{1}\cup k_{2^{*}}\), \\ \(\vdots\) & \\ \hline \(\overline{\mathbf{P}_{t}}\) & \(=\mathbf{U}_{t-1}\mathbf{U}_{t-1}^{T}-\frac{\mathbf{U}_{t-1}\mathbf{q}_{t} \mathbf{q}_{t}^{T}\mathbf{U}_{t-1}^{T}}{||\mathbf{q}_{t}||^{2}}=\mathbf{U}_{t -1}\mathbf{Q}(\mathbf{q}_{t})\mathbf{U}_{t-1}^{T}\) \\ \(\mathbf{q}_{t+1}\) & \(=\mathbf{U}_{t}\mathbf{Q}(\mathbf{q}_{t})\mathbf{Q}(\mathbf{q}_{t})\mathbf{U} _{t-1}^{T}=\overline{\mathbf{U}_{t}\mathbf{U}_{t}^{T}}\) \\ \(\mathbf{q}_{t+1}\) & \(=\mathbf{U}_{t}^{T}\phi(\mathbf{x}_{k_{(t+1)*}})\), \(k_{(t+1)*}=\arg\max_{k\in[n_{s}]\backslash\mathcal{I}_{t}}||\mathbf{P}_{t} \phi(\mathbf{x}_{k})||^{2}\), \(\mathcal{I}_{t+1}=\mathcal{I}_{t}\cup k_{(t+1)^{*}}\), \\ \(\vdots\) & \\ \hline \end{tabular} **Proposition 17**.: _The sequence of projections above correctly computes the projection operators of Algorithm in Figure 2._ Proof.: We apply induction on \(t\) to prove the statement. 
In case of \(t=1\) we have by Proposition 13, that \[\begin{split}\mathbf{P}_{1}&=\mathbf{P}_{0}-\frac{ \mathbf{P}_{0}\phi(\mathbf{x}_{k_{1*}})\phi(\mathbf{x}_{k_{1*}})^{T}\mathbf{P} _{0}}{||\mathbf{P}_{0}\phi(\mathbf{x}_{k_{1*}})||}=\mathbf{U}_{0}\mathbf{U}_{0} ^{T}-\frac{\mathbf{U}_{0}\mathbf{U}_{0}^{T}\phi(\mathbf{x}_{k_{1*}})\phi( \mathbf{x}_{k_{1*}})^{T}\mathbf{U}_{0}\mathbf{U}_{0}^{T}}{||\mathbf{U}_{0} \mathbf{U}_{0}^{T}\phi(\mathbf{x}_{k_{1*}})||^{2}}\\ &=\mathbf{U}_{0}\left(\mathbf{I}-\frac{\mathbf{q}_{1}\mathbf{q}_{ 1}^{T}}{||\mathbf{q}_{1}||^{2}}\right)\mathbf{U}_{0}^{T}=\mathbf{U}_{0} \mathbf{Q}(\mathbf{q}_{1})\mathbf{U}_{0}^{T}=\mathbf{U}_{0}\mathbf{Q}(\mathbf{q }_{1})\mathbf{Q}(\mathbf{q}_{1})\mathbf{U}_{0}^{T}=\overline{\mathbf{U}_{1} \mathbf{U}_{1}^{T}}\) \\ \hline \end{split} \tag{22}\] In transforming \(||\mathbf{U}_{0}\mathbf{U}_{0}^{T}\phi(\mathbf{x})_{t_{1*}}||^{2}\) into \(||\mathbf{q}_{1}||^{2}\) we exploited that \(\mathbf{U}_{0}\mathbf{U}_{0}^{T}\) is a projection, hence it is idempotent. Let \(t>1\) be arbitrary. Suppose that \[\mathbf{P}_{t}=\mathbf{U}_{t-1}\mathbf{U}_{t-1}^{T}-\frac{\mathbf{U}_{t-1} \mathbf{q}_{t}\mathbf{q}_{t}^{T}\mathbf{U}_{t-1}^{T}}{||\mathbf{q}_{t}||^{2}}= \overline{\mathbf{U}_{t}\mathbf{U}_{t}^{T}} \tag{23}\] holds true. Now, computing the projector \(t+1\) we obtain \[\begin{split}\mathbf{P}_{t+1}&=\mathbf{P}_{t}-\frac{ \mathbf{P}_{t}\phi(\mathbf{x}_{k_{(t+1)*}})\phi(\mathbf{x}_{k_{(t+1)*}})^{T} \mathbf{P}_{t}}{||\mathbf{P}_{t}\phi(\mathbf{x}_{k_{(t+1)*}})||^{2}}\\ &=\mathbf{U}_{t}\mathbf{U}_{t}^{T}-\frac{\mathbf{U}_{t}\mathbf{ U}_{t}^{T}\phi(\mathbf{x}_{k_{(t+1)*}})\phi(\mathbf{x}_{k_{(t+1)*}})^{T} \mathbf{U}_{t}\mathbf{U}_{t}^{T}}{||\mathbf{U}_{t}\mathbf{U}_{t}^{T}\phi( \mathbf{x}_{k_{(t+1)*}})||^{2}}=\mathbf{U}_{t}\left(\mathbf{I}-\frac{\mathbf{q }_{t+1}\mathbf{q}_{t+1}}{||\mathbf{q}_{t+1}||^{2}}\right)\mathbf{U}_{t}^{T}\\ &=\mathbf{U}_{t}\mathbf{Q}(\mathbf{q}_{t+1})\mathbf{U}_{t}^{T}= \mathbf{U}_{t}\mathbf{Q}(\mathbf{q}_{t+1})\mathbf{Q}(\mathbf{q}_{t+1})\mathbf{U}_ {t}^{T}=\overline{\mathbf{U}_{t+1}\mathbf{U}_{t+1}^{T}}\end{split}\] In the norm we again applied that \(\mathbf{U}_{t}\mathbf{U}_{t}^{T}\) is idempotent. We can express the main computation step, Step 2.b in Algorithm 2, by exploiting the kernelized recursive iteration. From the sequential procedure we can see that a key step of the computation is the calculation of the vectors \(\mathbf{q}_{i}\) via Equation (18), \(\mathbf{U}_{Y}^{T}\phi(\mathbf{x})=\mathbf{S}_{Y}^{-1}\mathbf{V}_{Y}^{T}\kappa( \mathbf{Y},\mathbf{x})\) for an arbitrary \(\phi(\mathbf{x})\in\mathcal{H}\). In iteration \(t\), we have \[\mathbf{q}_{t+1}=\mathbf{U}_{t}^{T}\phi(\mathbf{x})=\left(\mathbf{U}_{Y}\prod_{s= 1}^{t-1}\mathbf{Q}(\mathbf{q}_{s})\right)^{T}\phi(\mathbf{x})=\mathbf{Q}( \mathbf{q}_{t})\cdot\cdot\cdot\cdot\cdot\mathbf{Q}(\mathbf{q}_{1})\underbrace{ \mathbf{U}_{Y}^{T}\phi(\mathbf{x})}_{\mathbf{S}_{Y}^{-1}\mathbf{V}_{Y}^{T}\kappa( \mathbf{Y},\mathbf{x})}. \tag{24}\] Taking advantage of the recursive definition of \(\mathbf{U}_{t}^{T}\phi(\mathbf{x})\) we also have that \[\mathbf{U}_{t+1}^{T}\phi(\mathbf{x}) =\mathbf{Q}(\mathbf{q}_{t+1})\mathbf{U}_{t}^{T}\phi(\mathbf{x})= \left(\mathbf{I}-\frac{\mathbf{q}_{t+1}\mathbf{q}_{t+1}^{T}}{||\mathbf{q}_{t+1 }||^{2}}\right)\mathbf{U}_{t}^{T}\phi(\mathbf{x}), \tag{25}\] where \(\mathbf{q}_{t+1}=\mathbf{U}_{t}^{T}\phi(\mathbf{x}_{k_{(t+1)}})\), thus all terms relate to those computed in the previous iteration. 
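For concreteness, the whole recursion can be sketched in a few lines of NumPy (a simplified illustration of our own, not the reference implementation from the ProjSe repository; the helper names and the toy data are ours):

```python
import numpy as np

def projse_select(X, Y, n_select, kappa=lambda A, B: A.T @ B):
    """Greedy selection of columns of X against the span of the columns of Y,
    following eqs. (18) and (24)-(25); kappa is a kernel on variables
    (columns), here the linear kernel by default."""
    X = X / np.linalg.norm(X, axis=0)          # unit-norm variables (Remark 1)
    Y = Y / np.linalg.norm(Y, axis=0)
    K_Y, K_YX = kappa(Y, Y), kappa(Y, X)
    evals, V = np.linalg.eigh(K_Y)             # K_Y = V S^2 V^T
    keep = evals > 1e-10
    # columns of R are U_Y^T phi(x_k) = S^+ V^T kappa(Y, x_k), eq. (18)
    R = (V[:, keep].T @ K_YX) / np.sqrt(evals[keep])[:, None]

    selected = []
    for _ in range(n_select):
        scores = np.sum(R ** 2, axis=0)        # ||P_t phi(x_k)||^2
        scores[selected] = -np.inf             # never reselect a variable
        k = int(np.argmax(scores))
        selected.append(k)
        q = R[:, k].copy()
        R -= np.outer(q, q @ R) / (q @ q)      # deflation with Q(q), eq. (25)
    return selected

# toy usage: the outputs depend only on the first five input variables
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 30))
Y = X[:, :5] @ rng.standard_normal((5, 5))
print(projse_select(X, Y, 5))   # typically a permutation of [0, 1, 2, 3, 4]
```

The deflation line is exactly the rank-one update \(\mathbf{Q}(\mathbf{q}_{t+1})\) applied to all columns at once, so each iteration costs only a matrix-vector and an outer product, as noted above.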
The computation of the norm \(||\mathbf{q}_{t+1}||^{2}\) can also exploit the recursive nature of the algorithm. Finally, all the feature representations \(\phi(\mathbf{x})\) and \(\phi(\mathbf{Y})\) are implicit, and are only expressed via kernel evaluations since they only appear in inner products. Based on these statements and Proposition 17 we can present a concrete implementation of our algorithm in Figure 3. In the first step the kernels are computed, where \(\mathbf{K}_{YX}\) requires \(O(mn_{y}n_{x})\), and \(\mathbf{K}_{Y}\)\(O(mn_{y}^{2})\) operations in case of for example linear and Gaussian kernels. For the eigenvalue decomposition of \(\mathbf{K}_{Y}\) we need \(O(n_{y}^{3})\) operations, where \(D\leq\min(n_{y},n_{x})\). In the algorithm, the critical step is Step 4.a. Its complexity in step \(t\) is \(O(n_{y}(n_{x}-t))\), thus, in general for selecting \(D\) variables we need \(O(n_{y}n_{x}D)\) operations. Assuming that \(m\gg n_{y},n_{x}\), the dominating part is the computation of the kernels, thus the entire complexity is equal to \(O(mn_{y}\max(n_{x},n_{y}))\). ## 5 Experiments In this section we experimentally validate our approach.34 We first show demonstrations on our algorithm's scalability on synthetic data, before moving on to experimenting with real data and analysing the stability of the feature selection. Footnote 3: The code for the algorithm is available [https://github.com/aalto-ics-kepaco/ProjSe](https://github.com/aalto-ics-kepaco/ProjSe). Footnote 4: The experiments are run on a machine with this parameters: 12th Gen Intel Core\({}^{TM}\) i5-12600K * 10. ### Scalability demonstration with synthetic data This test is implemented via a scheme presented by (27) in Figure 4 and by (26). The components of the input matrix, \(\mathbf{X}\) and the components of a transformation matrix \(\mathbf{W}\) are independently sampled from normal distribution. Then output matrix is constructed, and finally random noise is added to the output. \[\begin{array}{cc}\text{Input}&\text{Linear transformation}&\text{Noise}\\ \hline&\mathbf{W}\sim[\mathcal{N}(0,\sigma)]^{n_{x}\times n_{y}}&\mathbf{E} \sim[\mathcal{N}(0,\sigma)]^{m\times n_{y}}\\ &\hskip-10.0pt\Downarrow\\ \mathbf{X}\sim[\mathcal{N}(0,\sigma)]^{m\times n_{x}}\Longrightarrow&\mathbf{ Y}=\mathbf{X}\mathbf{W}&\Longrightarrow&\tilde{\mathbf{Y}}=\mathbf{Y}+\mathbf{E}. \end{array} \tag{26}\] Figure 3: Efficient implementation of the kernelized realization of supervised variable selection by projection algorithm, ProjSe. Note the notation, e.g. \(\mathbf{R}^{(t)}=\mathbf{U}_{t}\phi(X)\). We apply ProjSe to this data with various sample sizes. Figure 4 presents the dependence of the selection time on the sample size, where the maximum sample size is \(10\) million and the number of variables is \(10\) - the variable selection is performed in less than four seconds. ### Feature selection from biological data In this set of experiments, we compare our approach to (Brouard et al., 2022) - a kernel-based feature selection method, where kernels are considered traditionally on data samples instead of features. We experiment with the two gene expression datasets, "Carcinoma", "Glioma", considered there for unsupervised feature selection. While this kind of setting with one-view fat data is not the one our method was developed for, as the scalability we propose is first and foremost for large sample sizes, these experiments still serve for illustrative comparison of feature selection performance. 
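A minimal sketch of this generation scheme in NumPy (the sizes and the noise level \(\sigma\) are illustrative choices, not the exact values used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_x, n_y, sigma = 100_000, 10, 5, 1.0
X = sigma * rng.standard_normal((m, n_x))      # input
W = sigma * rng.standard_normal((n_x, n_y))    # linear transformation
E = sigma * rng.standard_normal((m, n_y))      # noise
Y_tilde = X @ W + E                            # noisy output, eq. (26)
```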
As the data is only available in one view in unsupervised setting, we apply our algorithm by using the data in both views: as the reference/label view and as the view the feature selection is performed on. Intuitively, this would filter out the noise and redundant features in the view. In our method we consider linear kernel, \(k(\mathbf{x},\mathbf{z})=\mathbf{x}^{T}\mathbf{z}\), polynomial \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{_Carcinoma (m=174, n\({}_{x}\)=9182, C=11)_} & \multicolumn{3}{c}{_Glioma (m=50, n\({}_{x}\)=4434, C=4)_} \\ & NMI(10) & NMI(300) & t (s) & NMI(10) & NMI(300) & t (s) \\ \hline lapl & 0.36 (0.02) & 0.64 (0.04) & **0.25** (0.04) & **0.50** (0.03) & 0.47 (0.06) & **0.02** (0.00) \\ NDFS & 0.22 (0.28) & 0.78 (0.03) & 6,162 (305) & 0.20 (0.04) & 0.36 (0.07) & 368 (21) \\ UKFS & **0.57** (0.03) & 0.75 (0.05) & 326 (52) & 0.26 (0.05) & 0.42 (0.05) & 23.74 (4.03) \\ \hline ProjSe lin & 0.31 (0.01) & 0.58 (0.03) & 1,146\({}^{1}\) & 0.37 (0.01) & **0.52** (0.03) & 210\({}^{1}\) \\ ProjSe poly & 0.34 (0.01) & 0.79 (0.03) & 1,239\({}^{1}\) & 0.13 (0.02) & 0.34 (0.07) & 299\({}^{1}\) \\ ProjSe RBF & 0.33 (0.01) & **0.82** (0.03) & 1,263\({}^{1}\) & 0.38 (0.05) & 0.23 (0.06) & 284\({}^{1}\) \\ \hline \hline \end{tabular} \end{table} Table 2: ProjSe clustering results (NMI and time) with selected features (10 or 300) compared to results reported in (Brouard et al., 2022). Running time of ProjSe for choosing 300 features; running time and variation of k-means is negligible. Figure 4: The dependence of the variable selection time on the sample size is shown in seconds on the left, and the random data generation scheme applied is on right Figure 5: Clustering results (NMI) on Carcinoma (right) and Glioma (left) datasets with ProjSe as functions of number of variables chosen, averaged over 20 runs of k-means. “Full” refers to results when full set of features is used. kernel of degree \(3\), \(k(\mathbf{x},\mathbf{z})=(\mathbf{x}^{T}\mathbf{z})^{3}\), and RBF kernel, \(k(\mathbf{x},\mathbf{z})=\exp(\|\mathbf{x}-\mathbf{z}\|^{2}/(2\sigma^{2}))\) with the kernel parameter \(\sigma\) set as mean of pairwise distances. We assess the performance of the feature selection by measuring the normalised mutual information (NMI) of k-means clustering results. Here the clusterer has been given the amount of classes in the data as the number of clusters. The results are displayed in Table 2, with comparison to selected methods from (Brouard et al., 2022): UKFS proposed there, as well as a scoring-based method "lapl" (He et al., 2005) that performed well on Glioma dataset, and NDFS (Li et al., 2012), a clustering-based approach that performed well with Carcinoma dataset. Our method is very competitive with these, sometimes achieving better performance. As in our method the kernel is calculated on features, the running time is slower than for UKFS where the kernel on samples is considered. However notably we are still competitive when compared to the NDFS. Additionally, Figure 5 displays more detailed clustering results with respect to the number of variables chosen by ProjSe. These results also highlight the differences that can be obtained by applying different kernels on the features: with Carcinoma dataset the non-linear kernels, RBF and polynomial kernel of degree 3, are clearly superior, while with Glioma linearity works the best. 
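For reference, the three variable-level kernels used here can be written as below (a small sketch with placeholder data; the RBF kernel is understood with a negative exponent, and the bandwidth \(\sigma\) is set to the mean of the pairwise distances between the variable vectors, as described above):

```python
import numpy as np

def linear_kernel(u, v):
    return u @ v

def poly3_kernel(u, v):
    return (u @ v) ** 3

def rbf_kernel(u, v, sigma):
    # note the negative sign in the exponent
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * sigma ** 2))

# placeholder feature matrix: 12 variables observed on 200 samples
rng = np.random.default_rng(5)
F = rng.standard_normal((200, 12))
F /= np.linalg.norm(F, axis=0)                     # unit-norm columns
pairwise = [np.linalg.norm(F[:, i] - F[:, j])
            for i in range(F.shape[1]) for j in range(i + 1, F.shape[1])]
sigma = float(np.mean(pairwise))                   # bandwidth heuristic from the text
print(rbf_kernel(F[:, 0], F[:, 1], sigma))
```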
### Feature selection from vector-valued output setting We next consider a setting more aligned with our method: supervised feature selection with vector-valued output as the reference view. Here we consider three datasets from the UEA & UCR Time Series Classification Repository5, Crop, Figure 6: Results with time series datasets. The top row reports the kernel alignment of the input kernel with chosen variables (either RBF or linear) to the ideal output kernel. The bottom row reports accuracy on test set, again with both linear and RBF kernels for SVM; comparison is shown to randomly selected features and to full feature set. The colors differentiate which kernel is used on features in ProjSe, while the line style indicates if traditional linear or RBF kernel is used on samples. \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset name & \# tr. samples & \# test samples & \# features & \# classes \\ \hline Crop & 7200 & 16800 & 46 & 24 \\ NonInvasiveFetalECGThorax1 & 1800 & 1965 & 750 & 42 \\ ShapesAll & 600 & 600 & 512 & 60 \\ \hline \hline \end{tabular} \end{table} Table 3: The time series classification datasets. NonInvasiveFetalECGThorax1 ("Thorax"), and ShapesAll, as detailed in Table 3. These datasets are associated with multi-class classification tasks, and we use the one-hot encoding of the labels as the vector-valued output to perform the feature selection with ProjSe. As before, we consider linear, polynomial and RBF kernels. We assess the success of the feature selection task by performing classification with SVM with the selected features - here we consider both linear and RBF kernels on the data samples. The results are displayed in Figure 6, where both kernel alignment (\(KA(\mathbf{K},\mathbf{K}^{\prime})=\langle\mathbf{K}_{c},\mathbf{K}^{\prime}_{ c}\rangle_{F}/(\|\mathbf{K}_{c}\|_{F}\|\mathbf{K}^{\prime}_{c}\|_{F})\) where \(c\) denotes centering) to the linear kernel on the one-hot-encoded outputs, and accuracy of SVM classification are shown. The different kernels used in feature selection give slightly different features; however the performance on the subsequent classification task is mostly dependent on which kernel is used on the samples. Especially for Thorax and ShapesAll datasets with higher dimensionality, it can be seen that all the ProjSe results with linear SVM outperform using the full set of features. ### Experiments with two-view data In our last set of experiments, we consider the following datasets: * MNIST handwritten digits [11, 12]: This dataset contains 60000 training and 10000 test examples of handwritten digits in greyscale. The number of pixels in each image is \(28\times 28=784\), resulting in total to 784 variables. To construct the two sets of variables, the image columns are split into half similarly as in [1]. Thus both views comprise of 392 variables. * MediaMill dataset [13]: This dataset contains 43907 examples which are extracted from keyframes of video shots. There are two views in this data: text annotations (101 variables) and visual features (120 variables). * Cifar100 dataset [14]: This dataset, chosen to demonstrate the scalability of ProjSe, contains 50000 training and 10000 test examples of color images. The number of pixels in each image is \(32\times 32=1024\), where to each pixel 3 colors are assigned. The examples belong to 100 classes, where each class contains 500 training and 100 test examples. The classes are represted by indicator vectors. 
We perform variable selection independently on both views in the datasets. After the variable selection is performed on both sides, we compute canonical correlations between all subset pairs of the extracted variables, starting from the first ones and incrementally growing them to the entire sets. To demonstrate the performance of the proposed variable selection algorithm ProjSe, it is compared to the following methods: large-scale sparse kernel canonical correlation analysis (GradKCCA), [23], deep canonical correlation analysis (DCCA), [1], randomized non-linear CCA (RCCA), [11], kernel non-linear orthogonal iterations (KNOI), [15], and CCA through Hilbert-Schmidt independence criterion (SCCA-HSIC), [23]. These CCA variants explicitly or implicitly rely on singular value decomposition, and their performance highly depends on the distribution of the singular values of the data matrices. Since the data matrices have small number of dominating singular values, we expect from a variable selection method that it can capture a relatively small set of variables to reproduce similar accuracy, measured in canonical correlation. We need to bear in mind that the CCA-based \begin{table} \begin{tabular}{c|c|c} \hline \hline & MediaMill & Cifar 100 \\ & Linear RBF & Linear RBF \\ \hline Computing \(K_{yx}\) & 0.015 0.019 & 0.221 0.470 \\ Computing \(K_{yy}\) & 0.007 0.015 & 0.006 0.025 \\ Eigen decomp. of \(K_{yy}\) & 0.003 0.001 & 0.001 0.001 \\ Centralization of \(\mathbf{K}_{xx}\)1 & 0.005 0.014 & 1.462 2.065 \\ 10 variables & 0.035 0.045 & 1.848 2.640 \\ 20 variables & 0.024 0.048 & 1.793 2.657 \\ 50 variables & 0.037 0.050 & 1.804 2.671 \\ 100 variables & 0.049 0.061 & 1.833 2.706 \\ \hline \hline \multicolumn{3}{l}{\({}^{a}\)Optional} \\ \end{tabular} \end{table} Table 4: The detailed computation times in seconds for the variable selection method, where 10, 20, 50 and 100 variables are extracted. methods add up all information represented by the full collection of variables, however the projector based selection only relies a relatively small subset of those variables. Figure 7 shows the performance of ProjSe on MNIST and MediaMill datasets with both linear and Gaussian kernels, as functions of the number of selected variables. The results are measured by canonical correlation between the subsets of variables selected from the two views. Comparing the results to the other CCA methods in Table 5 (taken from [20]), we observe ProjSe obtaining comparable performance after 20 or 50 selected variables, while being orders of magnitude faster than the other methods. Since ProjSe is fully deterministic, there is no variance is reported for it. The heaviest computation for ProjSe is in the beginning when eigenvalue decomposition of \(\mathbf{K}_{YY}\) is calculated (see Table 4). Thus the running time varies only minimally when different number of variables is selected. This is also demonstrated in 4 where the running times for MNIST and Cifar100 datasets are detailed. As a variable selection method, we are interested in evaluating the stability of the selection process. In order to measure this, we here consider the stability index [21] and the average Pearson correlation of relevance [17]. In both measures, higher values indicate higher stability; the maximum value in both is 1. First the number of extracted variables is chosen from \((1,2,\ldots,5)\) in MNIST, and from \((10,20,\ldots,50)\) in MediaMill. 
For each number of selected variables the subsamples are taken with the following percentage of the entire training sets: \((10\%,20\%,\ldots,50\%)\). Then random subsets are extracted \(10\) times of the size given above. The scores are computed for each number of selected variables, and for each subsample size and finally averaged on all random samples. They are shown in Figure 8, where the averages for all pairs of subsets of variables and for all subsample sizes are presented. \begin{table} \begin{tabular}{c|c c|c c} \hline & \multicolumn{2}{c|}{MNIST} & \multicolumn{2}{c}{MediaMill} \\ & \(\rho_{\text{TEST}}\) & TIME (s) & \(\rho_{\text{TEST}}\) & TIME (s) \\ \hline Generic CCA & 0.923 & 2.40 & 0.675 & 0.429 \\ \hline GradKCCA & 0.952 \(\pm\) 0.001 & 56 \(\pm\)6 & 0.657 \(\pm\) 0.007 & 8 \(\pm\)4 \\ DCCA & 0.943 \(\pm\) 0.003 & 4578 \(\pm\)203 & 0.633 \(\pm\) 0.003 & 1280 \(\pm\)112 \\ RCCA & 0.949 \(\pm\) 0.010 & 78 \(\pm\)13 & 0.626 \(\pm\) 0.005 & 23 \(\pm\)9 \\ KNOI & 0.950 \(\pm\) 0.005 & 878 \(\pm\)62 & 0.645 \(\pm\) 0.003 & 218 \(\pm\)73 \\ SCCA-HSIC & 0.934 \(\pm\) 0.006 & 5611 \(\pm\)193 & 0.625 \(\pm\) 0.002 & 1804\(\pm\)143 \\ \hline ProjSe 10 var. & 0.847 & & 0.542 & \\ ProjSe 20 var. & 0.890 & & 0.586 & \\ ProjSe 50 var. & 0.918 & & 0.631 & \\ ProjSe 100 var. & 0.935 & 0.451 & 0.672 & 0.041 \\ \hline \end{tabular} \end{table} Table 5: CCA comparison on the MNIST and MediaMill datasets. Figure 7: Variable selection results w.r.t the number of selected variables. ## 6 Conclusion In this paper we introduced a novel variable selection method for two-view settings. Our method is deterministic, selecting variables based on correlation defined with projection operators. The kernelised formulation of our approach paves way for efficient and highly scalable implementation, allowing the application of our method to datasets with millions of data samples. We empirically demonstrated this efficiency and the suitability of our approach for feature selection task, with both synthetic and real data. ## Declarations The authors wish to acknowledge the financial support by Academy of Finland through the grants 334790 (MAGITICS), 339421 (MASF) and 345802 (AIB), as well as the Global Programme by Finnish Ministry of Education and Culture
2302.02857
Topological Analysis of Temporal Hypergraphs
In this work we study the topological properties of temporal hypergraphs. Hypergraphs provide a higher dimensional generalization of a graph that is capable of capturing multi-way connections. As such, they have become an integral part of network science. A common use of hypergraphs is to model events as hyperedges in which the event can involve many elements as nodes. This provides a more complete picture of the event, which is not limited by the standard dyadic connections of a graph. However, a common attribution to events is temporal information as an interval for when the event occurred. Consequently, a temporal hypergraph is born, which accurately captures both the temporal information of events and their multi-way connections. Common tools for studying these temporal hypergraphs typically capture changes in the underlying dynamics with summary statistics of snapshots sampled in a sliding window procedure. However, these tools do not characterize the evolution of hypergraph structure over time, nor do they provide insight on persistent components which are influential to the underlying system. To alleviate this need, we leverage zigzag persistence from the field of Topological Data Analysis (TDA) to study the change in topological structure of time-evolving hypergraphs. We apply our pipeline to both a cyber security and social network dataset and show how the topological structure of their temporal hypergraphs change and can be used to understand the underlying dynamics.
Audun Myers, Cliff Joslyn, Bill Kay, Emilie Purvine, Gregory Roek, Madelyn Shapiro
2023-02-06T15:25:05Z
http://arxiv.org/abs/2302.02857v1
# Topological Analysis of Temporal Hypergraphs ###### Abstract In this work we study the topological properties of temporal hypergraphs. Hypergraphs provide a higher dimensional generalization of a graph that is capable of capturing multi-way connections. As such, they have become an integral part of network science. A common use of hypergraphs is to model events as hyperedges in which the event can involve many elements as nodes. This provides a more complete picture of the event, which is not limited by the standard dyadic connections of a graph. However, a common attribution to events is temporal information as an interval for when the event occurred. Consequently, a temporal hypergraph is born, which accurately captures both the temporal information of events and their multi-way connections. Common tools for studying these temporal hypergraphs typically capture changes in the underlying dynamics with summary statistics of snapshots sampled in a sliding window procedure. However, these tools do not characterize the evolution of hypergraph structure over time, nor do they provide insight on persistent components which are influential to the underlying system. To alleviate this need, we leverage zigzag persistence from the field of Topological Data Analysis (TDA) to study the change in topological structure of time-evolving hypergraphs. We apply our pipeline to both a cyber security and social network dataset and show how the topological structure of their temporal hypergraphs change and can be used to understand the underlying dynamics. + Footnote †: Information release number: PNNL-SA-181478 ## 1 Introduction Complex networks are a natural tool for studying dynamical systems where elements of the system are modeled in a dyadic way and evolve over time. There are many real-world examples, such as social networks [28], disease spread dynamics [18], manufacturer-supplier networks [31], power grid networks [26], and transportation networks [9]. The underlying complex dynamical systems driving these networks cause temporal changes to their structure, with connections and elements added and removed as the dynamical system changes. We can summarize this category of complex network as dynamical networks [17] where the resulting graph is a temporal graph with temporal attributes associated to each connection and/or element of the complex network. While temporal networks are useful in understanding systems with dyadic relations between elements, the complex network is not always satisfactory for modeling the relationship between multiple entities [11]. For data with multi-way relations that cannot be described by dyadic connections, hypergraphs capture richer information about community structure. For example, in Section 3.1 we explore a hypergraph built from Reddit data (PAPERCRANE [5]) on threads about COVID-19. A dyadic model, where an edge links two users if and only if they posted in the same thread, loses all information about thread size. In contrast, a hypergraph, where each thread is an edge and a user is in a thread if and only if they posted in that thread, retains the total structure of the data. In this way, hypergraph analytics are a powerful tool when higher order structure is of interest. Some instances where hypergraphs have been useful include human gene sets [19, 12] where genes interact in complex combinations, cyber data [19] with the domain name systems mapping multiple domains and IPs, and social networks with interactions between large groups [11]. 
In many use cases, individual snapshots of a complex system are less important than analysis of how the system _changes_. Often, these networks are further improved by modeling them as Temporal HyperGraphs (THG) in the same way as temporal graphs, with temporal attributes (e.g., intervals or times) associated to the multi-way connections and elements. Examples can be found in many settings, such as anomaly detection in automotive data (CAN bus) [16] and cybersecurity using the Operationally Transparent Cybersecurity data we consider in Section 3.2[15]. Many common tools for studying the characteristics of THGs are based on summary statistics of the underlying hypergraph snapshots. These statistics provide insight to dynamic changes in the structure of the underlying hypergraph. For example, in [8], Cencetti _et al._ studied temporal social networks as hypergraphs and were able to measure the burstiness of multi-way social interactions using a burstiness statistic. While statistics such as this can be informative for change detection and insights into the system dynamics, they are lacking in their ability to interpret changes in the structure of the temporal hypergraph. Another approach for studying temporal hypergraphs is through visual analytics. In [13], a matrix based visual analytics tool was designed for temporal hypergraph analysis which provides insights into the dynamic changes of the hypergraph. However, visualization tools are naturally limited in their ability to be automatically interpreted and often require expertise to properly understand. What distinguishes hypergraphs from graphs is that hyperedges come not only in arbitrary sizes, but also connected into arbitrarily complex patterns. As such, they can actually have a complex mathematical topology1 as complex "gluings" of multi-dimensional objects which can have a complex shape and structure. Studying the topology of hypergraphs is a becoming an increasingly large area, frequently exploiting their representation as Abstract Simplicial Complexes (ASCs). Footnote 1: Notice we use “topology” here in the formal sense, as distinct from how this is used informally in graph applications to refer to connectivity patterns in networks. The field of Topological Data Analysis (TDA) [10, 33] aims to measure the shape of data. Namely, data is represented as an ASC, whose homology is then measured to detect overall topological shape, including components, holes, voids, and higher order structures. However, this often requires the choice of parameters to generate the ASC from the data, which is typically nontrivial. Another, more automatic, approach for measuring the shape of data is to use persistent homology from TDA. This method for studying the shape of data extracts a sequence of ASCs from the data, which is known as a filtration. Persistent homology has been successfully applied to a wide application domains, including manufacturing [20, 32], biology [4], dynamical systems [22, 29], and medicine [27]. Many of the applications either represent the data as point clouds or graphs. For point cloud data, filtrations are commonly generated as a collection of Vietoris-Rips complexes [10] determined by identifying points within a Euclidean distances of an increasing radius parameter. For graph data a similar process would be to use a distance filtration with the shortest path distance [22, 3]. Hypergraphs have also been studied using tools from TDA. 
Namely, the work in [14] shows how the homology of hypergraphs can be studied using various ASC representations such as the associated ASC [25] or the relative/restricted barycentric subdivision. However, a requirement for applying persistent homology is that there is a monotonic inclusion mapping between subsequent members of a sequence of ASCs (i.e., each subsequent ASC in the sequence has its previous as a subset). Many sequences of ASCs associated with data sets are not monotonic, however, we still want to track their changing structure. This is commonly true for temporal data, where, for example, hypergraph edges can appear and then disappear over time, which would break the monotonicity requirement for persistent homology. To solve this problem, zigzag persistence [7] can be applied. Instead of measuring the shape of static point cloud data through a distance filtration (e.g., a Vietoris-Rips filtration), zigzag persistence measures how long a topology generator persists in a sequence of ASCs by using a an alternating sequence of ASCs, called a "zigzag filtration". Both PH and zigzag persistence track the formation and disappearance of the homology through a persistence diagram or barcode as a two-dimensional summary consisting of persistence pairs \((b,d)\), where \(b\) is the birth or formation time of a generator of a "hole" of a certain dimension, and \(d\) is its death or disappearance time. For example, in [30] the Hopf bifurcation is detected through zigzag persistence of Vietoris-Rips complexes over sliding windows using the one-dimensional homology. Another recent application [23] studies temporal networks, where graph snapshots were represented as ASCs using the Vietoris-Rips complex with the shortest path distance. However, both of these methods require a distance parameter to be selected to form the ASC at each step, which is typically not a trivial choice. The resulting persistence barcodes from zigzag persistence can also be vectorized using methods such as persistence images [1] or persistence landscapes [6]. This allows for the resulting persistence diagrams to be analyzed in automatic methods using machine learning for classification or regression. In this work we leverage zigzag persistence to study THGs. By measuring the changing structure of the temporal hypergraph through an ASC representation of the hypergraph, we are able to detect the formation, combination, and separation of components in the hypergraph as well as higher dimensional features such as loops or holes in the hypergraph and voids. The detection of these higher dimensional features is critical for temporal hypergraph analysis as they may be of consequence depending on the application domain. Additionally, in comparison to creating an abstract ASC from point cloud or graph data, no distance parameter needs to be chosen as there are natural representations of a hypergraph as an ASC [14]. In Section 2 of this paper we introduce THGs and an ASC representation of hypergraphs as well as an overview of zigzag persistence and how these are incorporated into our method of applying zigzag persistence for studying THGs. This section also includes a toy example demonstrating each step in the pipeline. In Section 3 we demonstrate how our method can be applied to two data sets drawn from social networks and cyber data. Lastly, in Section 4 we provide conclusions and future work. 
## 2 Method and Background In this section the method for studying temporal hypergraphs using zigzag persistence is developed alongside the necessary background material. Our method is a confluence of zigzag persistence and the ASC representation of hypergraphs for the topological analysis of THGs. Namely, we develop a pipeline for applying zigzag persistence to study changes in the shape of a temporal hypergraph using a sliding window procedure. This pipeline is outlined in Fig. 1. We begin with a temporally edge-attributed hypergraph in Fig. 1-_Temporal Hypergraph_, where each edge has active intervals associated to it as described in Section 2.1. Next, we use a Fig. 1-_Sliding Window_ procedure, where we choose a window size \(w\) and shift \(s\) that is slid along the time domain of the set of intervals in discrete steps. Using each sliding window, we generate Fig. 1-_Hypergraph Snapshots_ at each window, which is described in Section 2.2. We then represent each snapshot as a Fig. 1-_ASC_ using the associated ASC in Section 2.3. Next, we introduce simplicial homology for studying the shape of an ASC in Section 2.4. This leads to the method for relating the homology within a sequence of ASCs known as zigzag persistent homology in Section 2.5, which is used for calculating the persistent homology of the temporal hypergraph represented as a barcode of persistent diagram (Fig. 1-_Barcodes_). To illustrate our procedure we provide a simple example throughout each step in the pipeline. For the example and the remaining results we use the Python packages HyperNetX4 to generate the hypergraphs and Dionysus25 to calculate the zigzag persistence. Footnote 4: HyperNetX: [https://pnnl.github.io/HyperNetX](https://pnnl.github.io/HyperNetX) Footnote 5: Dionysus2: [https://mrzv.org/software/dionysus2/](https://mrzv.org/software/dionysus2/) ### Temporal Hypergraphs A graph \(G(V,E)\) is composed of a set of vertices connected using a set of edges with \(E\subseteq\binom{V}{2}\). A hypergraph \(H(V,E)\) is composed of a set of vertices \(V\) and a family of edges \(E\), where for each \(E_{i}\in E,E_{i}\subseteq V\). In this way a hypergraph can capture a connection between \(k\) vertices as a \(k\)-edge. For example, consider the toy hypergraph in Fig. 1(a) with four nodes \(V=\{A,B,C,D\}\) and five hyperedges \(E=\{E_{1},E_{2},E_{3},E_{4},E_{5}\}\). These hyperedges in the example range in size from edge \(E_{2}=(D)\) as a 1-edge to edge \(E_{4}=(A,B,C)\) as a 3-edge. A temporal hypergraph \(H(V,E,T)\) is a replica of its underlying static hypergraph with the addition of temporal attributes \(T\) associated to either the vertices, edges, or incidences. An attribute to an incidence occurs when the temporal information associated to a node is relative to the hyperedge. In this work we only use temporal information attributed Figure 1: Pipeline for applying zigzag persistence to temporal hypergraphs. to the edges. However, our pipeline could be adapted to any or all of the three temporal attribution types. Returning to our toy example hypergraph \(H\) in Fig. 1(a), we include temporal information as a set of intervals associated to the time when each edge is active (e.g., \(E_{2}\) is active for the point interval \([0,0]\) and interval \([7,8]\)). ### Sliding Windows for Hypergraph Snapshots The sliding window procedure is a ubiquitous part of signal processing, in which a time series or signal is segmented into discrete windows that slide along its time domain. 
Specifically, given a time domain \([t_{0},t_{f}]\), window size \(w\), and shift \(s\), we create a set of windows that cover the time domain interval as \[\mathcal{W}=\{[t_{0},t_{0}+w],[t_{0}+s,t_{0}+s+w],[t_{0}+2s,t_{0}+2s+w],\ldots, [t_{0}+\ell s,t_{0}+\ell s+w]\}, \tag{1}\] The window size and shift should be such that \(s\leq w\). In this way the union of all windows covers the entire domain and adjacent windows do not have a null intersection. For each sliding window \(W_{i}\in\mathcal{W}\) we create a sub-hypergraph snapshot using an intersection condition between the sliding window interval \(W_{i}\) and the collection of intervals associated to each edge in the temporal hypergraph. The intervals are considered closed intervals in this work. This procedure is done by including an edge if there is a nonempty intersection between the edge's interval set and the sliding window interval \(W_{i}\). We formalize this as \[H_{i}=\{E_{j}\in E\mid I(E_{j})\cap W_{i}\neq\emptyset\}, \tag{2}\] where \(E_{j}\in E\) is an edge in the set of edges of the static hypergraph and \(I(E_{j})\) is the interval collection for edge \(E_{j}\). The resulting sub-hypergraph snapshot collection of \(\mathcal{W}\) is \[\mathcal{H}=\{H_{0},H_{1},\ldots,H_{t},\ldots,H_{\ell}\}.\] We can cast this collection as a discrete dynamical process \(H_{t}\mapsto H_{t+1}\) to gain understanding of the underlying system's dynamics. Figure 3: Sequence of sub-hypergraphs \(\mathcal{H}\) from the sliding window procedure with corresponding ASCs. Figure 2: Toy example temporal hypergraph. To demonstrate the sliding window procedure for getting hypergraph snapshots we use the toy example temporal hypergraph from Fig. 2 and window parameters \(w=2\) and \(s=2\). Using these parameters we get the sliding windows as \[\mathcal{W}=\{[0,2],[2,4],[4,6],[6,8],[8,10]\}.\] Hypergraphs from each window are generated as subsets of the static \(H\) depending on the overlap of the window and the activity intervals associated to each edge. For example, window \(W_{2}=[4,6]\) has the hypergraph \(H_{2}\) with edges \(\{E_{1},E_{3},E_{5}\}\) based on the overlap between \(W_{2}\) and the collection of intervals of each edge shown in Fig. 2(b). Additionally, each hypergraph now has both an index and a time associated to it. The index is as was previously stated (e.g., \(H_{2}\) has index 2) and the time is the average time of the corresponding window (e.g., \(W_{2}\) has an average time of \((4+6)/2=5\)). Applying this hypergraph snapshot procedure using the sliding windows we get the five hypergraphs shown in Fig. 3. ### Associated ASC of a Hypergraph An ASC \(K\) is a collection of simplices, where a simplex \(\sigma\subseteq P\) is a subset of \(n\) points from a set of points \(P\) and has simplex dimension \(n-1\). This results in points (1-edge) as 0-simplices, lines (2-edge) as 1-simplices, triangles (3-edge) as 2-simplices, etc. We denote the simplex \(\sigma\) as a face of \(\tau\) if \(\sigma\subseteq\tau\) with \(\tau\) as another simplex. Additionally, \(K\) is required to be closed under the face relation: every face of a simplex in \(K\) (i.e., every element of the power set of the simplex) must also belong to \(K\). The dimension of an ASC is the dimension of the largest simplex. ASCs are often used to represent geometric structures and as such are referred to as geometric simplicial complexes. However, we can also refer to them as abstract simplicial complexes for purely combinatorial purposes.
We can generate the associated ASC of a hypergraph [25] using the simplices associated to each hyperedge and building the closure under face relations, which is the power set of each hyperedge. To apply zigzag persistence to study the changing topology of our hypergraph snapshots, we need to first represent our collection of hypergraph snapshots \(\mathcal{H}\) as a sequence of ASCs \(\mathcal{K}\) which will later be used to create the zigzag persistence module. While there are several methods for representing a hypergraph as an ASC [14], we leverage an adaptation of the associated ASC method from [25]. The associated ASC of a hypergraph \(H\) is defined as \[K(H)=\{\sigma\in\mathcal{P}(E_{i})\setminus\emptyset\mid E_{i}\in E\}, \tag{3}\] where \(E\) is the edge set of the hypergraph \(H\), \(E_{i}\in E\), and \(\mathcal{P}(E_{i})\) is the power set of \(E_{i}\). Equation 3 provides a first starting point for calculating the zigzag persistence, however, it is computationally cumbersome. Specifically, for a large \(k\)-edge the computational requires \[\sum_{j=0}^{k}\binom{k+1}{j+1}=2^{k+1}-1\] subsimplices. However, the computation of homology of dimension \(p\) only requires simplices of size \(p+1\) to be included in the ASC. As such, we define the _modified associated ASC_ as \[K(H,p)=\{\sigma\in\mathcal{P}_{p+1}(E_{i})\setminus\emptyset\mid E_{i}\in E\}, \tag{4}\] where \(\mathcal{P}_{p+1}\) is the modified power set to only include elements of the set up to size \(p+1\) or \(\binom{E_{i}}{p+1}\). The modified associated ASC reduces the computational demand by only requiring \[\sum_{j=0}^{p+1}\binom{k+1}{j+1}\] subsimplices for a \(k\)-edge. Applying Eq. (4) to each hypergraph in \(\mathcal{H}\) allows us to get a corresponding sequence of ASCs as \(\mathcal{K}\). For the hypergraph snapshots \(\mathcal{H}\) shown in Fig. 3 the modified associated ASCs \(\mathcal{K}\) are shown in Fig. 4. ### Simplicial Homology Simplicial homology is an algebraic approach for studying the shape of an ASC by counting the number of \(p\)-dimensional holes, where \(p=0\) are connected components, \(p=1\) are graph triangles, \(p=2\) are three-dimensional hollow tetrahedrons, and so on. We can represent the collection of \(p\)-dimensional holes of an ASC \(K\) as the Betti vector \(\beta(K)=[b_{0},b_{1},b_{2},\ldots]\), where \(b_{p}\) is the number of \(p\)-dimensional holes known as a Betti number. In this work we do not overview the details on how the Betti numbers are calculated, but we direct the reader to [24, 21] for a formal introduction. By calculating the Betti numbers for our sequence of ASCs in Fig. 4, we get the Betti vectors in Fig. 5. These Betti numbers are informative on the changing topology of the hypergraph snapshots in Fig. 3; however, they do not capture information on how the topology between the snapshots are related. For example, by observation of the hypergraph snapshots we know that there is one main component that persists through the entire sequence of ASCs, but this information can not be known directly from the Betti numbers. The Betti numbers do not tell the complete story of this component persisting the whole time. While they do tell us there is at least one component in each snapshot, these components do not necessarily need to be the same component in each snapshot to get the same Betti vectors. As such, we need to use a method to track how the homology is changing and related between the sequence of ASCs. To do this we implement zigzag persistent homology. 
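Putting the constructions above together, the window generation (Eq. (1)), the snapshot rule (Eq. (2)), and the modified associated ASC (Eq. (4)) admit a short sketch. It reuses the `edges` and `intervals` dictionaries from the earlier toy-example sketch and is intended only as an illustration of these definitions, not as an optimized implementation.

```python
from itertools import combinations

def sliding_windows(t0, tf, w, s):
    """Windows [t0 + k*s, t0 + k*s + w] covering the time domain (Eq. (1))."""
    windows, start = [], t0
    while start < tf:
        windows.append((start, start + w))
        start += s
    return windows

def snapshot(edge_intervals, window):
    """Edges whose closed interval set intersects the window (Eq. (2))."""
    c, d = window
    return {e for e, ivals in edge_intervals.items()
            if any(a <= d and c <= b for (a, b) in ivals)}

def modified_associated_asc(edge_sets, p):
    """Associated ASC truncated to faces of at most p + 1 vertices (Eq. (4))."""
    simplices = set()
    for verts in edge_sets:
        verts = sorted(verts)
        for size in range(1, min(len(verts), p + 1) + 1):
            simplices.update(combinations(verts, size))
    return simplices

# Toy example with w = s = 2 over [0, 10].
windows = sliding_windows(0, 10, w=2, s=2)                      # [(0,2), ..., (8,10)]
snapshots = [snapshot(intervals, W) for W in windows]           # active edge sets per window
ascs = [modified_associated_asc([edges[e] for e in S], p=1) for S in snapshots]
```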
### Zigzag Persistent Homology This section provides a conceptual introduction to persistent homology and how it generalizes to zigzag persistent homology. We suggest [24, 21] for a detailed introduction to persistent homology. Persistent homology [33], a filtration tool from the field of Topological Data Analysis (TDA) [10, 33], is used to gain a sense of the shape and size of a dataset at multiple dimensions and filtration values. For example, it can measure connected components (dimension zero), holes (dimension one), voids (dimension two), and higher dimensional analogues, as well as an idea of their general size or geometry. Persistent homology measures these shapes using a parameterized filtration to detect when homology groups are born (appear) and die (disappear). To compute persistent homology a parameterization function is applied to the dataset to create a nested sequence of ASCs \[K_{0}\subseteq K_{1}\subseteq K_{2}\subseteq\ldots\subseteq K_{n}. \tag{5}\] Figure 4: Sequence of associated ASCs from hypergraph snapshots in Fig. 3. Figure 5: Betti numbers for ASCs in Fig. 4. We can then calculate the homology of dimension \(p\) for each complex, \(H_{p}(K_{i})\), which is a vector space representing the \(p\)-dimensional structure of the space such as components, holes, voids, and higher dimensional features. However, this information does not yet yield how the homology of each ASC is related to the next ASC. To get this information, persistent homology uses the inclusions on the ASCs to induce linear maps on the vector spaces resulting in a construction called a persistence module \(\mathcal{V}\): \[H_{p}(K_{\alpha_{0}})\hookrightarrow H_{p}(K_{\alpha_{1}})\hookrightarrow H_{p}(K_{\alpha_{2}})\hookrightarrow\ldots\hookrightarrow H_{p}(K_{\alpha_{n}}), \tag{6}\] where \(\hookrightarrow\) are the maps induced by the inclusion maps between ASCs. It should be noted that in the sequence of ASCs, each vertex must be unique and consistently identified. The appearance and disappearance of classes at various dimensions in this object can be tracked, resulting in a summary known as a persistence barcode (alternatively a persistence diagram) \(\mathcal{D}=\{D_{0},D_{1},\ldots,D_{p}\}\). For each homology generator which appears at \(K_{b}\) and disappears at \(K_{d}\), we draw an interval \([b,d]\) in the barcode. Taken together, these intervals form the persistence barcode, which is the collection of persistence intervals (also called persistence pairs in the persistence diagram). This persistent homology framework can be applied to study hypergraphs directly, where a persistence module \(\mathcal{V}\) is generated from a hypergraph, as described in [25], by generating a sequence of subset ASC representations of a hypergraph. However, a limitation of persistent homology is that it requires each ASC to be a subset of the subsequent ASC to form the persistence module, as shown in Eq. (5), which means at each step we are not allowed to remove simplices in the next ASC. There are many cases of real-world applications where we have a parameterized sequence of ASCs in which simplices can both enter and exit the complex throughout the sequence. To alleviate this issue zigzag persistence [7] can be applied, which allows for arbitrary subset directions in the ASC sequence: \[K_{0}\leftrightarrow K_{1}\leftrightarrow K_{2}\leftrightarrow\ldots \leftrightarrow K_{n}, \tag{7}\] where \(\leftrightarrow\) denotes one of the two inclusion maps: \(\hookrightarrow\) or \(\hookleftarrow\).
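In practice we compute such zigzag modules with the Dionysus2 package mentioned above. A minimal sketch is given below: it takes an arbitrary sequence of ASCs (for instance, the union-interwoven sequence constructed in the next paragraph), records for each simplex the steps at which it enters and leaves the sequence, and passes this bookkeeping to `zigzag_homology_persistence`. The call signature follows the Dionysus2 documentation; the bookkeeping here is deliberately simplified and not optimized.

```python
import dionysus as d

def zigzag_barcodes(complexes):
    """complexes: a list of ASCs, each a set of simplices given as vertex tuples."""
    all_simplices = sorted(set().union(*complexes), key=lambda s: (len(s), s))
    vid = {v: i for i, v in enumerate(sorted({v for s in all_simplices for v in s}))}
    f = d.Filtration([d.Simplex([vid[v] for v in s]) for s in all_simplices])
    # times[j] lists, in order, the steps at which the j-th simplex of f enters
    # and leaves the sequence (alternating entry/exit), as Dionysus2 expects.
    times = []
    for s in all_simplices:
        events, inside = [], False
        for step, K in enumerate(complexes):
            if (s in K) != inside:
                events.append(step)
                inside = not inside
        times.append(events)
    zz, dgms, cells = d.zigzag_homology_persistence(f, times)
    return dgms  # dgms[p] holds the dimension-p persistence pairs (birth, death)
```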
A common special case of this definition is where the left and right inclusions alternate or zigzag. For most data analysis applications using zigzag persistence we artificially construct a sequence of ASCs of this form by interweaving the original ASCs with either unions or intersections of adjacent ASCs. For example, in Fig. 6a we use the union between the associated ASCs of the original hypergraph snapshots from Fig. 3. This sequence of interwoven ASCs fulfills the criteria of the zigzag inclusion map directions as \[K_{0}\hookrightarrow K_{0,1}\hookleftarrow K_{1}\hookrightarrow K_{1,2}\hookleftarrow K_{2}\hookrightarrow\ldots\hookleftarrow K_{\ell-1}\hookrightarrow K_{\ell-1,\ell}\hookleftarrow K_{\ell}. \tag{8}\] for unions or \[K_{0}\hookleftarrow K_{0,1}\hookrightarrow K_{1}\hookleftarrow K_{1,2}\hookrightarrow K_{2}\hookleftarrow\ldots\hookrightarrow K_{\ell-1}\hookleftarrow K_{\ell-1,\ell}\hookrightarrow K_{\ell} \tag{9}\] for intersections, where \(K_{i,i+1}=K_{i}\cup K_{i+1}\) in the union case and \(K_{i,i+1}=K_{i}\cap K_{i+1}\) in the intersection case. The inclusion maps are extended to linear maps between homology groups resulting in the zigzag persistence module tracking the changing homology of Eq. (8) or (9), just as was the case for standard persistent homology. Focusing on the case of the union, the zigzag persistent homology module is \[H_{p}(K_{0})\hookrightarrow H_{p}(K_{0,1})\hookleftarrow H_{p}(K_{1})\hookrightarrow H_{p}(K_{1,2})\hookleftarrow H_{p}(K_{2})\hookrightarrow\ldots\hookleftarrow H_{p}(K_{n-1})\hookrightarrow H_{p}(K_{n-1,n})\hookleftarrow H_{p}(K_{n}). \tag{10}\] The same algebra that leverages the linear maps between homology groups to track persistence pairs for a standard filtration in persistent homology makes it possible to compute where (when) homology features are born and die based on the zigzag persistence module; however, some of the intuition is lost. Namely, we can again track the persistent homology using a persistence diagram \(D=\{D_{0},D_{1},\ldots,D_{p}\}\) consisting of half-open intervals (persistence pairs) \([b,d)\); however, we now use the indices of the ASCs as the birth and death times instead of the filtration parameter. For example, if there is one-dimensional homology (i.e., a loop) that appears at \(K_{2}\) and persists until it disappears at \(K_{3}\), we represent this as the persistence pair (2,3). In the case of a class appearing or disappearing at the union (or intersection) complex \(K_{i,i+1}\), we use the half index pair \(i,i+1\). If a topological feature persists in the last ASC in the zigzag persistence module we set its death past the last index with the pair \(\ell,\ell+1\), where \(\ell\) is the number of ASCs (without interwoven unions or intersections). To demonstrate how zigzag persistence tracks the changing topology in a sequence of ASCs we use the simple sequence of ASCs in Fig. 4, which were derived from the toy example in Fig. 2 using the sliding window procedure outlined in Section 2.2. As a first example of the application of zigzag persistence to study temporal hypergraphs we return to our toy example. We used the unions between ASCs to get the ASCs shown as \([K_{0},K_{0,1},K_{1},\ldots,K_{3,4},K_{4}]\) in Fig. 5(a) and the resulting zigzag persistence barcodes in Fig. 5(b). For this example we are only investigating the topological changes in dimensions \(0\) and \(1\) since there are no higher dimensional features. There are two main changes in the homology of the ASCs that are captured in the persistence barcodes. For dimension \(0\), we are tracking the connected components and how they relate.
At \(K_{0}\) we have two connected components (the \(2\)-simplex as the triangle and the \(0\)-simplex as the point). As such, we set the birth of the two components at the index at which they both appear: \(0\). Next, at \(K_{0,1}\) the components combine as two conjoined \(2\)-simplices. The joining of components forces one of the components to die while the other persists; the smaller of the two components (the \(0\)-simplex) dies at the index \(0,1\) with persistence interval \((0,(0,1))\) shown in the \(D_{0}\) barcode of Fig. 5(b). The combined component never separates or combines with another component again and therefore it persists for the remainder of the persistence module, finally dying after \(K_{4}\) at index \(4,5\) (shown as the dashed red line) with the persistence interval \((0,(4,5))\) in \(D_{0}\). Moving to dimension \(1\), we are now interested in showing how the persistence barcode captures the formation and disappearance of loops in the persistence module. A loop is first formed in \(K_{2}\) and persists until \(K_{3}\). Therefore, this feature is represented as the persistence interval \((2,3)\) in \(D_{1}\) of Fig. 5(b). This example highlights how zigzag persistence captures changes in the topology of a sequence of ASCs. Figure 6: Zigzag persistence module and resulting barcodes for dimensions \(0\) and \(1\) for the toy example introduced in Fig. 2. Figure 7: Sequence of ASCs from the sliding window hypergraph snapshots for both unions and intersections, with the zigzag persistence barcodes for the temporal hypergraph example with time-associated ASCs. In this work we are interested in the analysis of temporal hypergraphs, and as such we instead want to have the barcodes track the time at which homology appears and disappears instead of the indices. To do this we substitute the index for the average time of the window associated to each ASC as shown in Fig. 7. For the intermediate ASCs (unions or intersections) we use the average time of the two windows. The only difference between the ASC sequence in Fig. 5(b) and Fig. 6(b) is that Fig. 6(b) has the times from the windows associated to the ASCs when computing the zigzag persistence. As such, the persistence barcode has time on the horizontal axis with the two intervals in \(D_{0}\) and one in \(D_{1}\) having the same sources (generators) as described in Fig. 5(b). The resulting barcodes in Fig. 7 show that both the intersection and union methods for interweaving ASCs provide similar barcodes. We also found this same result when applying zigzag persistence to the data sets studied in this work. For the remainder of this work we will use the union method for studying temporal hypergraphs using zigzag persistence. ## 3 Applications ### Social Network Analysis To demonstrate the functionality of analyzing temporal hypergraph data through zigzag persistence we use Reddit data with COVID-related subreddits. This data is known as the PAPERCRANE dataset [5]. The dataset subset we use spans from 1/20/20 to 3/31/20, which captures the initial formation of the subreddits during the onset of COVID-19. The active subreddits related to COVID-19 in the dataset during this time are listed in Table 1 with summary statistics on the number of threads and authors. In this analysis we only use the nCoV subreddit due to its manageable size and interpretability. The temporal intervals for the edges are constructed from the author interaction information. We construct edge intervals based on the first and last times an author posted in each thread.
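One way such author-based edge intervals might be assembled is sketched below with pandas. The column names (`thread_id`, `author`, `timestamp`) and file name are assumptions about the data export rather than the documented PAPERCRANE schema, and the reading of the construction (one hyperedge per thread, one interval per author in that thread) is our interpretation of the sentence above.

```python
import pandas as pd

# Assumed schema: one row per post with the thread it belongs to, its author,
# and the posting time (column and file names are placeholders).
posts = pd.read_csv("ncov_posts.csv", parse_dates=["timestamp"])

edges, intervals = {}, {}
for thread_id, thread_posts in posts.groupby("thread_id"):
    edges[thread_id] = set(thread_posts["author"])  # hyperedge = authors of the thread
    spans = thread_posts.groupby("author")["timestamp"].agg(["min", "max"])
    # One active interval per author: from their first to their last post in the thread.
    intervals[thread_id] = list(zip(spans["min"], spans["max"]))
```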
These intervals are visualized in the top subfigure of Fig. 8. We set the window size of 1 hour with a shift of 15 minutes. This window size captures the necessary granularity to see changes in the dynamics of the subreddit. Applying this sliding window results in 6899 windows. The number of nodes and edges of each hypergraph snapshot is shown in Fig. 8. This initial data exploration shows that the size of the subreddit initially increases to a peak popularity at approximately two weeks into the subreddit or day 14. After this, the size steadily decreases. The edge intervals in the top subfigure of Fig. 8 shows that the majority of intervals are very short, while a few exhibit long intervals lasting as long as 38 days. This initial exploration does not capture how the shape of the underlying hypergraph snapshots is evolving. \begin{table} \begin{tabular}{l l r r} Subreddit & Active Dates & Threads & Authors \\ \hline CCP\_virus & 3/27 - 3/31 & 169 & 79 \\ COVID19 & 2/15 - 3/31 & 8668 & 22020 \\ COVID19positive & 3/13 - 3/31 & 1462 & 6682 \\ China\_Flu & 1/20 - 3/31 & 55466 & 62944 \\ Coronavirus & 1/20 - 3/31 & 153025 & 396427 \\ CoronavirusCA & 3/01 - 3/31 & 2930 & 5370 \\ CoronavirusRecession & 3/19 - 3/31 & 1574 & 6548 \\ CoronavirusUK & 2/15 - 3/31 & 8654 & 10230 \\ CoronavirusUS & 2/16 - 3/31 & 18867 & 29809 \\ Covid2019 & 2/16 - 3/31 & 2437 & 1531 \\ cvnews & 1/25 - 3/31 & 4233 & 2181 \\ nCoV & 1/20 - 3/31 & 3949 & 1902 \\ \hline \end{tabular} \end{table} Table 1: Subreddits related to covid from the PAPERCRANE datatset with number of threads and authors of each subreddit There are many questions about the underlying network that can not be directly answered from these simple summary statistics. For example, is each thread dominated by unique authors or do many threads share users? Is the social network dense, centralized, fragmented? Do any of these characteristics change over time? Understanding the topological summary of the hypergraph snapshots is important to understand the type of communication that is occurring. For example, many one-dimensional homology features are representative of disconnected conversations of holes in the communication structure. However, this could be captured just using the Betti sequence at each snapshot. What the zigzag persistence also captures is the severity of the holes based on their longevity. Consider a hole in communication that persists for several days. This could be representative of a lack of information communication throughout the community. These summary statistics additionally do not provide any information on how the threads in the subreddit are related and their longevity. Using zigzag persistence we can capture information about the longevity of communications using the zero-dimensional homology. A long interval in the zero-dimensional zigzag persistence barcode is representative of a conversation persisting over a long period of time. In Fig. 9 are the resulting zigzag persistence barcodes using the union between the associated ASCs of the hypergraph snapshots. First, we see that we can capture how fragmented the social network is with one main component shown in the zero-dimensional barcode that persists for almost the entire duration of the subreddit. Additionally, the short intervals in dimension zero are characteristic of other side conversations, which either split from or merged into the main conversation or were entirely separate conversations. 
An example of one of these conversations is shown in the hypergraph snapshot at day 10 in Fig. 9, where the main component is composed of all of the threads with the exception of one thread between just two authors. Having the main component suggests that many of the threads in the subreddit share at least one common author between threads. We can also demonstrate that the network shows a change in its centralization over time. Specifically, during regions where many \(D_{1}\) persistence intervals are present we know that the network has several loops, which are characteristic of non-centralized social networks. These changes from centralized to non-centralized social hypergraph snapshots are likely due to the number of active authors and a bifurcation of social network dynamics. For example, in the snapshot at day 10 in Fig. 9 there is a main loop in the main component of the hypergraph snapshot, and the main component does not have a clearly centralized structure. However, approximately one week later at day 18, there is a clearly centralized structure to the hypergraph, which has no one-dimensional features. With both a low number of (or no) one-dimensional features and only one component, the zigzag persistence can give insight into the centralization of the hypergraph and underlying social network. Figure 8: Summary statistics for the size of the temporal hypergraph snapshots. The top is the interval associated to each edge (sorted by start time), the middle figure is the number of edges in the hypergraph snapshots, and the bottom figure is the number of vertices in the hypergraph snapshots. ### Cyber Data Analysis For the analysis of cyber data we use the Operationally Transparent Cyber dataset (OpTC) [2] created by the Defense Advanced Research Projects Agency (DARPA). This dataset consists of network and host logging from hundreds of Windows hosts over a one-week period. The dataset consists of two groups of user activity: benign and malicious. The malicious activity occurs over a three-day period in which several attacks are executed. Our goal is to demonstrate how these attacks show up in the zigzag persistence barcodes for a hypergraph constructed from the data log. The data log is composed of 64 columns describing each action in the network. In this section we only use the timestamps, src_ip, image_path, and dest_port, as these are needed to construct the temporal hypergraph representation of the data we study using zigzag persistence. We construct hypergraph snapshots by again using a sliding window procedure, but now the intervals associated to each edge are only time points as the cyber data only has the time stamp at which the action occurred. We used a sliding window with width \(w=30\) minutes and shift \(s=5\) minutes. We chose this window size based on the duration of malicious activity lasting for approximately 2 hours, with 30-minute windows being fine-grained enough to capture the transition from benign to malicious. To demonstrate how zigzag persistence can detect a cyber attack we will look at two instances of malicious activity on two different hosts. Namely, we investigate two cases of a cyber attack: the first on 9/23/19 from red agent LU-AVR71T with source IP 142.20.56.202 on host 201 and the second on 9/24/19 from agent 4BW2MKUF with source IP 142.20.57.246 on host 501. The first attack sequence runs from approximately 11:23 to 13:24 on 9/23/19 and the second from approximately 10:46 to 13:11.
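A sketch of how the per-agent snapshots can be assembled from the four columns listed above is given below; the hyperedge construction (destination ports grouping image paths) is the one detailed in the next paragraph, and the file name and exact column spellings are assumptions.

```python
import pandas as pd

logs = pd.read_csv("optc_flows.csv", parse_dates=["timestamp"])  # file name assumed
agent = logs[logs["src_ip"] == "142.20.56.202"]                   # first red agent

w, s = pd.Timedelta(minutes=30), pd.Timedelta(minutes=5)
t0, tf = agent["timestamp"].min(), agent["timestamp"].max()

snapshots, start = [], t0
while start < tf:
    window = agent[(agent["timestamp"] >= start) & (agent["timestamp"] <= start + w)]
    # Hyperedges keyed by destination port; nodes are the image paths observed
    # on that port within the window.
    snapshots.append(window.groupby("dest_port")["image_path"].apply(set).to_dict())
    start += s
```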
The hypergraphs were constructed using the destination ports as the hyperedges and the image paths as nodes. This formation captures the structure of the cyber data in the sense that the destination ports as hyperedges capture the relation between the actions (image paths) used. Additionally, we only use a subset of the full data for a single source IP. By only looking at this sub-hypergraph we capture information about the specific agent associated to the source IP. The zigzag persistence barcodes associated with the destination port/image path hypergraph snapshots for the first sequence of attacks are shown in Fig. 9(a). Before 9:00 there was no cyber activity and as such there are no barcodes during that period. Figure 9: Zigzag Persistence of temporal hypergraph representation of the CCP_virus subreddit with example hypergraph snapshot and associated ASC. The region highlighted in red from 11:23 to 13:24 is the active time of the cyber attacks. During this region we highlight a specific hypergraph for the window starting at approximately 12:35 which is exemplary of malicious activity. Additionally, at approximately 21:50, we show another exemplary window of standard benign activity. During this activity there are typically only two singletons which persist over time. A similar pair of hypergraphs for malicious and benign activity is shown for the second sequence of malicious activity on 9/24/19. However, what is not captured by the snapshots alone is the quickly changing topology and dynamics during malicious activity and the relatively stationary dynamics and simple topology during benign activity. Zigzag persistence is able to capture the changing dynamics and topology that is characteristic of malicious cyber activity. This is shown in both the barcodes for \(D_{0}\) and \(D_{1}\) for both sequences of malicious activity as shown in Fig. 10. Figure 10: Zigzag persistence barcodes with example hypergraphs at two windows for OpTC data during an attack on the 23rd and 24th. The region highlighted in red is the time the red team agent was actively attacking. Specifically, during malicious activity there tend to be more short-lived persistence pairs in \(D_{0}\) and the appearance of one-dimensional homology in \(D_{1}\). In comparison, during benign activity, there is little to no one-dimensional homology and little change in the number of components captured through \(D_{0}\). ## 4 Conclusion In this work we developed an implementation of zigzag persistence for studying temporal hypergraphs. To demonstrate the functionality of our method we applied it to study both social network and cyber security data represented as temporal hypergraphs. For the social network analysis we were able to show how the resulting zigzag persistence barcodes capture the dynamics of the temporal hypergraph's topology, which conveys information about the changing centrality of the hypergraphs through \(D_{1}\). Furthermore, we showed that the conversation is composed of one main component that persists over the entire time period of the social network we studied. When studying the cyber data we found that we were able to distinguish malicious from benign activity with zigzag persistence. During malicious activity we showed that there tend to be persistence pairs in \(D_{1}\) as well as more persistence pairs in \(D_{0}\) in comparison to benign activity. Future work for this method includes an investigation of vectorization techniques for the zigzag persistence diagrams for automating cyber security analysis.
We also plan to study how we can leverage the temporal hypergraph representation and zigzag persistence for detecting bot activity in social network data.
2305.18670
SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-driven Video Editing
Text-to-Image (T2I) diffusion models have achieved remarkable success in synthesizing high-quality images conditioned on text prompts. Recent methods have tried to replicate the success by either training text-to-video (T2V) models on a very large number of text-video pairs or adapting T2I models on text-video pairs independently. Although the latter is computationally less expensive, it still takes a significant amount of time for per-video adaption. To address this issue, we propose SAVE, a novel spectral-shift-aware adaptation framework, in which we fine-tune the spectral shift of the parameter space instead of the parameters themselves. Specifically, we take the spectral decomposition of the pre-trained T2I weights and only update the singular values while freezing the corresponding singular vectors. In addition, we introduce a spectral shift regularizer aimed at placing tighter constraints on larger singular values compared to smaller ones. This form of regularization enables the model to grasp finer details within the video that align with the provided textual descriptions. We also offer theoretical justification for our proposed regularization technique. Since we are only dealing with spectral shifts, the proposed method reduces the adaptation time significantly (approx. 10 times) and has fewer resource constraints for training. Such attributes posit SAVE to be more suitable for real-world applications, e.g. editing undesirable content during video streaming. We validate the effectiveness of SAVE with an extensive experimental evaluation under different settings, e.g. style transfer, object replacement, privacy preservation, etc.
Nazmul Karim, Umar Khalid, Mohsen Joneidi, Chen Chen, Nazanin Rahnavard
2023-05-30T01:00:31Z
http://arxiv.org/abs/2305.18670v2
# SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing ###### Abstract Text-to-Image (T2I) diffusion models have achieved remarkable success in synthesizing high-quality images conditioned on text prompts. Recent methods have tried to replicate the success by either training text-to-video (T2V) models on a very large number of text-video pairs or adapting T2I models on text-video pairs independently. Although the latter is computationally less expensive, it still takes a significant amount of time for per-video adaption. To address this issue, we propose SAVE, a novel spectral-shift-aware adaptation framework, in which we fine-tune the spectral shift of the parameter space instead of the parameters themselves. Specifically, we take the spectral decomposition of the pre-trained T2I weights and only control the change in the corresponding singular values, i.e. spectral shift, while freezing the corresponding singular vectors. To avoid drastic drift from the original T2I weights, we introduce a spectral shift regularizer that confines the spectral shift to be more restricted for large singular values and more relaxed for small singular values. Since we are only dealing with spectral shifts, the proposed method reduces the adaptation time significantly (\(\sim 10\times\)) and has fewer resource constraints for training. Such attributes posit _SAVE_ to be more suitable for real-world applications, e.g. editing undesirable contents during video streaming. We validate the effectiveness of _SAVE_ with an extensive experimental evaluation under different settings, e.g. style transfer, object replacement, privacy preservation, etc. Code is available at [https://github.com/nazmul-karim170/SAVE-Tex2Video](https://github.com/nazmul-karim170/SAVE-Tex2Video) Figure 1: Our proposed method _SAVE_ enables text-based video editing (e.g. shape, style, etc.) by instilling both spatial and temporal awareness into image diffusion models. Introduction Diffusion models [1] have shown tremendous success in the text-guided synthesis of diverse and high-quality media contents such as images [2; 3] and videos [4; 5; 6; 7]. Due to the strong data modeling capabilities of these models, diversified generation [8] of a wide range of objects, shapes, and styles has become possible with remarkable realism. In recent times, several diffusion-based editing methods [9; 10; 11; 12; 13; 14] also made their way into generative AI research. For example, customizable or personalized image diffusion models [15; 16] leverage parameters fine-tuning for adapting the model to user-specific editing requirements, e.g. shape, style, etc. Although image editing has earned its popularity, we focus on designing an _efficient text-guided video-editing framework_ that also supports _zero-shot video generation_. In general, there are two primary ways for video generation: 1) training a T2V diffusion model on a large-scale multimodal (text-video pairs) dataset [17; 18; 4; 5] and 2) modifying the existing T2I diffusion models to fit the video generation process [19; 20]. The latter garnered more interest due to the inherent challenges associated with acquiring a large-scale text-video dataset as compared to a text-image dataset. Furthermore, it is computationally expensive to train a video model as it requires a larger parameter space (having to accommodate temporal dimension) compared to a T2I model. 
Consequently, adapting image diffusion models presents a more feasible option due to their widespread availability [21; 22; 8]. Although generative priors from a T2I model supplement the spatial component of the video generation process, they lack temporal awareness making it harder to model the motion and 3D shape understanding. Tune-A-Video [19] and Text2LIVE [20] tried to address this by adding temporal layers to the T2I model for instilling temporal awareness. Despite the promising results, the issue of computational overhead is prominent as we still have to fine-tune a large number of parameters. Furthermore, _tuning a large number of parameters on a single video could also lead to severe overfitting and compromised generalization ability of the original diffusion model_. To address these issues, we propose to tune the spectral shift of the parameter space such that the underlying motion concept as well as content information in the input video is learned. For spectral shift tuning, we first take the singular value decomposition (SVD) of the pre-trained weights from each layer and freeze the singular vectors while updating only the singular values iteratively. We also notice that unconstrained optimization of spectral shifts can lead to drastic updates of larger singular values which can be catastrophic considering we want our updated model to be as close to as the original model. As a remedy, we propose a novel _spectral shift regularizer_ that allows minimal changes to larger singular values. In addition, we conduct a comprehensive study of different spatiotemporal attentions for video editing and propose single _frame attention_ for better computational efficiency. To this end, we propose a novel **S**pectral shift **A**ware **V**ideo **E**diting (**SAVE**) technique that only fine-tunes the spectral shift of the parameter space for efficient adaptation. By tuning only the singular values, we reduce the trainable parameters by almost **100\(\times\)** and speed up the adaptation proportionally. Our contributions can be summarized as follows: * We propose a novel text-guided video editing framework that adapts an image diffusion model by only fine-tuning the spectral shift of its parameter space. This allows us to have a significantly reduced number of tunable parameters with better computational efficiency. To the best of our knowledge, we are the first to address the video generation problem from the spectral shift perspective. * A spectral shift regularizer is introduced to restrict the change in larger singular values more. Such regularization allows the model to grasp the motion information without compromising the novel scene generation capability of the model. * Based on our comprehensive study, we propose to incorporate a frame attention mechanism that enforces spatial and temporal consistency across the frames and also offers better efficiency. We extensively evaluate the effectiveness of our method in different benchmarks with both qualitative and quantitative results; a snapshot of which is shown in Fig. 1. ## 2 Related Work **Text-to-Image Diffusion Models.** The field of Text-to-Image (T2I) generation has been extensively investigated, with many models based on transformers being proposed in recent years [23; 24; 25; 26; 27]. In an effort to enhance the quality of generated images, several T2I generative models have incorporated diffusion models [1]. 
For example, GLIDE incorporated classifier-free guidance within the diffusion framework to enhance image quality [28], while DALLE-2 improved text-image alignments through the utilization of CLIP feature space [3]. Imagen employed cascaded diffusion models for generating high-definition videos [4], and subsequent works such as VQ-diffusion and Latent Diffusion Models (LDMs) operated in the latent space to enhance training efficacy [29; 8]. **Text-to-Video Generative Models.** While remarkable progress has been made in text-to-image (T2I) generation, the field of generating videos from text prompts still lags behind. This is primarily due to the limited availability of large-scale text-video datasets and the inherent challenges in modeling temporal consistency and coherence. Early works in this domain, such as VideoGAN [30], ImagineGAN [31], and CrossNet [32], primarily focused on generating simple videos, such as moving digits or specific human activities. More recently, GODIVA [33] introduced a model that utilizes a 2D Vector Quantized Variational Autoencoder (VQ-VAE) with sparse attention for text-to-video (T2V) generation, enabling more realistic scene synthesis. NUWA [34] proposed a multitask learning framework to extend the work of GODIVA. To further improve T2V generation performance, CogVideo [35] was developed by incorporating temporal attention modules on top of a pretrained T2I model called CogView2 [26]. Video Diffusion Models (VDM) [36] builds upon the advancements of T2I models by employing a space-time factorized U-Net architecture and training with both image and video data, thereby achieving improved performance in video generation tasks. [4] further enhanced VDM by employing cascaded diffusion models to generate high-definition videos. Make-A-Video [5] and MagicVideo [6] pursued similar goals of transferring progress from T2I generation to T2V generation. Recently, a few LDM stable diffusion-based methods [37; 38; 39] are proposed for efficient video generation. In our approach, we extend the capabilities of LDMs by expanding the 2D model into the spatiotemporal domain within the latent space that efficiently fine-tunes pre-trained T2I diffusion models on a single text-video pair. **Text-Driven Video Editing.** With the success of diffusion-based image editing works [40; 41; 42; 9; 10; 43; 44; 45; 46; 14; 12; 47; 48; 15], a few diffusion-based video-editing frameworks have been proposed. Dreamix [49], Gen-1 [50], and Tune-A-Video [19] either employ VDM or leverage the pre-trained T2I models for video editing. Although these approaches have shown impressive results, it is important to note that VDMs are computationally challenging and require large-scale captioned images and videos for training. One-shot T2I-based video editing methods such as Tune-A-Video [19] employ the model inflation technique and fine-tunes the temporal attention weights. However, their editing capabilities are limited by the pre-trained T2I models while our work opens up new avenues for the efficient and effective fine-tuning text-to-image diffusion models for personalization and customization for video editing tasks. ## 3 Method Let \(\mathcal{X}=\{x_{i}|i\in[1,F]\}\) be a video comprising \(F\) frames and \(\mathcal{P}\) be the input prompt that describes the content in \(\mathcal{X}\). Our objective is to generate a novel video \(\mathcal{X}^{*}\) with editing commands coming from the prompt \(\mathcal{P}^{*}\). 
Although a pre-trained Text-to-Video (T2V) diffusion model can be employed to edit \(X\), training such a model can be computationally expensive [17; 18; 4; 5]. In this paper, we propose a novel video editing technique, _SAVE_, that achieves the same objective by leveraging a publicly available pre-trained Text-to-Image (T2I) model and a single text-video pair. In the following, we provide a brief background on diffusion models in Section 3.1, followed by the details of our proposed method in Sections 3.2 and 3.2.1. An overview of our framework is illustrated in Fig. 2. ### Preliminaries Diffusion Models.Stable diffusion (SD) [8] model operates on the latent space of an autoencoder \(\mathcal{D}(\mathcal{E}(\cdot))\), namely VQ-GAN [52] or VQ-VAE [33]. Here, \(\mathcal{E}\) is the encoder that compresses an RGB image \(x\) to a low-resolution latent \(z=\mathcal{E}(x)\), which can be recovered using the decoder \(x\sim\mathcal{D}(z)\). The diffusion forward process iteratively adds Gaussian noise to the signal \(z\): \[q(z_{t}|z_{t-1})=\mathcal{N}(z_{t};\sqrt{1-\beta_{t}}z_{t-1},\beta_{t}I),\ t=1,2 \ldots,T, \tag{1}\] where \(q(z_{t}|z_{t-1})\) is the conditional density of \(z_{t}\) given \(z_{t-1}\), \(\{\beta_{t}\}_{t=1}^{T}\) are hyperparameters. \(T\) is chosen large enough such that \(z_{T}\sim\mathcal{N}(0,I)\). After getting the noisy latents \(\{z_{t};t=1,2,\ldots,T\}\), a U-Net [53] composed of convolutional as well as self and cross attentional blocks with parameters \(\theta\) is trained for the backward process, a.k.a denoising, using the objective function: \[\min_{\theta}E_{z_{0},\varepsilon\sim N(0,I),t\sim\text{Uniform }(1,T)}\left\| \varepsilon-\varepsilon_{\theta}\left(z_{t},t,p\right)\right\|_{2}^{2}, \tag{2}\] where \(p\) is the embedding of prompt \(p=\mathcal{C}(\mathcal{P};\phi)\) and \(\varepsilon_{\theta}\) is the model predicted noise at time \(t\). DDIM Sampling and Inversion.During inference, we apply deterministic DDIM sampling [54] to convert a random noise \(z_{T}\) to a clean latent \(z_{0}\) with the help of trained diffusion model (\(\theta\)): \[z_{t-1}=\sqrt{\alpha_{t-1}}\left(\frac{z_{t}-\sqrt{1-\alpha_{t}}\epsilon_{ \theta}(z_{t})}{\sqrt{\alpha_{t}}}\right)+\sqrt{1-\alpha_{t-1}}\epsilon_{ \theta}(z_{t}),\quad t=T,\ldots,1, \tag{3}\] where \(\alpha_{t}=\prod_{i=1}^{t}(1-\beta_{i})\) is a parameter for noise scheduling [54; 1]. DDIM Inversion is the reverse process of DDIM sampling where we can map a clean latent \(z_{0}\) to a noisy latent \(\hat{z}_{T}\): \[\hat{z}_{t}=\sqrt{\alpha_{t}}\left(\frac{\hat{z}_{t-1}-\sqrt{1-\alpha_{t-1}} \epsilon_{\theta}(\hat{z}_{t-1})}{\sqrt{\alpha_{t-1}}}\right)+\sqrt{1-\alpha _{t-1}}\epsilon_{\theta}(\hat{z}_{t-1}),\quad t=T,\ldots,1, \tag{4}\] For applying DDIM inversion to a video, we invert each frame of the input video to the noise space. To reconstruct the original latent space using \(\mathcal{P}\), we set the classifier-free guidance scale \(s_{cfg}\) to 1. To perform editing operations, one needs to find the variables in the latent space that corresponds to the frame contents. After that, we can edit contents by finding editing directions in the latent space. The editing direction is usually provided by \(\mathcal{P}^{*}\) while setting \(s_{cfg}>>1\). While using a large \(s_{cfg}\) gives more freedom in editing, this freedom can also lead to frame inconsistency. Furthermore, the issue of error accumulation can also cause such inconsistency given that we consider 50 DDIM inversion steps. 
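For reference, the deterministic sampling step of Eq. (3) can be sketched as follows; `alphas` holds the cumulative \(\alpha_t\) schedule and `eps_model` wraps the noise predictor \(\varepsilon_\theta\). The names are illustrative, the sketch omits classifier-free guidance, and it is not the released implementation.

```python
import torch

@torch.no_grad()
def ddim_sample(z_T, alphas, eps_model, prompt_emb):
    """Deterministic DDIM sampling following Eq. (3)."""
    z = z_T
    for t in range(len(alphas) - 1, 0, -1):
        eps = eps_model(z, t, prompt_emb)
        a_t, a_prev = alphas[t], alphas[t - 1]
        z0_pred = (z - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean latent
        z = a_prev.sqrt() * z0_pred + (1.0 - a_prev).sqrt() * eps
    return z
```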
These issues are less prominent once we fine-tune the T2V model with the text-video pair \((\mathcal{X},\mathcal{P})\), which aligns the text embedding with the video content. To obtain better alignment, we also fine-tune the text encoder \(C\) for improved text-video alignment. ### Spectral-Shift-Aware Video Editing (SAVE) For video diffusion, we need to generate \(F\) images instead of a single image. However, the pre-trained U-Net model consists of 2D convolution blocks that perform sequential downsampling followed by upsampling passes with skip connections. To process \(F\) frames, we follow VDM [36] and inflate the 2D convolution to pseudo-3D convolution layers by replacing 2D kernels with 3D kernels. For the attention block, we replace self-attention with spatiotemporal attention layers that take into account information from multiple frames. However, architectural changes are not enough, as the model still has to learn the motion information. For that, we need to fine-tune the newly constructed T2V model. In our work, we propose a novel way of fine-tuning by only controlling the spectral shift of the model. Our method is motivated by FSGAN [55], where it has been shown that fine-tuning only the singular values of the pre-trained weights is enough to adapt to new concepts. Figure 2: **An illustration of our text-to-video editing framework**. We initialize our T2V model with a pre-trained inflated T2I model, where we repeat the weights in the temporal dimension. We feed the denoising T2V model with an input video (\(X\)) and a corresponding prompt (\(\mathcal{P}\)) before converting the clean latents into noisy latents using the forward diffusion process. Along with cross-attention, we have a frame attention layer for learning motion information as well as content consistency over frames. We only update the weights corresponding to the query (\(W^{Q}\)). However, instead of updating all parameters, we merely fine-tune the singular values (\(\Sigma=diag(\sigma)\)). This reduces the number of trainable parameters significantly. We also update the CLIP [51] text encoder (C) for better conditional guidance. At inference, we pass the editing prompt (\(\mathcal{P}^{*}\)) that warrants the required changes in the output video. Depending on the classifier-free guidance scale \(s_{cfg}\), the editing results may vary. With \(s_{cfg}=1\), we should be able to recover the original video. Following TAV [7], we only fine-tune the query matrices (\(\mathbf{W}^{Q}\)) of attention layers and freeze all convolution layers. Consider the pre-trained query weight matrices (\(\mathbf{W}^{Q}=[\mathbf{W}^{Q}_{1},\mathbf{W}^{Q}_{2},...,\mathbf{W}^{Q}_{L}]\)) from a T2I model with \(L\) attention layers. As shown in Fig. 2, we take the spectral decomposition \(\mathbf{W}^{Q}_{i}=\mathbf{U}_{i}\mathbf{\Sigma}_{i}\mathbf{V}_{i}^{T}\in\mathbb{R}^{M\times N}\), where \(\Sigma_{i}=\text{diag}(\sigma_{i})\) and \(\sigma_{i}=[\sigma_{i}^{1},\sigma_{i}^{2},...,\sigma_{i}^{M}]\) are the singular values arranged in descending order. Here \(M\) is the query embedding dimension of the \(i^{th}\) layer. The spectral shift of the parameter space is defined as the difference between the singular values of the original \(\mathbf{W}^{Q}_{i}\) and those of the updated \(\widehat{\mathbf{W}}^{Q}_{i}\), and can be expressed as \(\delta_{i}=[\delta_{i}^{1},\delta_{i}^{2},...,\delta_{i}^{M}]\). Here, \(\delta_{i}^{j}\) is the difference for the individual singular value \(\sigma_{i}^{j}\).
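Before describing how the shift is optimized, the parameterization can be made concrete with a short PyTorch-style sketch: the singular vectors are stored as frozen buffers and only the shift vector is registered as a trainable parameter (the non-negativity clamp anticipates the update rule given next). Class and attribute names are illustrative and are not taken from the released implementation.

```python
import torch
import torch.nn as nn

class SpectralShiftLinear(nn.Module):
    """Query projection whose singular vectors are frozen; only the spectral
    shift delta on the singular values is trainable (illustrative sketch)."""

    def __init__(self, pretrained_weight: torch.Tensor):
        super().__init__()
        U, sigma, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)          # frozen left singular vectors
        self.register_buffer("sigma", sigma)  # frozen pre-trained singular values
        self.register_buffer("Vh", Vh)        # frozen right singular vectors
        self.delta = nn.Parameter(torch.zeros_like(sigma))  # trainable spectral shift

    def weight(self) -> torch.Tensor:
        # Updated weights U diag(relu(sigma + delta)) V^T; the clamp keeps the
        # shifted singular values non-negative.
        return self.U @ torch.diag(torch.relu(self.sigma + self.delta)) @ self.Vh

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return z @ self.weight().t()
```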
To optimize the spectral shift our diffusion model, we express the updated singular values as \(\mathbf{\Sigma}^{\delta_{i}}_{i}=\text{diag}(\text{ReLU}(\sigma_{i}+\delta_{i}))\) and updated weights \(\widehat{\mathbf{W}_{i}}=\mathbf{U}_{i}\mathbf{\Sigma}^{\delta_{i}}_{i}{\mathbf{V}_{i}}^{T}\). The loss function we employ for optimizing the total spectral shift \(\delta=[\delta_{1},\delta_{2},...,\delta_{M}]\) is \[\mathcal{L}(\delta)=E_{z_{0}}\left\|\varepsilon-\varepsilon_{\theta_{\delta}} \left(z_{t},t,p\right)\right\|_{2}^{2}, \tag{5}\] where \(\theta\) is our diffusion model weights. Although this objective function can reasonably adapt the model, it does not exploit the prior information on the spectral space as there are no additional constraints on the spectral shift. Such unconstrained optimization of \(\delta\) can lead to drastic changes in the singular values. However, a significant change in the few largest singular values can lead to poor editing performance. We verify this by freezing the first \(r\) (=1%) of singular values and updating others. Fig. (b)b in Section 4.3 shows the editing performance with and without restricting the singular values. Freezing more important singular values allows us to preserve the generalization power of the original model while learning the underlying motion information from the input video. However, instead of hard thresholding with a fixed value of \(r\), we make this process smoother by designing a regularizer that will induce the necessary restriction on the change in \(\delta\). #### 3.2.1 Regularized Spectral Shift To develop a proper regularizer, we first analyze the data generation process from the perspective of spectral decomposition. To this aim, a new class of linear and simple generators is defined which is referred to as linear spectral generators (LSGs). Next, we will see how this constructive perspective inspires us to develop a regularized spectral shift. **Linear Spectral Generators.** Despite the simplicity of linear models, they are useful tools for the definition of more complicated models. For example, convolution is a linear operator and it is well known that a multi-layer of convolutions with non-linear activation functions is capable of performing complicated tasks. Here, we define a basic generator network that can be extended to a deep network. Let \(\mathbf{D}\) be a matrix whose columns are denoted by \(\mathbf{d}_{i}\sim\mathcal{P}_{D}\), and we are to generate a new sample from the distribution of \(\mathcal{P}_{D}\). Let \(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T}\) denote SVD of \(\mathbf{D}\). Let us define \(\mathbf{C}=\mathbf{U}^{T}\mathbf{D}\) as spectral coefficients of the data set. Each row of \(\mathbf{C}\) shows the contribution of the corresponding spectral component in all samples. The mean and variance of each row point to a normal distribution \(\mathcal{N}(m_{i},v_{i})\). Here \(m_{i}\) and \(v_{i}\) correspond to mean and variance of the \(i^{\text{th}}\) row of \(\mathbf{C}\), respectively. Drawing a random noise for each spectral component gives us \(\mathbf{c}_{g}\) which can be transferred back to \(\mathbf{d}_{g}=\mathbf{U}\mathbf{c}_{g}\) as the generated sample. **Theorem 1**: _Let \(\mathbf{D}\in\mathbb{R}^{N\times M}\) and we generate \(M\) samples using a linear spectral generator as columns of matrix \(\mathbf{D}_{g}\). 
Then, the absolute difference of the \(n^{\text{th}}\) singular value of \(\mathbf{D}\) and the expected value of the \(n^{\text{th}}\) singular value of \(\mathbf{D}_{g}\) is less than \(\delta_{n}^{max}\) which is defined as_ \[\delta_{n}^{max}=\frac{2|m_{n}\sum_{i\neq n}m_{i}|}{\sigma_{n}(\mathbf{D})+\sigma_ {n}(\mathbf{D}_{g})}. \tag{6}\] _Here, \(\sigma_{n}(\mathbf{D})\) and \(\sigma_{n}(\mathbf{D}_{g})\) are the \(n^{\text{th}}\) singular value of \(\mathbf{D}\) and \(\mathbf{D}_{g}\), respectively._ _Proof: Provided in the Supplementary._ Now, consider we have a one-layer from a pre-trained network (\(\mathbf{W}^{Q}\)) characterized by \(\mathbf{U}\), \(\mathbf{\Sigma}\) and \(\mathbf{V}\) as its spectral components. Then, fine-tuning \(\mathbf{W}^{Q}\) in the space of spectrally shifted replicas is equivalent to generating \(\mathbf{W}^{Q}_{g}(=\widehat{\mathbf{W}}^{Q})\) which pursuits singular values of the pre-trained \(\mathbf{W}^{Q}\) with a uniform maximum distance from the original singular values. However, Theorem 1 suggests we have a regularized neighborhood for each singular value accordingly. This fact encourages us to have a revised loss function for fine-tuning as follows, \[\mathcal{L}(\delta)=E_{z_{0}}\left\|\varepsilon-\varepsilon_{\theta_{\delta}} \left(z_{t},t,p\right)\right\|_{2}^{2}+\lambda\mathcal{L}_{r}(\delta), \tag{7}\] where \(\mathcal{L}_{r}(\delta)=\sum_{i=1}^{L}\delta_{i}^{T}\Sigma_{i}\delta_{i}\) is the regularizer that confines the perturbation (\(\delta\)) to be more restricted for large singular values and more relaxed for small singular values and \(\lambda\) is the regularizer coefficient which is set to be 1e-3. The overall objective in Eq. 7 allows the model to adapt to new data without compromising the generalization capability of the model. Note that, our aim is to learn the underlying motion of the original video through fine-tuning while editing shape, color, and style using text prompts. As long as we preserve the generalization capability of the original model, it should be able to generate new objects even after fine-tuning. However, drastic changes through overfitting on a single video may compromise the generalization capability. In our work, we take a more conservative approach to changing the original model. Note that the analogy between \(\mathbf{W}_{g}^{Q}\) and \(\mathbf{D}_{g}\) was introduced for justifying the reason behind our regularizer. There is no direct relationship between them, e.g. we are not generating \(\mathbf{D}_{g}\) using \(\mathbf{W}_{g}^{Q}\). #### 3.2.2 Temporal Modeling To enhance temporal coherence for video generation purposes, self-attention in the T2I [8] model needs to be replaced with cross-frame attention [19; 56; 36]. Various options exist to achieve such cross-frame attention such as full spatio-temporal attention, causal attention [56; 36; 39], and sparse-causal attention [19] as shown in Fig. 3. It can be observed that the sparse-causal attention presents a relatively efficient alternative spatio-temporal and causal attention with a computational complexity of \(\mathcal{O}((mN)^{2})\), where \(m\) is generally set to 2. To this end, we argue and establish that simple frame attention, as illustrated in Fig. 3, is sufficient for DDIM inversion editing methods as the reversed latent features can capture the temporal information. 
Therefore, the attention that achieves the desired editing performance with the proposed framework is implemented as \(\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}(\frac{QK^{T}}{\sqrt{d}})\cdot V\), where \[Q=W^{Q}z_{i},\quad K=W^{K}z_{0},\quad V=W^{V}z_{0}. \tag{8}\] Here, \(W^{Q}\), \(W^{K}\), and \(W^{V}\) denote trainable matrices that project the inputs to the query, key, and value components, respectively, \(z_{i}\) represents the latent features of frame \(x_{i}\), and \(d\) represents the output dimension of the key and query features. We observed that although the sparse-causal attention mechanism outperforms frame attention while generating videos from random noise, its performance is compromised in video editing tasks, especially in rapid-motion scenarios. Furthermore, frame attention exhibits advantageous qualities such as memory conservation and expedited processing speed, as indicated by the estimated FLOPs in Table 1. In addition to frame attention, our framework also relies on (1) cross-attention that considers the correspondence between pixels and conditional inputs (e.g. \(\mathcal{P}\)) and (2) additional temporal attention to predict noise based on the input video as in [19]. We employ our efficient fine-tuning technique to refine the query projection matrices (\(W^{Q}\)) of both frame and cross-attention, along with the additional temporal attention. Subsequently, the proposed T2V model becomes capable of generating image sets that exhibit semantic consistency while maintaining high-quality frames. ### Zero-Shot Text-to-Video Generation For zero-shot generation, we generate videos without any fine-tuning. However, additional motion information along with the text prompt is required for a successful generation. Two ways motion information can be extracted are i) a reference video with the designated action, or ii) key points, optical flow, etc. Due to space constraints, we show the generation performance in the _Supplementary_. \begin{table} \begin{tabular}{c|c} \hline \hline Attention Mechanism & Attention FLOPs/Unet Block \\ \hline Spatio-temporal & \(Attention_{c}\times F\times H\) \\ Sparse-causal+Temporal & \(Attention_{c}\times[F+2H]\) \\ Frame+Temporal & \(Attention_{c}\times[F+H]\) \\ \hline \hline \end{tabular} \end{table} Table 1: **FLOPs comparison** of various attention variants for Unet blocks. Here, \(Attention_{c}=4\times B^{\prime}\times F\times H\times W\), and \(B^{\prime}=\) Batch Size \(\times\#\) of attention heads. In our settings, we use frame attention along with the temporal attention. Figure 3: Showcasing various **cross-frame attention mechanisms**. In the visual representation, the queries and keys are denoted by the colors orange and blue, respectively. The variables H, W, and F correspond to the height, width, and number of frames in the input video, respectively. ## 4 Evaluation ### Implementation Details Our approach is built upon Latent Diffusion Models [8], also known as Stable Diffusion, and utilizes the publicly available pre-trained weights. We extract a set of uniformly spaced frames from the input video, each with a resolution of \(512\times 512\). Subsequently, we fine-tune the model with our method for 200 iterations, employing a learning rate of \(1e^{-3}\) and a batch size of 1. During inference, we utilize the DDIM sampler [54] along with classifier-free guidance [18] in our experiments. For each individual video, the fine-tuning process requires approximately 3 minutes, while the sampling process takes approximately 20 seconds on an NVIDIA 3090 GPU.
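To make the fine-tuning objective used during this adaptation concrete, a sketch of the per-step loss of Eq. (7) is given below, reusing the `SpectralShiftLinear` modules from the earlier sketch; \(\lambda\) is set to 1e-3 as stated in Section 3.2.1, and the function and argument names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def spectral_shift_loss(noise_pred, noise, shift_layers, lam=1e-3):
    """Denoising MSE plus the spectral-shift regularizer of Eq. (7), where
    sum_i delta_i^T Sigma_i delta_i = sum_i sum_j sigma_i^j (delta_i^j)^2."""
    mse = F.mse_loss(noise_pred, noise)
    reg = sum((layer.sigma * layer.delta.pow(2)).sum() for layer in shift_layers)
    return mse + lam * reg
```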
### Experimental Results Style transfer.The style of a video is manifested through its comprehensive spatial and temporal attributes. In the second row of Fig. 4, we introduce the "comic style" to the input video of the rabbit eating the watermelon. It can be observed that our method exhibits the ability to seamlessly transform all frames into the desired style while preserving the semantic content of the original video intact. Object Replacement.To showcase the efficacy of our method in subject editing, we perform a transformation in the third row of Fig. 4 by replacing the watermelon with an orange. Similarly, Figure 4: **Sample editing results of our proposed method. Zoom in for better visibility.** Figure 5: Transforming attributes for **privacy-preserving applications. Zoom in for better visibility.** we replace the Jeep with a Porsche car. Notably, our editing outcomes not only align harmoniously with the accompanying text descriptions but also preserve fidelity to the original videos. Moreover, our method possesses the capability to edit multiple properties simultaneously. For instance, we demonstrate the simultaneous replacement of subjects (substituting the rabbit with the dog, and the watermelon with an orange). **Background change.** Rows 3, 4, 6, and 7 of Fig. 4 exhibit the outcomes achieved by our approach in background editing. Remarkably, the backgrounds are effectively transformed into a beach, snow, and desert. Notably, we observed that while editing the background, the model adapts accordingly to enhance the overall realism of the effect. As depicted in row 2 of Fig. 4, when prompted with "A jeep is moving on the snow," the model aligns the background with the given prompt. **Privacy Preservation.** A pivotal application of our proposed methodology is to attain privacy preservation in videos, particularly in the context of surveillance videos. By leveraging the generative capabilities of our proposed efficient diffusion generation approach, specific personal attributes can be modified or obfuscated, thereby safeguarding the privacy of individuals featured in the videos as depicted in Fig. 5. This alteration process involves transforming sensitive personal attributes such as facial features, clothing, or other identifiable characteristics while maintaining the overall appearance, action information, and realism of the video. ### Comparison with Baselines We compare our method against three baselines: 1) _Tune-A-Video (TAV)_[19] 2) _CogVideo_[35], 3) _Text2LIVE_[20]. **Quantitative Comparison.** To evaluate the effectiveness of our methodology, we select a total of 42 representative videos sourced from the DAVIS dataset [57]. To generate the video content, we employ an off-the-shelf captioning model [58] as in [19]. Additionally, we curate a set of 50 edited prompts, specifically tailored to our applications as outlined in Section 4.2, through manual design. This meticulous approach ensures comprehensive evaluation across various scenarios and provides valuable insights into the performance and applicability of our proposed approach. Our quantitative evaluation encompasses two main aspects: 1) _CLIP score_, and 2) _Human Feedback_. To evaluate _frame consistency_, we employ CLIP [51] image embeddings to compute the average cosine similarity between all pairs of video frames in the generated output videos. 
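As a concrete reading of the frame-consistency metric just described, a minimal sketch is given below. It assumes the CLIP image embeddings of the generated frames have already been extracted with any CLIP image encoder; the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def frame_consistency(frame_embeddings):
    """Average cosine similarity over all distinct pairs of CLIP image
    embeddings of the generated frames; input shape (frames, dim)."""
    e = F.normalize(frame_embeddings, dim=-1)
    sim = e @ e.t()                              # (frames, frames) cosine similarities
    f = sim.shape[0]
    off_diag = sim.sum() - sim.diagonal().sum()  # drop self-similarities
    return off_diag / (f * (f - 1))              # mean over all ordered pairs

# toy usage; real embeddings would come from a CLIP image encoder
score = frame_consistency(torch.randn(8, 512))
```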
To assess _textual alignment_, we compute the average CLIP score between \begin{table} \begin{tabular}{l|c|c|c} \hline \hline & \multicolumn{2}{c|}{CLIP Score} & \multicolumn{1}{c}{} \\ \cline{2-4} Method & Frame & Textual & \multirow{2}{*}{Avg Edit Time} \\ & Consistency & Alignment & \\ \hline CogVideo [35] & 91.25 & 24.40 & N/A \\ Text2LIVE [20] & 92.66 & 26.75 & 35 mins. \\ Tune-a-Video [19] & 93.89 & 28.86 & 28 mins. \\ **Ours (SAVE)** & **94.81** & **29.30** & **3 mins.** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative evaluation** against baselines. Figure 6: **Qualitative comparison** between the proposed work and established baselines. the frames of the output videos and their corresponding edited prompts (shown in Tab. 2). The detailed study of _Human Feedback_ is in the _Supplementary_. Qualitative Comparison.We present a visual comparison of our proposed approach with baseline methods in Fig. 6. We observe that Text2LIVE allows for localized area editing, minimizing the overall influence on the rest of the video, but it struggled to edit the black swan into white color. Tune-A-Video lacks the ability to selectively edit specific objects without altering the entire video content as the results against the prompt, "A whiteswan is swimming on the water" indicate. In contrast, our proposed framework allows for localized area editing, minimizing the overall influence on the rest of the video. Further, our evaluation results reported in Fig. 6(c) indicate that Text2LIVE struggled to edit the jeep into Porshe. Based on the reported qualitative results, it can be observed that our proposed method generates temporally-coherent videos that preserve the structural information from the input video, effectively aligning with the edited words and details. Ablations.In order to investigate the impact of temporal modeling design, we conducted additional ablation experiments. Firstly, we examined the removal of dense spatial-temporal attention, which enables bidirectional temporal modeling, and replaced it with Sparse-Causal Attention. The results in the second row of Fig. 6(a) demonstrate that the horse running in the initial frames of the video becomes distorted. This distortion primarily arises from the overemphasis on previous frames by the Sparse-Causal Attention, leading to error propagation and significant artifacts in subsequent frames, especially. Across all of our experiments, we observed comparable performance between frame attention with temporal attention in relation to the spatio-temporal attention. ## 5 Conclusion We propose a novel adaptation approach for text-to-video editing. Instead of directly adjusting the pre-trained weights, we focus on fine-tuning the spectral shift of the weight space. For that, we decompose the weights spectrally and manipulate only the change in singular values, known as the spectral shift while keeping the corresponding singular vectors fixed. To prevent significant deviation from the original T2I weights, we employ a spectral shift regularizer that limits the spectral shift differently based on the magnitude of the singular values. We support the effectiveness of SAVE through comprehensive experimental evaluations across various scenarios. Limitations.The utilization of pretrained weights from the image diffusion model in our method entails potential limitations inherited from the off-the-shelf image generation model. 
Notably, our approach lacks temporal and motion priors due to the absence of video data during the training of the image diffusion model. Consequently, our method is not directly suitable for editing actions in videos, as evidenced by the challenges encountered in effectively modifying verbs within the source prompts which are discussed in the _Supplementary_. These limitations highlight the need for further exploration and development for more comprehensive video editing capabilities. Broader Impact.Although we listed privacy preservation as one of the potential applications of our proposed work, our framework can be used to create fake surveillance videos through the process of synthesizing realistic-looking scenes and events that appear to be captured by surveillance cameras to avoid any prosecution. A "fake content" detector/classifier can be utilized as a preventive measure. Figure 7: Showcasing the **a)** results of **distinct attention mechanisms** against the prompt, "A horse is running on the beach, anime style". **b)** fine-tuning performance **with and without the restriction on the singular values**\(\Sigma\). **c)** superior performance of our method as compared to Text2LIVE which struggles to **edit shapes**.
2304.10105
Automatic Procurement Fraud Detection with Machine Learning
Although procurement fraud is a critical problem in almost every free market, audit departments still rely heavily on reports from informed sources to detect it. With our generous collaborator, SF Express, sharing access to the database of procurements that took place in their company from 2015 to 2017, our team studies how machine learning techniques could assist the auditing of one of the most widespread crimes in the current Chinese market, namely procurement fraud. By representing each procurement event as 9 specific features, we construct neural network models to identify suspicious procurements and classify their fraud types. Through testing our models over 50000 samples collected from the procurement database, we show that such models -- despite having room for improvement -- are useful in detecting procurement fraud.
Jin Bai, Tong Qiu
2023-04-20T06:22:43Z
http://arxiv.org/abs/2304.10105v1
# Automatic Procurement Fraud Detection with Machine Learning ###### Abstract Although procurement fraud is always a critical problem in almost every free market, audit departments still have a strong reliance on reporting from informed sources when detecting them. With our generous cooperator, SF Express, sharing the access to the database related with procurement took place from 2015 to 2017 in their company, our team studies how machine learning techniques could help with the audition of one of the most profound crime among current chinese market, namely procurement frauds. By representing each procurement event as 9 specific features, we construct neural network models to identify suspicious procurements and classify their fraud types. Through testing our models over 50000 samples collected from the procurement database, we have proven that such models - despite having space for improvements - are useful in detecting procurement frauds. auditing, procurement fraud, machine learning, artificial neural network ## 1 Introduction Procurement fraud, sometimes called contract fraud, is believed by professionals to be one of the most common and costly white-collar crime. [1] It is defined to be an intentional act by one or more individuals among management, those charged with governance, employees, or third parties, involving the use of deception to obtain an unjust or illegal advantage. [2] Fraudulent process in procurement has been bothering various corporations all over the world, including governmental departments [3] for a long time. Problems such as collusion between bidders and bid inviters, buyers' acceptance of bribes and creations of fictitious transactions all contain great possibilities of causing both financial and assets damages to the corporations who desire purchases of products from suppliers in the most economical way. As stated by Ying, vice president of Higher Education Forensic Accounting Professional Core Course Textbook Editorial Board, despite the ubiquity and profoundness of procurement fraud, the main auditing methods used by most audit departments are still book audit and reports reviewing. [4] During our contact with SF Express, such facts were admitted by the manager of auditing department of SF Express as well. In other words, even being the greatest express company in China, SF Express still lack the ability to execute initiative fraud auditing, not to mention other corporations who do not have equal scales as SF Express. After detailed communication with SF auditing department, we arrived at the conclusion that such currently unavoidable dependence on informers is mainly caused by both the huge amount of procurements operated in the company annually and the complicated steps needed in the procurement process, including bidding, enquiry, contract management and order management. Apparently, such enormous amount and complication are making the department unable to carefully audit procurements one by one with mere human power. Considering the need for handling data of large scale and relationship between different steps and elements in a complete procurement process, we figured out that a machine learning model established with artificial neural network algorithms could be a handful tool. We decided to rely on computers' great ability of process large-scale data and take advantages of specialties of a artificial neural network including its ability to perform nonlinear computing to deal with the complex relationships between different steps and elements. 
We expected our model to have the ability to inform its users the probability that one procurement involves fraudulence after a set of data related to that procurement was inputted. ## 2 Background ### History of Machine Learning Machine learning refers to the study of giving computers "the ability to learn without being explicitly programmed". [5] Growing from the field of artificial intelligence, machine learning researches gradually shifted their focus from serving as a tool for AI to solving practical problems using models from probability theory and statistics. [6] Main types of problems for machine learning include classification, regression, clustering and estimation, which relate very well with our objectives of procurement fraud detections. Neural networks arise from a branch of machine learning called deep learning, which contains multiple hidden layers in the learning models - in contrast to only one layer in traditional (shallow) models. Compare to the shallow ones, deep learning models are more capable to handle handle areas superior to human insights, but also require more computing power in general. [7] The invention of Nvidia's CUDA framework on its Graphical Processing Units (GPUs) in late 2000s resulted in the "big bang" of deep learning, as it continues to become a very hot and trending subject in an enormous number of disciplines. [8] ### Prior Work on Machine Learning Various efforts have already been made to take advantage of machine learning, especially neural networks, in the area of fraud detection. The most typical case is the detection of frauds in credit card transactions. A fair number of scholarly articles have supported the idea that a neural network is a feasible approach in credit card fraud detection. [9] In addition, Kaggle, a world-famous data science website, recently hosted a machine learning challenge in exactly this area. [10] Based on a pre-collected, pre-labelled dataset of transactions, participants in this challenge are required to come up with the best machine learning model to identify frauds as accurately as possible. These articles and events can benefit us a lot when moving this idea into the area of procurement auditing. They not only build up our confidence that our approach has a high chance to succeed in auditing, but also provide us with good neural network models or other learning algorithms as a starting point. On the other hand, since the Kaggle challenge relies on a detailed and organized dataset, and since the purpose of the challenge is to find the best machine learning model, we should always be aware of the two key components in this study - reliable and comprehensive data and a carefully structured model. ### Prior Work on Procurement Fraud Detection As pointed out by Hugo, Badenhorst-Weiss and Van Rooyen, compared with researches on other business functions, there has been a shortage of study on procurement fraud from the risk management perspective.[11] Moreover, the power of machine learning has not yet been applied to procurement fraud detection and prediction at the time this study is conducted. How to actively and effectively detect procurement has been a problem bothering enterprises and governments for a long time. Previous studies of procurement fraud investigation stated that purely manual investigation has an overwhelmingly high demand of well-trained investigators that could be unaffordable for even government departments. 
[12] Prior researchers have also built risk management models for procurement frauds and successfully proved these models to be effective. [13] However, they still fail to put auditors on the aggressive side of investigation. Last but not least, the study group designs the research to be Chinese-market-oriented due to the fact that all data we use is from a Chinese enterprise. We base our understanding of procurement fraud on Chinese laws [14] as well as studies of procurement auditing by other Chinese scholars. [15][16] ## 3 Data Specification ### Data Collection As we are expecting a machine learning model to be used in procurement auditing, a database containing historical records of real procurements conducted in one company is indeed necessary for machine training. Therefore, in order to make sure our machine could be "well trained", our first step is to collect data and establish a database with both huge enough scale and a balanced ratio between the amounts of positive cases (procurements that involve fraudulence) and negative cases (procurements that do not involve fraudulence). Being our cooperator, SF Express provides us with access to records of all their previously audited procurements dated after January 1st, 2015, as well as all procurements take place after January 1st, 2015, whose relevant information is stored in the systems applications and products database (SAP database). Considering the extreme asymmetry between the number of positive and negative cases, the research team decides to alleviate such asymmetry by conducting sampling on positive cases. As we have around 25000 negative cases in total, we also pulled 25000 positive cases out of the SAP database, to ensure a 1 to 1 ratio. When sampling positive cases, we first choose 5 random dates between 2015.1.1 and 2017.5.31 and make sure none of the 25000 fraudulence procurements take place on these dates. We then manually collect 5000 procurements from all procurements happened on each 5 dates and summarize them into one sheet using Arbuttus Analyzer. For all variable listed below(see _3.3 Input Variables_), we directly used the raw data stored in SF's SAP system in order to test the ability of the machine to function efficiently in real business and to test its multiusability in different businesses. However, data inputted will be further normalized during the process of machine learning.(see _5.1 Implementation Details_) ### Choice of Variables As SF Express only provides us with the access to the user interface of SAP system, other than the admin zone of it, we have no choice but to limit the number of variables used in machine learning. The research team decide to choose input variables which satisfy the criterions listed by Robert May, Graeme Dandy and Holger Maier [17]. That is, all variables must be chosen with the following five factors considered: _Relevance_ Apparently, all input variables of the model must contain their own relevance with the output. In our research, input variables need to have at least one feature that could be the direct cause of fraud or an obvious clue of fraud detection. Nevertheless, we also predict some variables might not be proved to have such required features as they were previously studied by linear models even though non-linear models as Neural Network may actually discover such features of these variables. #### Computational Effort With the increase of number of input variable, the server will have to bear more computational burden. 
The most direct effect of such burden would be the substantial expansion of computing time. In pratice, considering the concrete use of our model, efficiency should always be a necessary trait as the model is designed mainly for business use. After all, there is no point of using a machine learing model if it is even less efficient than human censoring. #### Training Difficulty Another problem that could be caused by the increasing number of variables is the difficulty for Nueral Network builders to sufficiently train their models. Redundant and irrelevant variables can slower the training speed since they increase the number of possible combinations of parameters, which will create locally optimal error values. Also, the fact that such variables are redundant and irrelevant can result in longer time for the machine to recognize their relationship with the error and to successfully map such ambiguous relationship. Unfortunately, in our study, redundant and irrelevant facts about procurements occupy a great amount of the whole SF database and thus result in the very limited number of variables eventually put into study. This problem will be further discussed in our expectation of future studies. #### Dimensionality A critical fact that one should learn about artificial neural network is the _curse of dimensionality_[18], the fact that as the dimensianlity of the model increases linearly, the total volumn of the domain of the modelling problem would increase exponentially. As the completion of one procurement requires step by step operation that involves the supplier, the purchaser and the regulatory authorities, the establishment of a multi-dimensional model is indeed unavoidable. In the study, our team decides to establish a total of four dimensions for the training machine, including: 1. numerical data of the procurement 2. information about the supplier 3. information about the purchaser 4. the property of fraud. #### Comprehensibility While early researchers and modellers like to refer to artificial neural network as a "black box" [19], recent studies ask models to be more self-explanatory. In particular, the fulfillments of the following 3 purposes are required: 1. The inputs should have a certain domain that produce certain outputs, which can be useful knowledge in the neural network itself. 2. The model should be able to verify that the response trends between the input and output data make sense. 3. The model should be able to discover new relationships between inputs and outputs, which reveal previously unknown insights into the underlying physical process. [20] ### Input Variables 1. **Procurement Serial Number (PSN)**: This is the serial number used in SAP system to identify each procurement. With each serial number, details of individual procurement could be traced. We choose serial number as a variable because when series of fraud take place, they could have similar or consecutive serial numbers. Another reason to choose this variable is that similar or consecutive serial numbers are capable of leading auditors to procurements which belongs to the same procurement contract or procurement program. 2. **Procurement Group Number (PGN)**: The very basic logic to choose procurement group number as an input variable is that crime and deviance have the likelihood to repeat in specific circumstances or neighborhoods. [21] With the memory function of machine learning, we expect the machine to conclude which type of procurement group would have a higher chance of committing fraud. 
On the other hand, superior's negligence is a cause of fraud. Therefore, we assume a specific procurement group can have more possibility to conduct fraud than another group. 3. **Procurement Organization Number (PON)**: Procurement organization number as a variable is chosen for almost the same reason as procurement group number. In previous procurement fraud studies, researches proved the importance of management in fraud prevention. [22] Unlike procurement group number, which refers more to a specific group who organize the procurement in detail, procurement organization number is rather a representation of the manager who leads the groups. 4. **Material Group Number (MGN)**: We decide to include this input variable based on the assumption that certain types of fraud like receiving kickbacks are more likely to happen during the purchase of specific types of products. What's more, the material group that a product belongs to also contains information of both the product itself and its possible compliments and substitutes. Such information could be helpful in further building of the model when computations of price elasticities are used to judge the legitimacy of one procurement. 5. **Net Price (NP)**: People's intention to conduct procurement fraud is related with the profit they could get from such action, which is then related to the net price of the product being purchased. In early studies of fraud auditing, net price is proved to have direct relationship with the occurence of fraud as well as the shrinkage of a company's economic benefit. [23] 6. **Purchase Amount (PA)**: One forgettable fact that is related to procurement fraud is that fraudsters can gain a large amount of profit even when his (or her) profit per unit is negligible. Especially in large scale companies who execute a substantial amount of purchases monthly, a fraudster gaining a little profit per unit can result in huge losses for the company. Also, according to experiences of workers of SF auditing department, smallness of unit profit usually increases the difficulty of manual auditing. However, that would surely not bother a model who has hundreds of millions of times computing power. 7. **Procurement Total Price (PTP)**: This is an inclusive representation of the above two factors, which more generally reflect the size of one business. It will not only provide the machine a detailed insight of how scale of business is related with possibility of fraud but also give the machine a better idea on the classification of each case. (See below, _Fraud Type_) 8. **Fraud Type (FT; If no fraud is contained, this value would be set to 0)**: We train the machine with fraud cases that are already classified so that it would have the ability to predict not only the possibility of fraud but also what kind of fraud one specific case might belong to. In practice, this feature would make auditors more efficient when they perform more detailed examination of a procurement at a later time. Also, this variable would help the machine discover special relationships between one type of fraud and other parameters, which exactly meets the need of the 3rd standard of comprehensiveness listed in _Choice of Variables_. **Supplier Serial Number (SSN)**: Previous studies on procurement fraud have found fraud undetectable because it usually involves collusion between members of staff and suppliers. 
[24] With supplier being one important part (as both conductor and benefiter) of conducting procurement, we expect suppliers who have a crime history to have more chances to break the law again. In the logic of machine learning, we feed the machine with suppliers who have crime histories so that it would automatically build a blacklist which makes the machine more "cautious" when same suppliers appear again. Below is a sample input chart to help readers better understand what our input actually look like. ## 4 Neural Network Model ### From Basic Concepts to MLP A neural network consists of several layers that manipulate on multi-dimensional arrays of data, often called tensors. [25] Each layer can be viewed as a function, whose both input and output are tensors of particular shapes. Internally, it has a set of parameters, which, given an input, compute the output based on a generalized version of matrix multiplication. Multiple such layers are stacked together, so that an input tensor of data is "fed forward" through these layers to produce the output, which is distinct enough for a machine to generate different predictions from different inputs. The training of a neural network is, therefore, a process to improve the parameters of the layers, so that the output could lead to the right prediction of a given task. Given an input to the model, the prediction generated by the output of the network is compared to the "label" - the ground truth corresponding to that input. If the results do not coincide, a "loss function" is calculated and applied to every layer to compute its "gradient" - the optimal direction to update this layer so it can generate outputs for more accurate predictions. The layer is then slightly adjusted along the gradient direction, a process called "gradient descent". [26] The above process is repeated thousands or millions Figure 1: Input sample of times, until the network reaches its optimum, where, given any input, the model has a highest chance to predict the correct label for it. Various types of neural network layers are commonly used for deep learning researches. Based on the type of the input, a layer can be designed to have parameters of different sizes and shapes, so that the layer can take maximal advantage of the input's spatial patterns. For example, if the input is an image, a "convolutional layer" can have a set of filters that acts on nearby pixels to generate output features, so that the spatial similarity between neighboring pixels is extracted. If the input is an English sentence, a "recurrent layer" can have a recursive structure within itself, so the relation between two adjacent words is represented. The choice of layer type is highly related to how the input data is formatted. Multi-layer perceptron network (abbr. MLP network) is a common class of neural networks that consists of linear layers - the most basic type of network layer. The parameters in a linear layer is simply a matrix: given an input of a 1-dimensional vector, the layer performs matrix multiplication on the vector and produces the result as another vector, a process equivalent to linear transformation. Often times, a fixed activation function acts on the result vector, so that non-linearity is added to the layer to make it more adjustable. The MLP network stacks several linear layers together, so that an input data vector is transformed into a feature vector, consisting of useful information for the machine to generate predictions. 
The intermediate linear layers are often called "hidden layers" of the network. Below is a figure illustrating an MLP network. MLP network is most suitable for input data that can be represented as 1-D vectors, in which different fields are not correlated with each other. As discussed in Section 3.2, the data we've designed highly resembles this structure, so MLP network is in fact the best choice of our task. ### Our Model Design Let's now go back to our task at hand: given a huge database of procurements, auditors need to first identify any suspicious procurements from the rest, then give each case a suitable label, so that punitive measures could be taken. So in fact, two models are needed for the above purposes: a binary model to distinguish between suspicious procurements and not suspicious ones, and a multiclass model to classify what kind of fraud is contained in a procurement already labeled as suspicious. In fact, the two models can both be formulated as one single classification problem: the binary model needs to choose a prediction between class (1) this procurement is suspicious, or class (2) not suspicious, and the multiclass model needs to choose from a list of classes of possible fraud types. So in the process of designing our network, we can inherit our model from a given layer structure Figure 2: The concept of a MLP network that has been proven to work with this type of inputs, and add a specific layer at the end for each model, in order to accommodate the different number of output classes. A "softmax layer" serves exactly this purpose: given a 1-dimensional tensor of many features, it can produce an output vector \(v=(v_{1},\ldots,v_{n})\) of fixed length \(n\), where each \(v_{i}\) denotes the probability that this input should be classified as belonging to class \(i\). We decide to use the following structure for our network, provided in the example of training an MLP network on the MNIST dataset, which achieves state-of-the-art accuracy of 98.4%: [27][28] In the above figure, the network contains two large, dense layers of 512 neurons, each followed by a dropout layer with ratio \(d=0.2\). The purpose of the dropout layers is to prevent overfitting, which is a quite common problem in MLP networks. Next, the output layer is a softmax layer, as mentioned above, which fixes the number of output prediction classes, and should have different sizes for our two models. Note that although the two models share mainly the same structure, there are no relations between them in the training and predicting process, since each model is given a different ground truth label for an input data, and so the gradient descent process would happen in different directions, diverging the two models eventually to have very different parameters. ## 5 Implementation, Testing and Results ### Implementation Details In the actual coding, we choose to use two popular neural network libraries to implement our models: tensorflow and keras. First, tensorflow serves as the backend of keras, and provides strong GPU computing support when we train our networks on our large number of data samples. Second, keras is well-known for its easy, clean abstractions of MLP network layers, which save us a lot of time when we code up, test, and optimize our neural network models. In order to better initialize our networks and so achieve better performance, we decide to normalize all our input data so they fall into the range \([0,1]\). 
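As a concrete illustration of the shared layer structure described above, a minimal Keras sketch follows; the input width, activation, optimizer, and number of fraud classes are illustrative assumptions rather than the study's exact settings. The normalization of the inputs, mentioned above, is discussed next.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_inputs, num_classes):
    """Shared structure of both models: two 512-unit dense layers, each
    followed by dropout 0.2, and a softmax output layer."""
    model = keras.Sequential([
        keras.Input(shape=(num_inputs,)),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="rmsprop",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

binary_model = build_model(num_inputs=8, num_classes=2)       # suspicious or not
multiclass_model = build_model(num_inputs=8, num_classes=10)  # fraud types (illustrative count)
```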
However, since the procurements are actually conducted at different scales, it is impossible to find a uniform standard of normalization for all data. So we make one pass through all samples, record the maximum and minimum values in Figure 3: Structure of our MLP network each column, and perform linear normalization on each point by the formula: \[normalized=\frac{original-min}{max-min} \tag{1}\] Our models are tested after training for 20 epochs, with a sample size of 50000, as specified in section 3.1. In addition, we implement 10-fold cross validation to get the average performance of our models on the whole dataset. ### Accuracy Results Below are the results of our binary model(left) and multiclass model (right), respectively: \[\begin{array}{|c|c|c|}\hline\text{\bf Fold \#}&\text{\bf Loss}&\text{\bf Accuracy} \\ \hline 1&0.3417&0.8297\\ \hline 2&0.2936&0.8476\\ \hline 3&0.2755&0.9273\\ \hline 4&0.3378&0.8235\\ \hline 5&0.3343&0.8251\\ \hline 6&0.3376&0.7841\\ \hline 7&0.2985&0.8460\\ \hline 8&0.3392&0.8343\\ \hline 9&0.4273&0.8333\\ \hline 10&0.3022&0.8974\\ \hline Average&0.3288&\text{\bf 0.8448}\\ \hline\end{array} \tag{2}\] Besides cross-validation, we've also extracted some actual data (different from the training samples above) to form a testing dataset. We choose 20 random rows from the dataset and record the prediction results of both models below. If we count the hits and misses, we can see that the accuracies shown here match quite well with the results obtained from above procedures. Quite obviously, both models perform quite well in the training process. For the binary model, the result of 84% accuracy is acceptable, based on the limited number of features. In an actual scenario, 16% wrong prediction rate means that most of the procurement frauds would be caught by this model, which is far more efficient than traditional auditing techniques. Still, there could be ways to improve this result, such as adding more training features, or using more complicated network structures; it is possible to expect such a model to hit an accuracy of over 95% if everything went smooth. And for the multiclass model, 98% accuracy is actually the state-of-the-art performance of a multiclass MLP network - the classification of fraud types is now working at its full power. Figure 4: Binary model predictions Figure 5: Multiclass model predictions Future Research & Conclusions ### Future Research The following subjects are to be studied. * Different information relevant to a single procurement might be stored in distinct databases. Such databases usually have different design methods and lengths of use, which makes data collection unnecessarily tedious. We suggest future researchers make "creating a more integrated database" one of their main targets. * Historical information related with fraud procurements is collected with high reliance on reports written manually. This fact significantly affects the number of procurements that could be used in machine learning. Considering the large-scale database needed in effective machine learning, more work should be done to increase the amount of cases that can be used to train the program. * Since raw data from databases have various formats, normalization measures need to be taken before we feed data to our models. However, due to the lack of consistency between columns of data, finding a good normalization method for all data is exceptionally hard. 
In our current experiments, we rely on some pre-determined assumptions on those data to construct different normalization formulae for different columns. It is urgent that a uniformed, concise and effective normalization method be worked out in future studies. ### Conclusions This research reveals both the probability of utilizing machine learning in traditionally manpower-consuming areas like fraud auditing and the difficulties that such utilization might face. The model testing part evidently shows that our model has a good performance in predicting the existence and further classification of fraudulence. With the SAP database provided by SF Express, our artificial neural network exhibits decent calculating speed and accuracy, which lead us to the confidence of further applying machine learning in different types of audits. A few incompatibilities still exist and further modifications of the model are required. Also, considering the fact that the database a company use directly decides the data to be inputted in training, the company itself should bring more uniformity to the database it use as well. ## Acknowledgments We would like to offer special thanks to Zhijun Lin, Raymond Li, Peng Su, Chunlei Zhang, Jiansheng Fan and the auditing department of SF Express for all information and data concerning procurement fraud, as well as every single support they provided. Also, we want to thank our reviewers and all of those who provided comments on prior drafts of this paper.
2310.12855
Convective scale and subadiabatic layers in simulations of rotating compressible convection
(abridged) Context: Rotation is thought to influence the size of convective eddies and the efficiency of convective energy transport in the deep convection zones of stars. Rotationally constrained convection has been invoked to explain the lack of large-scale power in observations of solar flows. Aims: The main aims are to quantify the effects of rotation on the scale of convective eddies and velocity, the depths of convective overshoot, and the subadiabatic Deardorff layers. Methods: Three-dimensional hydrodynamic simulations of rotating convection in Cartesian domains were run. The results were compared with theoretical scaling results that assume a balance between Coriolis, inertial, and buoyancy (Archimedean) forces (CIA balance). Results: The scale of convective eddies decreases as rotation increases, and ultimately reaches a rotationally constrained regime consistent with the CIA balance. Using a new measure of the rotational influence on the system, it is shown that even the deep parts of the solar convection zone are not in the rotationally constrained regime. The simulations capture the slowly and rapidly rotating scaling laws predicted by theory, and the Sun appears to be in between these two regimes. Both the overshooting depth and the extent of the Deardorff layer decrease as rotation becomes more rapid. For sufficiently rapid rotation the Deardorff layer is absent. Conclusions: Relating the simulations with the Sun suggests that the convective scale even in the deep parts of the Sun is only mildly affected by rotation and that some other mechanism is needed to explain the lack of strong large-scale flows in the Sun. Taking the current results at face value, the overshoot and Deardorff layers are estimated to span roughly five per cent of the pressure scale height at the base of the convection zone in the Sun.
Petri J. Käpylä
2023-10-19T16:06:00Z
http://arxiv.org/abs/2310.12855v1
# Convective scale and subadiabatic layers in ###### Abstract Context:Rotation is thought to influence the size of convective eddies and the efficiency of convective energy transport in the deep convection zones of stars. Rotationally constrained convection has been invoked to explain the lack of large-scale power in observations of solar flows. Aims:The main aims are to quantify the effects of rotation on the scale of convective eddies and velocity, the depths of convective overshoot, and the subadiabatic Deardorff layers. Methods:Moderately turbulent three-dimensional hydrodynamic simulations of rotating convection in local Cartesian domains were run. The rotation rate and luminosity of the simulations are varied to probe the dependency of the results on Coriolis, Mach, and Richardson numbers measuring the incluences of rotation, compressibility, and stiffness of the radiative layer. The results were compared with theoretical scaling results that assume a balance between Coriolis, inertial, and buoyancy (Archemedean) forces, which is also referred to as the CIA balance. Results:The horizontal scale of convective eddies decreases as rotation increases, and ultimately reaches a rotationally constrained regime consistent with the CIA balance. Using a new measure of the rotational influence on the system, it is shown that even the deep parts of the solar convection zone are not in the rotationally constrained regime. The simulations capture the slowly and rapidly rotating scaling laws predicted by theory, and the Sun appears to be in between these two regimes. Both, the overshooting depth and the extent of the Deardorff layer, decrease as rotation becomes more rapid. For sufficiently rapid rotation the Deardorff layer is absent due to the symmetrization of up- and downflows. However, for the most rapidly rotating cases the overshooting increases again due to unrealistically large Richardson numbers that allow convective columns penetrate deep into the radiative layer. Conclusions:Relating the simulations with the Sun suggests that the convective scale even in the deep parts of the Sun is only mildly affected by rotation and that some other mechanism is needed to explain the lack of strong large-scale flows in the Sun. Taking the current results at face value, the overshoot and Deardorff layers are estimated to span roughly five per cent of the pressure scale height at the base of the convection zone in the Sun. ## 1 Introduction The theoretical understanding of solar and stellar convection was shaken roughly a decade ago when helioseismic analysis suggested that the velocity amplitudes in the deep solar convection zone are orders of magnitude smaller than anticipated from theoretical and numerical models (Hanasoge et al., 2012). Significant effort has been put in refining these estimates but a gaping discrepancy between numerical models and helioseismology remains (e.g. Hanasoge et al., 2016; Proxaut, 2021); see, however Greer et al. (2015). This issue is now refererred to as the convective conundrum (O'Mara et al., 2016). Several solutions to this conundrum have been proposed, including high effective Prandtl number (e.g. Karak et al., 2018), rotationally constrained convection (Featherstone & Hindman, 2016), and that the superadiabatic layer in the Sun is much thinner than thus far thought (Brandenburg, 2016); see also Kapyla et al. (2023) and references therein. The two latter ideas are explored further in the current study. 
The idea that convection is rotationally constrained in the deep convection zone (CZ) is already borne out of mixing length models of solar convection that imply velocity amplitudes \(u_{\rm conv}\) of about \(10\) m s\({}^{-1}\) for the deep convection zone, while the convective length scale \(\ell_{\rm conv}\), which is the mixing length, is of the order of \(100\) Mm, yielding a Coriolis number \({\rm Co}_{\odot}=2\Omega_{\odot}\ell_{\rm conv}/u_{\rm conv}\) of the order of 10 (e.g. Ossendrijver, 2003; Schumacher & Sreenivasan, 2020). However, this estimate does not take into account the decreasing length scale due to the rotational influence on convection. Assuming that the Coriolis, inertial, and buoyancy (Archimedean) forces balance, also known as the CIA balance (e.g. Stevenson, 1979; Ingersoll & Pollard, 1982; King & Buffett, 2013; Barker et al., 2014; Aurnou et al., 2020; Vasil et al., 2021), implies that the convective scale is given by \(\ell_{\rm conv}\propto{\rm Co}^{-1/2}\), where \({\rm Co}=2\Omega H/u_{\rm conv}\) is a global Coriolis number, where \(\Omega\) is the rotation rate and where \(\dot{H}\) is a length scale corresponding to the system size (e.g. Aurnou et al., 2020). This idea has been explored recently by Featherstone & Hindman (2016) and Vasil et al. (2021) who suggested that the largest convectively driven scale in the Sun coincides with that of supergranulation due to rotationally constrained convection in the deep CZ. These studies assumed from the outset that convection is strongly rotationally affected. Here a somewhat different perspective is taken in that an attempt is made to assess whether this assumption holds for the deep solar CZ. Furthermore, in addition to \(\ell_{\rm conv}\), the scalings of various quantities based on predictions from the CIA balance are studied over a wide range of rotation rates. Simulations of stratified overshooting convection have revealed that deep parts of CZs are often weakly stably stratified (e.g., 1993; 1993; 2015; 2017; 2017; 2017; 2019). This is interpreted such that convection is driven by the cooling at the surface that induces cool downflow plumes which pierce through the entire convection zone and penetrate deep into the stable layers below. This process has been named entropy rain (e.g. 2016) and goes back to ideas presented by 1997 and the simulations of 1998; 1998. This picture of convection is a clean break from the canonical view in which convection is driven throughout the convection zone by a superadiabatic temperature gradient, an idea which is also encoded into the mixing length concept (e.g. 1953; 1958). Theoretically this can be understood such that the convective energy flux that is traditionally proportional to the entropy gradient is supplemented by a non-gradient term proportional to the variance of entropy fluctuations (1961, 1966). Analysis of the force balance of up- and downflows in non-rotating hydrodynamic simulations supports the idea of surface-driven non-local convection (e.g. 2017; 2019, 2021). Thus far these studies have mostly concentrated on non-rotating convection (see, however 2019; 2021). Here rotation is included to study its impact on the formation and extent of stably stratified Deardorff layers where the convective flux runs counter to the entropy gradient. Another aspect of interest in astrophysics is convective overshooting (see, e.g. 2023, for a recent review). Numerical studies targeting specifically overshooting have largely concentrated on non-rotating cases (e.g. 
1995, 1998; 2000; 2017; 2019; 2022), and the effects of rotation have received much less attention (e.g. 2003; 2004; 2004). It is generally thought that rotation leads to reduction of overshooting depth (e.g. 2003) but a comprehensive study of this is still lacking. The remainder of the paper is organized as follows: the model is described in Section 2, whereas the results and conclusions of the study are presented in Sections 3 and 4, respectively. The derivations related to the CIA balance are presented in Appendix A. ## 2 The model The model is the same as that used in (2019, 2021). The Pencil Code (Pencil Code Collaboration et al., 2021)1 was used to produce the simulations. Convection is modeled in a Cartesian box with dimensions \((L_{x},L_{y},L_{z})=(4,4,1.5)d\), where \(d\) is the depth of the initially convectively unstable layer. The equations for compressible hydrodynamics are solved: Footnote 1: [https://github.com/pencil-code/](https://github.com/pencil-code/) \[\frac{D\ln\rho}{Dt} = -\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}, \tag{1}\] \[\frac{D\mathbf{u}}{Dt} = \mathbf{g}-\frac{1}{\rho}(\mathbf{\nabla}\mathbf{p}-\mathbf{\nabla}\mathbf{\cdot}2 \nu\mathbf{\rho}\mathbf{\mathsf{S}})-2\mathbf{\Omega}\times\mathbf{u},\] (2) \[T\frac{Ds}{Dt} = -\frac{1}{\rho}\left[\mathbf{\nabla}\mathbf{\cdot}(\mathbf{F}_{\rm rad}+\mathbf{F }_{\rm SGS})-\mathcal{C}\right]+2\nu\mathbf{\mathsf{S}}^{2}, \tag{3}\] where \(D/Dt=\partial/\partial t+\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\) is the advective derivative, \(\rho\) is the density, \(\mathbf{u}\) is the velocity, \(\mathbf{g}=-g\mathbf{e}_{z}\) is the acceleration due to gravity with \(g>0\), \(p\) is the pressure, \(T\) is the temperature, \(s\) is the specific entropy, \(\nu\) is the constant kinematic viscosity, and \(\mathbf{\Omega}=\Omega_{0}(-\sin\theta,0,\cos\theta)^{T}\) is the rotation vector, where \(\theta\) is the colatitude. \(\mathbf{F}_{\rm rad}\) and \(\mathbf{F}_{\rm SGS}\) are the radiative and turbulent subgrid scale (SGS) fluxes, respectively, and \(\mathcal{C}\) describes cooling near the surface. \(\mathbf{\mathsf{S}}\) is the traceless rate-of-strain tensor with \[\mathsf{S}_{ij}=\tfrac{1}{2}(u_{i,j}+u_{j,i})-\tfrac{1}{3}\delta_{ij}\mathbf{ \nabla}\mathbf{\cdot}\mathbf{u}. \tag{4}\] The gas is assumed to be optically thick and fully ionized, where radiation is modeled via the diffusion approximation. The ideal gas equation of state \(p=(c_{\rm P}-c_{\rm V})\rho T=\mathcal{R}\rho T\) applies, where \(\mathcal{R}\) is the gas constant, and \(c_{\rm P}\) and \(c_{\rm V}\) are the specific heats at constant pressure and volume, respectively. The radiative flux is given by \[\mathbf{F}_{\rm rad}=-K\mathbf{\nabla}T, \tag{5}\] where \(K\) is the radiative heat conductivity \[K=\frac{16\sigma_{\rm SB}T^{3}}{3\kappa\rho}, \tag{6}\] where \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant and \(\kappa\) is the opacity. Assuming that the opacity is a power law of the form \(\kappa=\kappa_{0}(\rho/\rho_{0})^{a}(T/T_{0})^{b}\), where \(\rho_{0}\) and \(T_{0}\) are reference values of density and temperature, the heat conductivity is \[K(\rho,T)=K_{0}(\rho/\rho_{0})^{-(a+1)}(T/T_{0})^{3-b}. \tag{7}\] The choice \(a=1\) and \(b=-7/2\) corresponds to the Kramers opacity law (2004), which was used in convection simulations by 1990 and 2000. 
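To make the steep thermodynamic dependence of Eq. (7) concrete, a minimal numerical sketch is given below; the profiles and normalizations are purely illustrative and not those of the simulations.

```python
import numpy as np

def kramers_conductivity(rho, T, K0=1.0, rho0=1.0, T0=1.0, a=1.0, b=-3.5):
    """Radiative heat conductivity of Eq. (7),
    K = K0 (rho/rho0)^-(a+1) (T/T0)^(3-b); a=1, b=-7/2 is the Kramers law."""
    return K0 * (rho / rho0) ** (-(a + 1.0)) * (T / T0) ** (3.0 - b)

# illustrative profiles in dimensionless units: K drops steeply where T falls,
# confining efficient radiative diffusion to the deep layers
z = np.linspace(-0.45, 1.05, 151)
T = 1.0 + 2.0 * (1.05 - z)        # toy temperature increasing with depth
rho = T ** 1.5                    # toy density with a roughly isentropic scaling
K = kramers_conductivity(rho, T)
```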
Additional turbulent SGS diffusivity is applied for the entropy fluctuations with \[\mathbf{F}_{\rm SGS}=-\rho T\chi_{\rm SGS}\mathbf{\nabla}s^{\prime}, \tag{8}\] where \(s^{\prime}(\mathbf{x})=s(\mathbf{x})-\overline{s}\) with the overbar indicating horizontal averaging. The coefficient \(\chi_{\rm SGS}\) is constant in the whole domain and \(\mathbf{F}_{\rm SGS}\) has a negligible contribution to the net energy flux such that \(\overline{\mathbf{F}}_{\rm SGS}\approx 0\). The cooling at the surface is described by \[\mathcal{C}=\rho c_{\rm P}\frac{T_{\rm cool}-T}{\tau_{\rm cool}}f_{\rm cool}(z), \tag{9}\] where \(\tau_{\rm cool}\) is a cooling time, \(T=e/c_{\rm V}\) is the temperature where \(e\) is the internal energy, and where \(T_{\rm cool}=T_{\rm top}\) is a reference temperature corresponding to the fixed value at the top boundary. The advective terms in Equations (1) to (3) are written in terms of a fifth-order upwinding derivative with a hyperdiffusive sixth-order correction with a local flow-dependent diffusion coefficient; see Appendix B of 2006. ### Geometry, initial and boundary conditions The computational domain is a rectangular box where the vertical coordinate is \(z_{\rm bot}\leq z\leq z_{\rm top}\) with \(z_{\rm bot}/d=-0.45\), \(z_{\rm top}/d=1.05\). The horizontal coordinates \(x\) and \(y\) run from \(-2d\) to \(2d\). The initial stratification consists of three layers. The two lower layers are polytropic with polytropic indices \(n_{1}=3.25\) (\(z_{\rm bot}/d\leq z/d\leq 0\)) and \(n_{2}=1.5\) (\(0\leq z/d\leq 1\)). The former follows from a radiative solution that is a polytrope with index \(n=(3-b)/(1+a)\); see Barekat & Brandenburg (2014), Appendix A of Brandenburg (2016), and Figure 1. The latter corresponds to a marginally stable isentropic stratification. Initially the uppermost layer above \(z/d=1\) is isothermal, mimicking a photosphere where radiative cooling is efficient. Convection ensues because the system is not in thermal equilibrium due to the cooling near the surface and due to the inefficient radiative diffusion in the layers above \(z/d=0\). The velocity field is initially seeded with small-scale Gaussian noise with amplitude \(10^{-5}\sqrt{dg}\). The horizontal boundaries are periodic and the vertical boundaries are impenetrable and stress free according to \[\frac{\partial u_{x}}{\partial z}=\frac{\partial u_{y}}{\partial z}=u_{z}=0. \tag{10}\] A constant energy flux is imposed at the lower boundary by setting \[\frac{\partial T}{\partial z}=-\frac{F_{\rm bot}}{K_{\rm bot}}, \tag{11}\] where \(F_{\rm bot}\) is the fixed input flux and \(K_{\rm bot}=K(x,y,z_{\rm bot})\). Constant temperature \(T=T_{\rm top}\) is imposed on the upper vertical boundary. ### Units and control parameters The units of length, time, density, and entropy are given by \[[x]=d,\ \ [t]=\sqrt{d/g},\ \ [\rho]=\rho_{0},\ \ [s]=c_{\rm P}, \tag{12}\] where \(\rho_{0}\) is the initial value of density at \(z=z_{\rm top}\). The models are fully defined by choosing the values of \(\nu\), \(\Omega_{0}\), \(\theta\)\(g\), \(a\), \(b\), \(K_{0}\), \(\rho_{0}\), \(T_{0}\), \(\tau_{\rm cool}\), and the SGS Prandtl number \[{\rm Pr}_{\rm SGS}=\frac{\nu}{\chi_{\rm SGS}}, \tag{13}\] along with the cooling profile \(f_{\rm cool}(z)\). The values of \(K_{0}\), \(\rho_{0}\), \(T_{0}\) are subsumed into another constant \(\widetilde{K}_{0}=K_{0}\rho_{0}^{a+1}T_{0}^{b-3}\) which is fixed by assuming the radiative flux at \(z_{\rm bot}\) to equal \(F_{\rm bot}\) at \(t=0\). 
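For orientation, the piecewise polytropic initial state of Sect. 2.1 can be sketched as follows. This is a toy construction assuming an ideal monatomic gas in hydrostatic equilibrium, with illustrative normalizations and surface temperature; it is not the actual initialization of the runs.

```python
import numpy as np

def initial_stratification(z, g=1.0, R=1.0, T_top=0.05, n1=3.25, n2=1.5):
    """Toy three-layer hydrostatic state: radiative polytrope (index n1) for
    z<0, isentropic polytrope (n2) for 0<=z<1, isothermal above z=1.
    For an ideal-gas polytrope dT/dz = -g/(R (n+1)) and rho ~ T^n."""
    T = np.full_like(z, T_top)
    rho = np.exp(-g * (z - 1.0) / (R * T_top))        # isothermal layer, rho(z=1)=1

    cz = (z >= 0.0) & (z < 1.0)                       # convectively unstable layer
    T[cz] = T_top + g / (R * (n2 + 1.0)) * (1.0 - z[cz])
    rho[cz] = (T[cz] / T_top) ** n2

    T0 = T_top + g / (R * (n2 + 1.0))                 # temperature at z = 0
    rz = z < 0.0                                      # stably stratified layer
    T[rz] = T0 + g / (R * (n1 + 1.0)) * (-z[rz])
    rho[rz] = (T0 / T_top) ** n2 * (T[rz] / T0) ** n1
    return T, rho

z = np.linspace(-0.45, 1.05, 151)
T, rho = initial_stratification(z)
```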
The cooling profile \(f_{\rm cool}(z)=1\) above \(z/d=1\) and \(f_{\rm cool}(z)=0\) below \(z/d=1\), connecting smoothly across the interface over a width of \(0.025d\). The quantity \(\xi_{0}=H_{\rm p}^{\rm top}/d={\cal R}T_{\rm top}/gd\) sets the initial pressure scale height at the surface and determining the initial density stratification. All of the current simulations have \(\xi_{0}=0.054\). Prandtl number based on the radiative heat conductivity is \[{\rm Pr}(\mathbf{x})=\frac{\nu}{\chi(\mathbf{x})}, \tag{14}\] where \(\chi(\mathbf{x})=K(\mathbf{x})/c_{\rm P}\rho(\mbox{\boldmath $x$})\), quantifies the relative importance of viscous to temperature diffusion. Unlike many other simulations, \({\rm Pr}\) is not an input parameter because of the non-linear dependence of the radiative diffusivity on the ambient thermodynamics. The dimensionless normalized flux is given by \[{\cal F}_{\rm n}=\frac{F_{\rm bot}}{\rho(z_{\rm bot})\epsilon_{\rm s}^{3}(z_{ \rm bot})}, \tag{15}\] where \(\rho(z_{\rm bot})\) and \(c_{\rm s}(z_{\rm bot})\) are the density and the sound speed, respectively, at \(z=z_{\rm bot}\) at \(t=0\). At the base of the solar CZ \({\cal F}_{\rm n}\approx 4\cdot 10^{-11}\)(e.g. Brandenburg et al. 2005), whereas in the current fully compressible simulations several orders of magnitude larger values are used. The effect of rotation is quantified by the Taylor number \[{\rm Ta}=\frac{4\Omega_{0}^{2}d^{4}}{\nu^{2}}, \tag{16}\] which is related to the Ekman number via \({\rm Ek}={\rm Ta}^{-1/2}\). The Rayleigh number based on the energy flux is given by \[{\rm Ra}_{\rm F}=\frac{gd^{4}F_{\rm bot}}{c_{\rm P}\rho T\nu\chi^{2}}. \tag{17}\] This can be used to construct a flux-based diffusion-free modified Rayleigh number (e.g. Christensen 2002; Christensen & Aubert 2006) \[{\rm Ra}_{\rm F}^{\star}=\frac{{\rm Ra}_{\rm F}}{{\rm Pr}^{2}{\rm Ta}^{3/2}}, \tag{18}\] In the current set-up \({\rm Ra}_{\rm F}^{\star}\) is given by \[{\rm Ra}_{\rm F}^{\star}=\frac{gF_{\rm bot}}{8c_{\rm P}\rho T\Omega_{0}^{3}d^{ 2}}. \tag{19}\] A reference depth needs to be chosen because \({\rm Ra}_{\rm F}^{\star}={\rm Ra}_{\rm F}^{\star}(z)\). Furthermore, \(H\equiv c_{\rm P}T/g\) is a length scale related to the pressure scale height. The choice \(d=H=H_{\rm p}\), where \(H_{\rm p}\equiv-(\partial\ln p/\partial z)^{-1}\) is the pressure scale height at the base of the convection zone, leads to \[{\rm Ra}_{\rm F}^{\star}=\frac{F_{\rm bot}}{8\rho\Omega^{3}H_{\rm p}^{3}}. \tag{20}\] ### Diagnostics quantities The global Reynolds and SGS Peclet numbers describe the strength of advection versus viscosity and SGS diffusion \[{\rm Re}=\frac{u_{\rm rms}}{\nu k_{1}},\ \ \ {\rm Pe}_{\rm SGS}=\frac{u_{\rm rms}}{\chi_{\rm SGS }k_{1}}, \tag{21}\] where \(u_{\rm rms}\) is the volume averaged rms-velocity, and where \(k_{1}=2\pi/d\) is an estimate of the largest eddies in the system. The Reynolds and Peclet number based on the actual convective length scale \(\ell\) are given by \[{\rm Re}_{\ell}=\frac{u_{\rm rms}\ell}{\nu},\ \ \ {\rm Pe}_{\ell}=\frac{u_{\rm rms }\ell}{\chi_{\rm SGS}}. \tag{22}\] Here \(\ell=k_{\rm mean}^{-1}\) is chosen, where \(k_{\rm mean}=k_{\rm mean}(z)\) is the mean wavenumber (e.g. Christensen & Aubert 2006; Schrinner et al. 2012), and which is computed from \[k_{\rm mean}(z)=\frac{\int kE(k,z)dk}{\int E(k,z)dk}, \tag{23}\] where \(E(k,z)\) is the power spectrum of the velocity field with \(\mathbf{u}^{2}(z)=\int E(k,z)dk\). 
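In practice, Eq. (23) is evaluated from horizontal Fourier transforms of the velocity field. A minimal sketch for a single horizontal plane and one velocity component is given below; the grid, box size, and test field are illustrative.

```python
import numpy as np

def mean_wavenumber(u_slice, L=4.0):
    """Mean horizontal wavenumber (Eq. 23) of one velocity component on a
    horizontal plane, from its two-dimensional power spectrum.
    u_slice: (nx, ny) array; L: horizontal box size in units of d."""
    nx, ny = u_slice.shape
    power = np.abs(np.fft.fft2(u_slice)) ** 2
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=L / ny)
    kk = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    # every mode contributes to its |k| shell, so the k-weighted average
    # of the modal power is the mean wavenumber
    return (kk * power).sum() / power.sum()

# toy field with a single horizontal wavenumber 2*pi; the result is ~2*pi
x = np.linspace(0.0, 4.0, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
kmean = mean_wavenumber(np.sin(2.0 * np.pi * X))
```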
In general the total thermal diffusivity is given by \[\chi_{\rm eff}(\mathbf{x})=\chi_{\rm SGS}+\chi(\mathbf{x}). \tag{24}\] However, in all of the current simulations \(\chi\ll\chi_{\rm SGS}\) in the CZ such that the Prandtl and Peclet numbers based on \(\chi_{\rm eff}\) differ very little from \({\rm Pr}_{\rm SGS}\) and \({\rm Pe}_{\rm SGS}\). The Rayleigh number is defined as \[{\rm Ra}\ =\ \frac{gd^{4}}{\nu\chi}\left(-\frac{1}{c_{\rm P}}\frac{{\rm d}s}{{ \rm d}z}\right)_{\rm hs}, \tag{25}\] which varies as a function of height and is quoted near the surface at \(z/d=0.85\). The Rayleigh number in the hydrostatic, non-convecting, state is measured from a one-dimensional model that is run to thermal equilibrium, and where the convectively unstable layer is confined to the near-surface layers (Brandenburg, 2016); see also Figure 1. In the hydrostatic case \(\chi=\chi(z)\) and \(\chi_{\rm{SGS}}\), which affects only the fluctuations, plays no role. The turbulent Rayleigh number is quoted from the statistically stationary state using the horizontally averaged mean state, \[{\rm{Ra}}_{\rm{t}} = \left.\frac{gd^{4}}{\nu\overline{\chi}_{\rm{eff}}}\left(-\frac{1 }{c_{\rm{P}}}\frac{{\rm{d}}\overline{s}}{{\rm{d}}z}\right)\right|_{z/d=0.85}, \tag{26}\] where the overbar denotes temporal and horizontal averaging. Rotational influence on the flow is measured by several versions of the Coriolis number. First, the global Coriolis number is defined as \[{\rm{Co}}=\frac{2\Omega_{0}}{u_{\rm{rms}}k_{1}}, \tag{27}\] where \(k_{1}=2\pi/d\) is the wavenumber corresponding to the system scale. This definition neglects the changing length scale as a function of rotation and overestimates the rotational influence when rotation is rapid and the convective scale is smaller. A definition that takes the changing length scale into account is given by the vorticity-based Coriolis number \[{\rm{Co}}_{\omega}=\frac{2\Omega_{0}}{\omega_{\rm{rms}}}, \tag{28}\] where \(\omega_{\rm{rms}}\) is the volume-averaged rms-value of the vorticity \(\mathbf{\omega}=\mathbf{\nabla}\times\mathbf{u}\). Another definition of the Coriolis number taking into account the changing integral length scale is given by \[{\rm{Co}}_{\ell}=\frac{2\Omega_{0}\ell}{u_{\rm{rms}}}, \tag{29}\] where \(\ell=(\overline{k}_{\rm{mean}})^{-1}\) where the overbar denotes averaging over time and CZ. This is a commonly used choice in simulations of convection in spherical shells (Schrinner et al., 2012; Gastine et al., 2014); see also Aumou et al. (2020) who considered convection in the limits of slow rotation and rapid rotation. Let us further define a flux Coriolis number \({\rm{Co}}_{\rm{P}}\)2 as Footnote 2: The same quantity was referred to as stellar Coriolis number in Kapylá (2023). \[{\rm{Co}}_{\rm{P}}\equiv\frac{2\Omega_{0}H_{\rm{p}}}{u_{\rm{flux}}}=2\Omega_{ 0}H_{\rm{p}}\left(\frac{\rho_{\star}}{F_{\rm{bot}}}\right)^{1/3}, \tag{30}\] where \(u_{\rm{flux}}\) is a reference velocity obtained from \[F_{\rm{tot}}=\rho_{\star}u_{\star}^{3}, \tag{31}\] where \(\rho_{\star}\) is a reference density, taken here at the bottom of the CZ. \(u_{\rm{flux}}\) does not, and does not need to, correspond to any actual velocity and it rather represents the available energy flux. Therefore \({\rm{Co}}_{\rm{P}}\) does not depend on any dynamical flow speed or length scale which are set by complicated interactions of convection, rotation, magnetism, and other relevant physics. 
On the other hand, \({\rm{Co}}_{\rm{P}}\) depends only on quantities that can either be measured (\(F_{\rm{tot}}\), \(\Omega_{0}\)) or deduced from stellar structure models with relatively little ambiguity (\(H_{\rm{p}}\), \(\rho_{\star}\)). The significance of \({\rm{Co}}_{\rm{P}}\) is seen when rearranging Eq. (20) to yield \[(2\Omega_{0}H_{\rm{p}})^{3}\left.\frac{\rho}{F_{\rm{bot}}}=({\rm{Ra}}_{\rm{F} }^{\star})^{-1}.\right. \tag{32}\] Identifying the lhs with \({\rm{Co}}_{\rm{F}}^{3}\), Eq. (30), gives \[{\rm{Co}}_{\rm{P}}=({\rm{Ra}}_{\rm{F}}^{\star})^{-1/3}. \tag{33}\] An often used phrase in the context of convection simulations targeting the Sun is that while all the other system parameters are beyond the reach of current simulations, the rotational influence on the flow can be reproduced (e.g. Kapylá et al., 2023). Equation (33) gives this a more precise meaning in that the solar value of \({\rm{Ra}}_{\rm{F}}^{\star}\) needs to be matched by any simulation claiming to model the Sun. The net vertical energy flux consists of contributions due to radiative diffusion, enthalpy, kinetic energy flux, and viscous fluxes as well as the surface cooling: \[\overline{F}_{\rm{rad}} = -\overline{K}\frac{{\rm{d}}\overline{T}}{{\rm{d}}z}, \tag{34}\] \[\overline{F}_{\rm{enth}} = c_{\rm{P}}\overline{(\rho u_{z})^{T}},\] (35) \[\overline{F}_{\rm{kin}} = \frac{1}{2}\rho\overline{u^{2}u_{z}^{\prime}},\] (36) \[\overline{F}_{\rm{visc}} = -2\nu\overline{\rho u_{i}}\overline{\zeta_{iz}}\] (37) \[\overline{F}_{\rm{cool}} = -\int_{z_{\rm{bot}}}^{z_{\rm{top}}}\overline{\mathcal{C}}{\rm{d}}z. \tag{38}\] Here the primes denote fluctuations and overbars horizontal averages. The total convected flux (Cattaneo et al., 1991) is the sum of the enthalpy and kinetic energy fluxes: \[\overline{F}_{\rm{conv}}=\overline{F}_{\rm{enth}}+\overline{F}_{\rm{kin}}, \tag{39}\] which corresponds to the convective flux in, for example, mixing length models of convection. Another useful diagnostic is buoyancy or Brunt-Vaisala frequency, which is given by \[N^{2}=\frac{g}{c_{\rm{P}}}\frac{ds}{dz}, \tag{40}\] and describes the stability of an atmosphere with respect to buoyancy fluctuations if \(N^{2}>0\). Finally, the Richardson number related to rotation in the stably stratified layers is defined as \[{\rm{Ri}}_{\Omega}=\frac{N^{2}}{\Omega_{0}^{2}}. \tag{41}\] Averages denoted by overbars are typically taken over the horizontal directions and time, unless specifically stated otherwise. ## 3 Results Three sets of simulations with varying \(\mathscr{F}_{\rm{n}}\) and approximately the same values of \({\rm{Co}}_{\rm{F}}\) are presented. These will be referred to as Sets A, B, and C. The non-rotating runs in these sets correspond respectively to Runs K3, K4, and K5 in Kapylá (2019) in terms of \(\mathscr{F}_{\rm{n}}\), although lower values of \(\nu\) and \(\chi_{\rm{SGS}}\) were used in the runs of the present study. Note that when \(\mathscr{F}_{\rm{n}}\) is varied between the sets of simulations, the rotation rate \(\Omega_{0}\), and the diffusivities \(\nu\) and \(\chi_{\rm{SGS}}\) are varied at the same time proportional to \(\mathscr{F}_{\rm{n}}^{1/3}\)(see, e.g. Kapylá et al., 2020, and Appendix A for more details). Furthermore, the cooling time \(\tau_{\rm{cool}}\) is varied proportional to \(\mathscr{F}_{\rm{n}}\). The current simulations have modest Reynolds and Peclet numbers in comparison to astrophysically relevant parameter regimes (e.g. Ossendrijver, 2003; Kupka and Muthsam, 2017; Kapylá et al., 2023); see Table 1. 
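The co-variation of parameters between the sets described above can be verified directly: rescaling \(\Omega_{0}\), \(\nu\), and \(\chi_{\rm SGS}\) by \(\mathscr{F}_{\rm n}^{1/3}\) leaves \({\rm Co}_{\rm F}\) and \({\rm Ta}\) unchanged. The sketch below illustrates this with placeholder reference values; for simplicity the normalized flux is treated as the flux in code units.

```python
# Sketch: co-varying Omega_0 and nu with F_n^{1/3} keeps Co_F and Ta fixed
# while the normalized flux changes between the sets. All reference values
# are placeholders in code units (assumptions), not actual run parameters.

def co_f_and_ta(F_bot, Omega0, nu, rho_star=1.0, Hp=0.5, d=1.0):
    u_flux = (F_bot / rho_star) ** (1.0 / 3.0)
    Co_F = 2.0 * Omega0 * Hp / u_flux      # Eq. (30)
    Ta = 4.0 * Omega0**2 * d**4 / nu**2    # Eq. (16)
    return Co_F, Ta

F_ref, Omega_ref, nu_ref = 4.6e-6, 0.2, 2.0e-4      # "Set A"-like reference (assumed)
for label, F_n in [("A", 4.6e-6), ("B", 1.8e-6), ("C", 0.9e-6)]:
    s = (F_n / F_ref) ** (1.0 / 3.0)
    Co_F, Ta = co_f_and_ta(F_n, Omega_ref * s, nu_ref * s)
    print(f"Set {label}: F_n = {F_n:.1e}, Co_F = {Co_F:.3f}, Ta = {Ta:.3e}")
# Co_F and Ta come out identical for the three sets; only F_n differs.
```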
Earlier studies of non-rotating convection suggest that results obtained in such modestly turbulent regimes remain robust also at the highest affordable resolutions (Käpylä, 2021). This is due to the fact that the main energy transport mechanism (convection) and the main driver of convection (surface cooling) are not directly coupled to the diffusivities. However, the current cases with rotation are more complicated because the supercriticality of convection decreases with increasing rotation rate (e.g. Chandrasekhar 1961; Roberts 1968). The effects of decreasing supercriticality are not studied systematically here, but subsets of the runs in Set A were repeated with higher resolutions (\(576^{3}\) and \(1152^{3}\)) and correspondingly higher \(\rm{Ra}_{F}\), \(\rm{Re}\), and \(\rm{Pe}\); see Sets Am and Ah in Table 1. ### Hydrostatic solution Earlier studies have shown that a purely radiative hydrostatic solution with the Kramers opacity law is a polytrope with index \(n_{\rm rad}=3.25\) (Barekat & Brandenburg 2014; Brandenburg 2016). Such a solution arises in the case where \(K={\rm const.}\) and \(\nabla_{z}T={\rm const.}\) To see if this configuration is recovered with the current set-up, Equations (1) to (3) were solved numerically in a one-dimensional \(z\)-dependent model with otherwise the same parameters as in the 3D simulations corresponding to the runs in Set A. The resulting temperature profile is shown in Fig. 1(a) along with the corresponding horizontally averaged profiles from the convective Runs A0, A6, and A9. The stratification is consistent with a polytrope corresponding to \(n_{\rm rad}\) up to a height of roughly \(z/d=0.75\). Near the nominal surface of the convection zone, \(z/d=1\), the temperature gradient steepens sharply because the cooling term relaxes the temperature toward a constant (\(z\)-independent) value near the surface. 
Therefore, neither \(K\) nor \(\nabla_{z}T\) are constants in this transition region between the \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Run & \(\rm{Ra}_{F}[10^{13}]\) & \(\rm{Ra}_{F}^{*}\) & \(\rm{Ta}\) & \(\mathscr{F}_{n}[10^{-6}]\) & Co & Co\({}_{\omega}\) & Co\({}_{\omega}\) & \(\rm{Co}_{F}\) & \(\rm{Re}\) & \(\rm{Ra}_{n}[10^{6}]\) & \(\rm{Ra}_{n}^{BZ}\) & grid \\ \hline A0 & 0.5 & \(-\) & 0 & 4.6 & 0.00 & 0.00 & 0.00 & 0.00 & 38.7 & 4.1 & \(-\) & \(288^{3}\) \\ A1 & 0.5 & \(2.5\cdot 10^{2}\) & \(1.0\cdot 10^{4}\) & 4.6 & 0.07 & 0.02 & 0.08 & 0.28 & 38.9 & 4.3 & \(1.4\cdot 10^{4}\) & \(288^{3}\) \\ A2 & 0.5 & 31 & \(4.0\cdot 10^{4}\) & 4.6 & 0.13 & 0.03 & 0.16 & 0.55 & 39.6 & 4.3 & \(3.3\cdot 10^{3}\) & \(288^{3}\) \\ A3 & 0.5 & 3.9 & \(1.6\cdot 10^{5}\) & 4.6 & 0.26 & 0.06 & 0.31 & 1.04 & 39.4 & 4.6 & \(8.0\cdot 10^{2}\) & \(288^{3}\) \\ A4 & 0.5 & 0.25 & \(1.0\cdot 10^{6}\) & 4.6 & 0.63 & 0.15 & 0.71 & 2.38 & 40.0 & 5.1 & \(1.2\cdot 10^{2}\) & \(288^{3}\) \\ A5 & 0.5 & 0.11 & \(1.7\cdot 10^{6}\) & 4.6 & 0.82 & 0.19 & 0.87 & 3.04 & 40.0 & 5.5 & \(7.1\cdot 10^{4}\) & \(288^{3}\) \\ A6 & 0.5 & \(3.1\cdot 10^{-2}\) & \(4.0\cdot 10^{6}\) & 4.6 & 1.27 & 0.29 & 1.26 & 4.56 & 39.8 & 6.1 & 30 & \(288^{3}\) \\ A7 & 0.6 & \(3.9\cdot 10^{-3}\) & \(1.6\cdot 10^{7}\) & 4.6 & 2.62 & 0.57 & 2.17 & 9.17 & 38.7 & 7.5 & 7.5 & \(288^{3}\) \\ A8 & 0.6 & \(2.5\cdot 10^{-4}\) & \(1.0\cdot 10^{8}\) & 4.6 & 7.21 & 1.38 & 3.88 & 25.1 & 35.1 & 10.9 & 1.2 & \(288^{3}\) \\ A9 & 0.7 & \(3.0\cdot 10^{-5}\) & \(4.0\cdot 10^{8}\) & 4.6 & 16.5 & 2.67 & 6.31 & 52.9 & 30.7 & 16.0 & 0.32 & \(288^{3}\) \\ \hline B0 & 1.9 & \(-\) & 0 & 1.8 & 0.00 & 0.00 & 0.00 & 0.00 & 38.3 & 4.4 & \(-\) & \(288^{3}\) \\ B1 & 1.9 & \(2.5\cdot 10^{2}\) & \(1.0\cdot 10^{4}\) & 1.8 & 0.07 & 0.02 & 0.08 & 0.27 & 38.5 & 4.4 & \(2.5\cdot 10^{4}\) & \(288^{3}\) \\ B2 & 1.9 & \(32\) & \(4.0\cdot 10^{4}\) & 1.8 & 0.13 & 0.03 & 0.17 & 0.54 & 38.9 & 4.5 & \(6.2\cdot 10^{3}\) & \(288^{3}\) \\ B3 & 1.8 & 4.0 & \(1.6\cdot 10^{5}\) & 1.8 & 0.26 & 0.06 & 0.32 & 1.03 & 39.0 & 4.8 & \(1.5\cdot 10^{3}\) & \(288^{3}\) \\ B4 & 1.8 & 0.26 & \(1.0\cdot 10^{6}\) & 1.8 & 0.65 & 0.15 & 0.70 & 2.34 & 39.3 & 5.3 & \(2.2\cdot 10^{2}\) & \(288^{3}\) \\ B5 & 1.8 & \(9.3\cdot 10^{-2}\) & \(2.0\cdot 10^{6}\) & 1.8 & 0.90 & 0.20 & 0.97 & 3.21 & 39.4 & 5.7 & \(1.1\cdot 10^{2}\) & \(288^{3}\) \\ B6 & 1.8 & \(3.2\cdot 10^{-2}\) & \(4.0\cdot 10^{6}\) & 1.8 & 1.30 & 0.29 & 1.27 & 4.49 & 39.1 & 6.3 & 55 & \(288^{3}\) \\ B7 & 1.9 & \(4.0\cdot 10^{-3}\) & \(1.6\cdot 10^{7}\) & 1.8 & 2.69 & 0.56 & 2.09 & 8.98 & 37.7 & 7.8 & 14 & \(288^{3}\) \\ B8 & 2.0 & \(2.6\cdot 10^{-4}\) & \(1.0\cdot 10^{8}\) & 1.8 & 7.32 & 1.36 & 3.90 & 24.4 & 34.6 & 11.4 & 2.3 & \(288^{3}\) \\ B9 & 2.2 & \(3.1\cdot 10^{-5}\) & \(4.0\cdot 10^{8}\) & 1.8 & 1.66 & 2.67 & 6.32 & 51.4 & 30.5 & 16.2 & 0.58 & \(288^{3}\) \\ \hline C0 & 5.0 & \(-\) & 0 & 0.9 & 0.00 & 0.00 & 0.00 & 0.00 & 38.1 & 4.7 & \(-\) & \(288^{3}\) \\ C1 & 5.0 & \(2.6\cdot 10^{2}\) & \(1.0\cdot 10^{4}\) & 0.9 & 0.07 & 0.01 & 0.08 & 0.27 & 38.2 & 4.8 & \(4.0\cdot 10^{4}\) & \(288^{3}\) \\ C2 & 4.9 & 32 & \(4.0\cdot 10^{4}\) & 0.9 & 0.13 & 0.03 & 0.16 & 0.53 & 38.4 & 4.9 & \(9.8\cdot 10^{3}\) & \(288^{3}\) \\ C3 & 4.8 & 4.0 & \(1.6\cdot 10^{5}\) & 0.9 & 0.26 & 0.06 & 0.31 & 1.01 & 38.7 & 5.0 & \(2.3\cdot 10^{3}\) & \(288^{3}\) \\ C4 & 4.7 & \(0.26\cdot 10^{6}\) & 0.9 & 0.65 & 0.14 & 0.73 & 2.31 & 39.2 & 5.5 & \(3. radiative and the cooling layers. 
In the initial state the stratification is isothermal above \(z/d=1\), but because the cooling profile \(f_{\rm cool}\) has a finite width, cooling also occurs below \(z/d=1\) and the isothermal layer is wider in the final thermally saturated state. This also depends on the value of \(\tau_{\rm cool}\). In the convective runs the stratification is nearly polytropic with index \(n_{\rm rad}\) near the base of the radiative layer and nearly isentropic with \(n_{\rm ad}\) in the bulk of the convection zone. The superadiabatic temperature gradient is defined as \[\Delta\nabla\equiv\nabla-\nabla_{\rm ad}=-\frac{H_{\rm p}}{c_{\rm P}}\frac{ \mathrm{d}s}{\mathrm{d}z}, \tag{42}\] where \(\nabla=\mathrm{d}\ln\overline{T}/\mathrm{d}\ln\overline{p}\) is the logarithmic temperature gradient and \(\nabla_{\rm ad}=1-1/\gamma\) is the corresponding adiabatic gradient. Comparison of the hydrostatic profile and the non-rotating convective model A0 shows that the convectively unstable layer in the former is much thinner than in the latter. This is a direct consequence of the strong temperature and density dependence of the Kramers opacity law. A similar conclusion applies also to the Sun, where the hypothetical non-convecting hydrostatic equilibrium solution has a very thin superadiabatic layer (Brandenburg, 2016). The steepness of the temperature gradient near the surface is characterized by the maximum value of \((\Delta\nabla)_{\rm hyd}\), which is 23.4. By comparison, in the convective Run A0 \(\Delta\nabla=0.12\). The Rayleigh number - measured at \(z/d=0.85\) - in the hydrostatic case is \(\mathrm{Ra}=5.4\cdot 10^{7}\), which is about an order of magnitude greater than \(\mathrm{Ra}_{\rm t}\) in Run A0. ### Qualitative flow characteristic as a function of rotation Figure 2 shows representative flow fields from runs with slow, intermediate, and rapid rotation, corresponding to Coriolis numbers \(0.13\), \(1.3\), and \(16.5\), respectively. The effects of rotation are hardly discernible in the slowly rotating case A2 with \(\mathrm{Co}=0.13\). In the run with intermediate rotation, Run A6 with \(\mathrm{Co}=1.3\), the convection cells are somewhat smaller than in the slowly rotating case and more vortical structures are visible near the surface. For the most rapidly rotating case, Run A9 with \(\mathrm{Co}=16.5\), the size of the convection cells is drastically reduced in comparison to the other two runs and clear alignment of the convection cells with the rotation vector is seen. ### Convective scale as function of rotation Power spectra \(E(k)\) of the velocity fields for the runs in Set A are shown in Fig. 3 from depths near the surface, at the middle and near the base of the CZ. As was already evident from visual inspection of the flow fields, the dominant scale of the flow decreases as the rotation rate increases. Quantitatively, the wavenumber \(k_{\rm max}\), where \(E(k)\) has its maximum, increases roughly in proportion to \(\mathrm{Co}^{1/2}\). The mean wavenumber \(k_{\rm mean}\), computed from Eq. (23), shows the same scaling for \(\mathrm{Co}\gtrsim 2\). This is explained by the broader distribution of power at different wavenumbers at slow rotation in comparison to the rapid rotation cases where fewer - or just a single - convective modes are dominant; see Figure 3. A decreasing length scale of the onset of linear instability under the influence of rotation was derived in (Chandrasekhar, 1961) with \(k_{\rm onset}\propto\mathrm{Ta}^{2/3}\). 
With \(\mathrm{Ta}\propto\mathrm{Co}^{2}\mathrm{Re}^{2}\), and with \(\mathrm{Re}\) being approximately constant, \(k_{\rm onset}\propto\mathrm{Co}^{1/3}\) is obtained. On the other hand, considering the CI part of the CIA balance in the Navier-Stokes equation gives (e.g. Aurnou et al., 2020, see Eq. (A.10) in Appendix A) gives \[\left(\frac{k_{\rm max}}{k_{1}}\right)^{2}\propto\frac{2\Omega}{k_{1}u}= \mathrm{Co}, \tag{43}\] or \(k_{\rm max}\propto\mathrm{Co}^{1/2}\). This is consistent with the current simulations; see the inset in the left panel of Fig. 3. The same result was obtained in Featherstone & Hindman (2016). Some nonlinear convection simulations show scalings that are similar but somewhat shallower than that obtained from the CIA balance; see, e.g., Viviani et al. (2018) and Currie et al. (2020). To estimate the convective length scale in the Sun based on the current results requires that the value of \(\mathrm{Co}_{\rm P}\) matches that of the deep solar CZ. The quantities on the rhs of Eq. (30) at the base of the solar convection zone are \(H_{\rm p}\approx 5\cdot 10^{7}\) m, \(\rho_{\star}\approx 200\) kg m\({}^{-3}\), \(F_{\rm bot}=L_{\odot}/(4\pi r_{\rm CZ}^{2})\approx 1.27\cdot 10^{8}\) kg s\({}^{-3}\), with \(r_{\rm CZ}=0.7R_{\odot}\approx 4.9\cdot 10^{8}\) and \(L_{\odot}=3.83\cdot 10^{26}\) W, and \(\Omega_{\odot}=2.7\cdot 10^{-6}\) s\({}^{-1}\). Inserting this data into Eq. (30) yields \(\mathrm{Co}_{\rm P}^{\oplus}\approx 3.1\). The values of \(\mathrm{Co}_{\rm P}\) are listed for all runs in the eight column of Table 1. The moderately rotating runs [A,B,C] correspond to the rotational constraint at the base of the solar CZ with \(\mathrm{Co}_{\rm P}=3.0\ldots 3.2\). The mean wavenumber \(k_{\rm mean}/k_{1}\approx 7\) in these simulations corresponds to a horizontal scale of \(\ell_{\rm conv}=L_{x}(k_{1}/k_{\rm mean})\approx 0.57d\). The pressure Figure 1: _(a)_ Temperature as a function of height from a 1D hydrostatic model (black solid line) as well as convective runs Run A0 (red dashed), A6 (blue dashed), and A9 (orange dashed). The black (red) dotted line shows a polytropic gradient corresponding to index \(n_{\rm rad}=3.25\) (\(n_{\rm ad}=1.5\)) for reference. _(b)_ Absolute value of the superadiabatic temperature gradient \(\Delta\nabla\) from the same runs as indicated by the legend. Red (blue) indicates convectively unstable (stable) stratification. The dotted vertical lines at \(z=0\) and \(z/d=1\) denote the base and top of the initially isentropic layer. scale height at \(z_{\rm DZ}\) is about \(0.49d\) such that \(\ell_{\rm conv}\approx 1.16H_{\rm p}\). Converting this to physical units using \(H_{\rm p}^{\odot}\approx 5\cdot 10^{6}\) m yields \(\ell_{\rm conv}\approx 58\) Mm. Following the procedure of Featherstone and Hindman (2016) and using \(k_{\rm max}\) instead of \(k_{\rm mean},k_{\rm max}/k_{1}=3\) and \(\ell_{\rm conv}\approx 130\) Mm. Both of these estimates are significantly larger that the supergranular scale of \(20\ldots 30\) Mm which was suggested to be the largest convectively driven scale in the Sun by Featherstone and Hindman (2016) and Vasil et al. (2021). On the other hand, a rapidly rotating run of Featherstone and Hindman (2016) with \(\ell_{\rm conv}\approx 30\) Mm, had Rossby number \({\rm Ro}_{\rm FH}=\tilde{U}/(2\Omega H)=0.011\), where \(\tilde{U}\) is a typical velocity amplitude and \(H\) is the shell thickness. 
This corresponds to a global Coriolis number \({\rm Co}=(2\pi{\rm Ro}_{\rm FH})^{-1}\approx 14.5\) in the conventions of the current study. In the current runs A9, B9, and C9, \({\rm Co}\approx 17\) and \(k_{\rm max}\approx k_{\rm mean}\approx 17\), corresponding to \(\ell_{\rm conv}\approx 26\) Mm. Therefore the current simulations give a very similar estimate for \(\ell_{\rm conv}\) at comparable values of \({\rm Co}\) despite all of the differences between the model set-ups. However, the values of \({\rm Co}_{\rm F}\) in runs A9, B9, and C9 are at least 16 times higher than in the Sun, suggesting that the simulations of Featherstone and Hindman (2016) were also rotating much faster than the Sun3. Therefore the current results suggest that rotationally constrained convection cannot explain the appearance of the supergranular scale as the largest convective scale in the Sun. Footnote 3: For example, their run with \({\rm Ro}=0.011\) has \({\rm Ra}_{\rm F}=6.81\cdot 10^{6}\), \({\rm Ek}=1.91\cdot 10^{-4}\), and \({\rm Pr}=1\), and corresponds to \({\rm Ra}_{\rm F}^{*}={\rm Ra}_{\rm F}{\rm Ek}^{3}/(8{\rm Pr})=5.9\cdot 10^{-6}\), or \({\rm Co}_{\rm F}=({\rm Ra}_{\rm F}^{*})^{-1/3}\approx 55\). Figure 4 shows the velocity power spectra for the most rapidly rotating runs with \({\rm Co}\approx 17\) for \({\rm Re}=30\ldots 142\) from Runs A9, A9m, and A9h. There is a marked increase in the power at large scales, which begins to affect \(k_{\rm mean}\) at the highest \({\rm Re}\) of Run A9h. This is due to the gradual onset of large-scale vorticity production, most likely due to two-dimensionalisation of turbulence, that has been observed in various earlier studies of rapidly rotating convection (e.g. Chan, 2003, 2007; Chan and Mayr, 2013; Käpylä et al., 2011; Guervilly et al., 2014). Despite the rapid rotation with Coriolis numbers exceeding 16, the large-scale vorticity generated in the current simulations is relatively modest apart from Run A9h. A difference to many of the previous studies is that here the relevant thermal Prandtl number (\({\rm Pr}_{\rm SGS}\)) is of the order of unity whereas in many of the earlier studies \({\rm Pr}\) was lower. Large-scale vorticity production was indeed observed in an additional run which is otherwise identical to A9 except that \({\rm Pr}_{\rm SGS}=0.2\) instead of \({\rm Pr}_{\rm SGS}=1\) (not shown). Figure 3: Normalized velocity power spectra near the surface (left panel), middle (middle), and base (right) of the CZ from runs in Set A with \({\rm Co}\) varying between \(0\) and \(16.5\). The inset in the left panel shows the mean wavenumber \(k_{\rm mean}\) and the wavenumber where \(E(k)\) has its maximum (\(k_{\rm max}\)) as functions of \({\rm Co}\) at \(z/d=0.85\). The error bars indicate the standard deviation. The gray dashed line shows a power law proportional to \({\rm Co}^{1/2}\). Figure 2: Flow fields from Runs A2 with \({\rm Co}=0.13\) (left), A6 with \({\rm Co}=1.3\) (middle), and A9 with \({\rm Co}=16.5\) (right) at the north pole (\(\theta=0^{\circ}\)). The colours indicate vertical velocity and the contours indicate streamlines. ### Measures of rotational influence #### 3.4.1 Velocity-based \(\mathrm{Co}\) The suitability of different measures of rotational influence on the flow has been discussed in various works in the literature (e.g. Käpylä, 2023). 
A common - and justified - critique regarding the Coriolis number as defined in Equation (27) is that it does not appreciate the fact that \(\ell_{\mathrm{conv}}=\ell_{\mathrm{conv}}(\Omega)\)(e.g. Vasil et al., 2021). The most straightforward way is to measure the mean wavenumber and use Eq. (29). Figure 5 shows \(\mathrm{Co}_{\ell}\) as a function of \(\mathrm{Co}\) for all run listed in Table 1. For slow rotation, \(\mathrm{Co}\lesssim 1\), \(\mathrm{Co}_{\ell}\propto\mathrm{Co}\) because \(u_{\mathrm{conv}}\) and \(\ell_{\mathrm{conv}}\) are almost unaffected by rotation. For sufficiently rapid rotation this is no longer true because \(k_{\mathrm{mean}}\approx k_{\mathrm{max}}\propto\mathrm{Co}^{1/2}\) as indicated by Eq. (A.10) and the simulation results; see the inset of Figure 3. This implies that for rapid rotation \(\mathrm{Co}_{\ell}\propto\mathrm{Co}^{1/2}\); see also Eq. (A.12). This is consistent with the numerical results found in the most rapidly rotating cases; see Fig. 5. The higher resolution runs in Set Am have somewhat lower \(\mathrm{Co}_{\ell}\) than the corresponding runs in Set A because the convective velocities in the higher resolution cases are higher. This shows that the simulations are not yet in an asymptotic regime where the results are independent of the diffusivities. This is further demonstrated by the high resolution runs of Set Ah: Run A5h follows the trend set by Run A5m. The Run A9h with a significantly higher \(\mathrm{Co}_{\ell}\) than in Runs A9 and A9m is explained by the increasing \(k_{\mathrm{mean}}\) due to the large-scale vorticity generation in that case. Aurnou et al. (2020) showed that the dynamical Rossby number is related to the diffusion-free modified flux Rayleigh number \(\mathrm{Ra}^{*}_{\mathrm{F}}\), with different powers for slow and rapid rotation. The corresponding derivations for the Coriolis number \(\mathrm{Co}_{\ell}\) are presented in Appendix A, and which show that \(\mathrm{Co}_{\ell}=(\mathrm{Ra}^{*}_{\mathrm{F}})^{-1/3}\) (slow rotation) and \(\mathrm{Co}_{\ell}=(\mathrm{Ra}^{*}_{\mathrm{F}})^{-1/5}\) (rapid rotation). Both scalings are also supported by the simulation results; see the inset of Figure 5. #### 3.4.2 Vorticity-based \(\mathrm{Co}\) Another commonly-used definition, Equation (28), is used to take the changing length scale automatically into account. However, \(\mathrm{Co}_{\omega}\) comes with a caveat which has apparently not been discussed hitherto in the astrophysical literature. This is demonstrated by considering a set of rotating systems at asymptotically high \(\mathrm{Re}\) where \(u_{\mathrm{rms}}\) is independent of \(\mathrm{Re}\). The forcing is assumed fixed by a constant energy flux through the system, and the asymptotic value of \(u_{\mathrm{rms}}\) when \(\mathrm{Re}\to\infty\) as \(u_{\infty}\). Furthermore, in this regime the mean kinetic energy dissipation rate \[\overline{\epsilon}_{\mathrm{K}}=2\nu\overline{\mathbf{S}}^{2}, \tag{44}\] where the overbar denotes a suitably defined average, tends to a constant value when normalized by mean length and corresponding rms-velocity (e.g. Sreenivasan, 1984; Vassilicos, 2015). This value is denoted as \(\epsilon_{\infty}\). In low-Mach number turbulence, which is a good approximation of stellar interiors, as well as the current simulations with \(\mathrm{Ma}\sim\mathcal{C}(10^{-2})\), \[\overline{\epsilon}_{\mathrm{K}}=\nu\overline{\omega}^{2}=\nu\omega_{\mathrm{ rms}}^{2}. 
\tag{45}\] From the definition of system scale Reynolds number it follows that \[\mathrm{Re}=\frac{u_{\infty}}{\nu k_{1}}\propto\nu^{-1}, \tag{46}\] and from Eq. (45) that \[\omega_{\mathrm{rms}}=\left(\frac{\overline{\epsilon}_{\mathrm{K}}}{\nu} \right)^{1/2}=\left(\frac{\epsilon_{\infty}}{\nu}\right)^{1/2}\propto\nu^{-1/ 2}\propto\mathrm{Re}^{1/2}. \tag{47}\] Using Eq. (28) it is found that \[\mathrm{Co}_{\omega}\propto\mathrm{Re}^{-1/2},\ \ \mathrm{or}\ \mathrm{Co} \propto\mathrm{Re}^{1/2}\mathrm{Co}_{\omega}. \tag{48}\] This means that \(\mathrm{Co}_{\omega}\to 0\) as \(\mathrm{Re}\to\infty\) at constant \(\mathrm{Co}\), while the dynamics at large (integral) scales are unaffected. Therefore \(\mathrm{Co}_{\omega}\) underestimates the rotational influence at the mean scale \(k_{\mathrm{mean}}\) which dominates the dynamics, as opposed to Eq. (27) overestimating it. Equation (47) can also be written as \[\omega_{\mathrm{rms}}\equiv k_{\omega}u_{\mathrm{rms}}\propto\mathrm{Re}^{1 /2}. \tag{49}\] For sufficiently large \(\mathrm{Re}\), the theoretical prediction is that \(u_{\mathrm{rms}}\to u_{\infty}=\mathrm{const.}\) and \(k_{\omega}\propto\mathrm{Re}^{1/2}\). This has been confirmed from numerical simulations of isotropically forced homogeneous turbulence (e.g. Brandenburg and Petrosyan, 2012; Figure 4: Normalized velocity power spectra near the surface of simulations with \(\mathrm{Co}\approx 17\) and \(\mathrm{Re}=30\dots 142\)(Runs A9, A9m, and A9h). The dotted line shows a Kolmogorov \(k^{-5/3}\) scaling for reference. Candelaresi & Brandenburg (2013). Here the dependence of \(k_{\omega}\) on \(\mathrm{Re}\) is shown in the inset of Figure 6 for runs with \(\mathrm{Co}\approx 1.3\) and \(\mathrm{Re}\) ranging between \(40\) and \(174\). Here the results for \(k_{\omega}\) fall somewhat below theoretical \(\mathrm{Re}^{1/2}\) expectation. This is likely because the asymptotic regime requires still higher Reynolds numbers. On the other hand, the mean wavenumber \(k_{\mathrm{mean}}\) is essentially constant around \(k_{\mathrm{mean}}/k_{1}=7\) in this range of \(\mathrm{Re}\) because the dominating contribution to the velocity spectrum come from large scales that are almost unaffected by the increase in \(\mathrm{Re}\). ### Convective velocity as a function of total flux and rotation The scalings of convective velocity as a function of rotation are derived in Appendix A following the same arguments as in Aurnou et al. (2020). For slow rotation the convective velocity depends only on the energy flux: \[u_{\mathrm{rms}}\sim\left(\frac{F_{\mathrm{tot}}}{\rho}\right)^{1/3}=u_{\star}, \tag{50}\] where \(u_{\star}\) is defined via Eq. (31). This scaling is altered in the rapidly rotating regime, where \[u_{\mathrm{rms}}\propto\left(\frac{F}{\rho}\right)^{1/3}\mathrm{Co}^{-1/6}. \tag{51}\] This results agrees with Eq. (50d) Aurnou et al. (2020) and Table 2 of Vasil et al. (2021). Therefore the velocity amplitude in the rapidly rotating regime is expected to depend not only on the available flux but also on rotation. Fig. 7 shows the corresponding numerical results for the Sets A, B, C, and Am. For slow rotation, \(\mathrm{Co}\lesssim 0.3\), \(u_{\mathrm{rms}}\) is roughly constant around \(u_{\mathrm{rms}}\approx 1.55u_{\star}\) for Sets A, B, and C, and \(u_{\mathrm{rms}}\approx 1.65u_{\star}\) for Set Am. In the rapid rotation regime \(u_{\mathrm{rms}}\) follows a trend which is similar to that indicated in Eq. (51), but the agreement is not perfect. 
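For a concrete reading of Equations (50) and (51), the sketch below evaluates the flux-based velocity estimate and its rotational quenching for a few Coriolis numbers. The prefactor 1.55 is the slow-rotation plateau quoted above, while the flux, density, and Coriolis numbers are illustrative assumptions.

```python
# Sketch: flux-based velocity estimate (Eq. 50) and its CIA quenching (Eq. 51).
# The prefactor 1.55 is the slow-rotation plateau quoted in the text; the
# flux, density, and Coriolis numbers below are illustrative assumptions.

F_tot, rho = 4.6e-6, 1.0                 # code units (assumed)
u_star = (F_tot / rho) ** (1.0 / 3.0)

for Co in [0.1, 1.0, 4.6, 16.5]:
    u_slow = 1.55 * u_star                          # flux-dominated estimate
    u_rapid = 1.55 * u_star * Co ** (-1.0 / 6.0)    # rotationally quenched estimate
    print(f"Co = {Co:5.1f}: u/u_star = {u_slow / u_star:.2f} (slow), "
          f"{u_rapid / u_star:.2f} (rapid)")
# For Co << 1 the two estimates coincide; for rapid rotation the CIA estimate
# is lower by a factor Co^{1/6}, which is the trend indicated in Fig. 7.
```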
The simulations in this regime may suffer from the fact the supercriticality of convection decreases with \(\mathrm{Co}\). However, the medium resolution runs, visualized by the grey symbols in Fig. 7, do not show a significantly better agreement with theory. Nevertheless, the evidence for CIA balance being reached in the current simulations with rapid rotation is fairly convincing. ### Flow statistics Compressible non-rotating convection is characterized by broad upflows and narrow downflows (Stein & Nordlund 1989; Cattaneo et al. 1991); see also Figure 2. This can be described by the filling factor \(f\) of downflows as \[\overline{u}_{z}(z)=f(z)\overline{u}_{z}^{\perp}+[1-f(z)]\overline{u}_{z}^{ \perp}(z), \tag{52}\] where \(\overline{u}_{z}\) is the mean vertical velocity, whereas \(\overline{u}_{z}^{\perp}\) and \(\overline{u}_{z}^{\perp}\) are the corresponding mean up- and downflow velocities. It was shown in Kapyla (2021) that \(f\) is sensitive to the effective Prandtl number of the fluid such that a lower \(\mathrm{Pr}\) leads to a lower filling factor. Here a similar study is done as a function of rotation; see Fig. 8. The main result is that \(f\) approaches \(1/2\) in the rapid rotation regime. This is because in rapidly rotating convection the broad upwellings of non-rotating convection are broken up and the flow consist mostly of smaller scale helical columns where the up- and downflows are almost invariant. This is due to the Taylor-Proudmann constraint such that derivatives along the rotation axis vanish. Hence the tendency for larger structures to appear at greater depths is inhibited and the average size of convection cells as a function of depth is almost constant; see rightmost panel of Figure 2. Figure 8: Filling factor of downflows as a function of height \(f(z)\) for three runs with no (black), moderate (blue), and rapid (red) rotation. Figure 6: Normalized velocity power spectra near the surface of simulations with \(\mathrm{Co}\approx 1.3\) and \(\mathrm{Re}=40\dots 174\)(Runs A6, A6b, and A6c). The dotted line shows Kolmogorov \(k^{-5/3}\) scaling for reference. The inset shows \(k_{\mathrm{mean}}\) (black symbols) and \(k_{\omega}\) (red) as functions of \(\mathrm{Re}\). The dotted lines are proportional to powers \(0\), and \(1/2\) of \(\mathrm{Re}\). Figure 7: Root-mean-square velocity in the convection zone normalized by \(u_{\star}\). The dotted line is proportional to \(\mathrm{Co}^{1/6}\) as indicated by the theoretical CIA scaling; see Eq. (51). This is also apparent from the probability density functions (PDFs) of the velocity components \(u_{i}\), defined via \[\int\mathcal{P}(u_{i},z)\mathrm{d}u_{i}=1. \tag{53}\] Figure 9 shows representative examples of PDFs for the extreme cases (Run A0 with \(\mathrm{Co}=0\) and Run A9 with \(\mathrm{Co}\approx 16.5\)) and at an intermediate rotation rate (Run A6, \(\mathrm{Co}=1.3\)). In non-rotating convection the PDFs of the horizontal components of the velocity are nearly Gaussian near the surface whereas for \(u_{z}\) the distributions are highly skewed due to the up-/downflow asymmetry. In deeper parts also the horizontal velocities deviate from a Gaussian distribution in agreement with earlier works (e.g. Brandenburg et al., 1996; Hotta et al., 2015; Kapyla, 2021) As the rotation increases the asymmetry of the vertical velocity decreases such that in the most rapidly rotating cases considered here with \(\mathrm{Co}\approx 17\), \(u_{z}\) also approaches a Gaussian distribution. 
Only near the surface (\(z/d=0.85\)) a weak asymmetry remains. The horizontal components of velocity continue to have Gaussian distribution as rotation is increased, although there is not enough data to say anything concrete concerning the tails of the distributions at high velocity amplitudes. To further quantify the statistics of the flow, skewness \(\mathcal{S}\) and kurtosis \(\mathcal{K}\) are computed from: \[\mathcal{S}=\frac{\mathcal{M}^{3}}{\sigma_{u}^{3}},\;\mathcal{K}=\frac{ \mathcal{M}^{4}}{\sigma_{u}^{4}}, \tag{54}\] Figure 9: Probability density functions \(\mathcal{P}(u_{i})\) for \(u_{x}\) (left), \(u_{y}\) (middle), and \(u_{z}\) (right) for depths \(z/d=0.85\) (black), \(z/d=0.49\) (blue), and \(z/d=0.13\) (red) for runs with \(\mathrm{Co}=0\) (Run A0, top row), \(\mathrm{Co}=1.3\) (Run A6, middle), and \(\mathrm{Co}=16.5\) (Run A9, bottom). The tildes refer to normalization by the respective rms-values. where \(\sigma_{u}=(\mathcal{M}^{2})^{1/2}\), with \[\mathcal{M}^{n}(u_{i},z)=\int[u_{i}(\mathbf{x})-\overline{u}_{i}(z)]^{n}\mathcal{P}( u_{i},z)\mathrm{d}u_{i}. \tag{55}\] Figure 10 shows \(\mathcal{S}\) and \(\mathcal{K}\) for all \(u_{i}\) for the same runs as in Figure 9. The skewness in consistent with zero for the horizontal velocities which is expected as there is not anisotropy in the horizontal plane. The negative values of \(\mathcal{S}\) for \(u_{z}\) are a signature of the asymmetry between up- and downflows. As rotation is increased, \(\mathcal{S}\) approaches zero also for \(u_{z}\). Kurtosis \(\mathcal{K}\) is a measure of non-Gaussianity or intermittency. In the non-rotating case \(\mathcal{K}\) increases from roughly three - indicating Gaussian statistics - to roughly five for horizontal flows as a function of depth within the CZ. For \(u_{z}\) the increase of \(\mathcal{K}\) is much more dramatic below \(z/d\lesssim 0.3\). This is because downflows merge at deeper depths such that only a few of them survive deep in the CZ and especially in the overshoot region below roughly \(z=0\), where \(\mathcal{K}\) reaches a peak value of roughly 65 for Run A0. A similar, albeit lower, maximum appears also for the horizontal flows. At intermediate rotation (Run A6; \(\mathrm{Co}=1.3\)), \(u_{z}\) still exhibits strong intermittency below \(z\approx 0.1\) with \(\text{max}(\mathcal{K})\approx 54\) whereas \(\mathcal{K}\) for the horizontal flows is significantly reduced in comparison to the non-rotating case. This indicates that especially the vertical flows in this regime are not qualitatively different from those in the non-rotating regime, such that the downflows in the overshoot region are rather abruptly decelerated and diverted horizontally. For the most rapidly rotating case (Run A9; \(\mathrm{Co}=16.5\)), \(\mathcal{K}\approx 3\ldots 4\) throughout the simulation domain for both vertical and horizontal flows. This is explained by the almost complete wiping out of the up-/downflow asymmetry also in the deep parts of the CZ and in the overshoot region. The absence of a peak in the kurtosis in the overshoot region in the most rapidly rotating cases is likely due to the deeply penetrating vertical flows in those cases due to the unrealistically small Richardson number. This is discussed in more detail in Section 3.7. The average vertical rms-velocities from the same representative runs as in Figure 9 are shown in Figure 11. The average rms-velocity of the downflows (upflows) is always larger (smaller) than the average total vertical rms-velocity. 
However, the difference between the up- and downflows and the total rms-velocity diminish monotonically as a function of rotation such that for the most rapidly rotating case the three are almost the same. This is another manifestation of the symmetrization of up- and downflows. Another consequence of the symmetrization of the vertical flows is that the forces on the up- and downflows also approach each other; see Fig. 12, where \(\overline{f}_{z}=\overline{\rho}Du_{z}/Dt\). In accordance with earlier studies (Kapyla et al., 2017; Kapyla, 2019), in non-rotating convection the downflows are accelerated near the surface and decelerated roughly when the stratification turns Schwarzschild stable, whereas the upflows are accelerated everywhere except near the surface. This is interpreted such that the upflows are not driven by buoyancy but by pressure forces due to the deeply penetrating downflow plumes. This qualitative picture remains unchanged for slow rotation, but starts to change when \(\mathrm{Co}\) is of the order of unity although the region near the surface where the downflows are accelerated is shallower; see Fig. 12(b). For rapid rotation the forces on the up- and downflows are nearly identical. However, the situation continues to qualitatively deviate from the mixing length picture also in the rapidly rotating cases in that the downflows are accelerated only near the surface and braked throughout their descent through the superadiabatic CZ; see Fig. 12(c). Figure 11: Horizontally averaged vertical rms-velocity for the same runs as in Figure 9. The overall vertical velocity (\(\widetilde{u}_{z}^{\text{rms}}\)) is shown in black, and the corresponding quantities for up- (\(\widetilde{u}_{z}^{\text{rms}}\)) and downflows (\(\widetilde{u}_{z}^{\text{rms}}\)) are shown in red and blue, respectively. The tildes refers to normalization by \(\sqrt{gd}\). Figure 10: Skewness (\(\mathcal{S}\), dashed lines) and kurtosis (\(\mathcal{K}\), solid) from the same runs as in Figure 9. Black, blue, and red colour indicates data corresponding to \(u_{x}\), \(u_{y}\), and \(u_{z}\), respectively. Note the difference in scale between each of the panels. The insets show a zoom in of the region \(z/d\geq 0\). ### Overshooting and Deardorff layers The depths of the overshooting and Deardorff layers are studied as functions of rotation using the same definitions of overshooting and Deardorff layers as in previous studies (Kipyla, 2019, 2021). The bottom of the CZ is situated at the depth \(z_{\rm CZ}\) where \(\overline{F}_{\rm conv}\) changes from negative to positive with increasing \(z\). The top of the Deardorff zone (DZ) - or the bottom of the buoyancy zone (BZ) - \(z_{\rm BZ}\), is where the superadiabatic temperature gradient changes from negative to positive with increasing \(z\). Then the depth of the DZ is \[d_{\rm DZ}=\frac{1}{\Delta t}\int_{t_{0}}^{t_{1}}[z_{\rm BZ}(t)-z_{\rm CZ}(t) ]dt, \tag{56}\] where \(\Delta t=t_{1}-t_{0}\) is the length of the statistically steady part of the time series. A reference value of the kinetic energy flux (\(\overline{F}_{\rm kin}^{\rm ref}\)) is measured at \(z_{\rm CZ}\). 
The base of the overshoot layer is taken to be the location (\(z_{\rm OS}^{\rm kin}\)) where \(|\overline{F}_{\rm kin}|\) falls below \(0.01\overline{F}_{\rm kin}^{\rm ref}\), and \[d_{\rm os}^{\rm kin}=\frac{1}{\Delta t}\int_{t_{0}}^{t_{1}}[z_{\rm CZ}(t)-z_{\rm OS}^{\rm kin}(t)]dt. \tag{57}\] This criterion breaks down in the current models when rotation begins to dominate the dynamics and where \(\overline{F}_{\rm kin}\to 0\). Therefore the convected flux \(\overline{F}_{\rm conv}\) was also used to estimate the depth of overshooting. The criterion involving \(\overline{F}_{\rm conv}\) takes the overshoot layer to end at the location (\(z_{\rm OS}^{\rm conv}\)) where \(|\overline{F}_{\rm conv}|\) falls below \(0.02F_{\rm tot}\). The corresponding overshooting depth (\(d_{\rm os}^{\rm conv}\)) is computed analogously to Eq. (57). The layer below the OZ is the radiative zone (RZ). Figure 13 shows the energy fluxes from representative runs at different Coriolis numbers from Set A. For slow and moderate rotation up to \({\rm Co}\approx 1\) the situation is qualitatively similar: the positive (upward) enthalpy flux exceeds \(F_{\rm tot}\) in the bulk of the CZ, and it is compensated by a negative (downward) kinetic energy flux \(\overline{F}_{\rm kin}\). As rotation increases the maxima of \(\overline{F}_{\rm enth}\) and \(|\overline{F}_{\rm kin}|\) decrease monotonically. Similarly, the extents of the overshoot and Deardorff layers diminish with rotation. For the most rapidly rotating case, Run A9 with \({\rm Co}=16.5\), the kinetic energy flux is almost zero, and \(\overline{F}_{\rm conv}\approx\overline{F}_{\rm enth}\). This is yet another manifestation of the decreasing asymmetry between the up- and downflows. Moreover, the Deardorff layer vanishes in the rapidly rotating cases. The positions of the boundaries of the different layers and their depths are summarized for all runs in Table 2, and Fig. 14 shows a summary of the overshooting and Deardorff layer depths as a function of rotation from Sets A, B, and C. The main difference between the sets of simulations is the applied flux \(\mathscr{F}_{\rm n}\). The overshooting depth measured from the kinetic energy flux decreases with increasing rotation as in earlier studies (e.g. Ziegler & Rüdiger, 2003; Käpylä et al., 2004). However, the lowermost panel of Fig. 13 shows that the upper part of the radiative layer is mixed far beyond the regions where \(\overline{F}_{\rm kin}\) is non-negligible in the rapidly rotating cases. This is confirmed when the convected flux is used to estimate the overshooting depth. Furthermore, \(d_{\rm os}^{\rm conv}\) increases with rotation for \({\rm Co}\gtrsim 1\). This is explained by the fact that the Mach number, and therefore also the rotation rate \(\Omega_{0}\), in the current simulations are much larger than in real stars. This means that the convective, rotational, and Brunt–Väisälä frequencies are closer to each other in the simulations than in, for example, the overshoot region of the Sun. For example, in the most rapidly rotating runs the Richardson number based on the rotation rate \({\rm Ri}_{\Omega}\) is smaller than unity; see the 11th column of Table 1. This, in addition to the smooth transition from the convective to the radiative region, can lead to gravity waves breaking in the radiative layer, thus contributing to the burrowing of the flows into the RZ (e.g. Lecoanet & Quataert, 2013). 
As a comparison, \({\rm Ri}_{\Omega}\) in the upper part of the solar radiative zone is expected of the order of \(10^{4}\). Another possibility is that shear due to the rotationally constrained convective columns lowers the corresponding shear Richardson number close to the limit where turbulence can occur also in thermally stable stratification. Lowering the luminosity in Sets B and C shows that both measures of \(d_{\rm os}\) decrease with \(\mathscr{F}_{\rm n}\) in qualitative accordance with earlier results (e.g. Kapyla, 2019). Even though \({\rm Ri}_{\Omega}\) is modestly increased in these runs (see the 11th column in Table 1), the most rapidly rotating cases even in the runs with the lowest luminosities continue to show deep mixing which is most likely due to the still unrealistically low \({\rm Ri}_{\Omega}\). It is numerically very expensive to increase the Richardson number in fully compressible simulations much further, at least without accelerated thermal evolution Figure 12: Horizontally averaged total force (black), and separately for up- (red) and downflows (blue). The dotted red/blue line shows the superadiabatic temperature gradient. Data is shown for _(a)_ a non-rotating run A0, _(b)_ an intermediate rotation rate (\({\rm Co}=1.3\), Run A6), and _(c)_ for rapid rotation (\({\rm Co}=16.5\), Run A9). methods (e.g. Anders et al. 2018, 2020). Comparing the overshooting depths between Runs [A,B,C]5 with solar \(\mathrm{Co_{P}}\) and the non-rotating Runs [A,B,C]0 shows a reduction between about a third to a half; see the seventh and eight columns in Table 2. In Kapylia (2019) the overshooting depth extrapolated to the solar value of \(\mathscr{F}_{\mathrm{n}}\) was found to be roughly \(0.1H_{\mathrm{p}}\), and the current results including rotation reduce this to \(0.05\ldots 0.07H_{\mathrm{p}}\). However, the dependence of the overshooting depth on \(\mathscr{F}_{\mathrm{n}}\) is here steeper (\([d_{\mathrm{os}}^{\mathrm{kin}},d_{\mathrm{os}}^{\mathrm{conv}}]\propto \mathscr{F}_{\mathrm{n}}^{0.15}\)) than in the nonrotating cases where Kapylia (2019) found \(d_{\mathrm{os}}\propto\mathscr{F}_{\mathrm{n}}^{0.08}\). On the other hand, the thickness of the Deardorff layer \(d_{\mathrm{DZ}}\) decreases monotonously as a function of \(\mathrm{Co_{i}}\). In the most rapidly rotating cases the Deardorff layer vanishes altogether and even reverses such that at the base of the CZ the stratification is unstably stratified but the convective flux is inward; see the lowermost panel of Figure 13. This is not significantly changed in more supercritical Runs A9m and A9h. In the entropy rain picture (e.g. Brandenburg 2016) cool material from the surface is brought down deep into otherwise stably stratified layers. This is mediated by relatively few fast downflows with filling factor \(f(z)<1/2\), that also produce a strong net downward kinetic energy flux as seen in the top panel of Figure 13; see also Fig. 8, and Table 1 and Sect. 3.3 in Brandenburg (2016). If, on the other hand, the up- and downflows are symmetrized such that \(f(z)=1/2\) and their velocities are nearly the same, \(\overline{F}_{\mathrm{kin}}\) vanishes and non-local transport due to downflows is no longer significant. Therefore the kinetic energy flux is a proxy of the non-local transport due to downflows and its absence signifies the absence of a Deardorff layer. 
The depth of the Deardorff layer is \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Run & \(z_{\mathrm{BZ}}/d\) & \(z_{\mathrm{DZ}}/d\) & \(z_{\mathrm{OS}}^{\mathrm{kin}}/d\) & \(z_{\mathrm{OS}}^{\mathrm{conv}}/d\) & \(d_{\mathrm{DZ}}\) & \(\bar{d}_{\mathrm{os}}^{\mathrm{kin}}\) & \(\bar{d}_{\mathrm{os}}^{\mathrm{conv}}\) \\ \hline A0 & \(0.355\) & \(0.134\) & \(-0.096\) & \(-0.204\) & \(0.221\) & \(0.230\) & \(0.338\) \\ A1 & \(0.338\) & \(0.128\) & \(-0.103\) & \(-0.205\) & \(0.210\) & \(0.231\) & \(0.333\) \\ A2 & \(0.333\) & \(0.124\) & \(-0.088\) & \(-0.185\) & \(0.209\) & \(0.212\) & \(0.309\) \\ A3 & \(0.318\) & \(0.130\) & \(-0.065\) & \(-0.134\) & \(0.189\) & \(0.195\) & \(0.264\) \\ A4 & \(0.290\) & \(0.131\) & \(-0.028\) & \(-0.054\) & \(0.159\) & \(0.159\) & \(0.185\) \\ A5 & \(0.278\) & \(0.132\) & \(-0.021\) & \(-0.039\) & \(0.146\) & \(0.153\) & \(0.171\) \\ A6 & \(0.255\) & \(0.131\) & \(-0.007\) & \(-0.021\) & \(0.123\) & \(0.138\) & \(0.152\) \\ A7 & \(0.211\) & \(0.134\) & \(0.026\) & \(-0.025\) & \(0.077\) & \(0.108\) & \(0.159\) \\ A8 & \(0.154\) & \(0.154\) & \(0.065\) & \(-0.103\) & \(0.001\) & \(0.088\) & \(0.257\) \\ A9 & \(0.161\) & \(0.183\) & \(0.120\) & \(-0.150\) & \(0.000\) & \(0.064\) & \(0.333\) \\ \hline B0 & \(0.338\) & \(0.124\) & \(-0.094\) & \(-0.185\) & \(0.214\) & \(0.218\) & \(0.309\) \\ B1 & \(0.329\) & \(0.121\) & \(-0.090\) & \(-0.179\) & \(0.208\) & \(0.211\) & \(0.229\) \\ B2 & \(0.326\) & \(0.117\) & \(-0.082\) & \(-0.166\) & \(0.209\) & \(0.200\) & \(0.284\) \\ B3 & \(0.321\) & \(0.124\) & \(-0.054\) & \(-0.122\) & \(0.197\) & \(0.179\) & \(0.246\) \\ B4 & \(0.280\) & \(0.125\) & \(-0.012\) & \(-0.039\) & \(0.154\) & \(0.138\) & \(0.164\) \\ B5 & \(0.264\) & \(0.124\) & \(-0.005\) & \(-0.024\) & \(0.140\) & \(0.128\) & \(0.147\) \\ B6 & \(0.252\) & \(0.125\) & \(0.007\) & \(-0.009\) & \(0.127\) & \(0.119\) & \(0.134\) \\ B7 & \(0.204\) & \(0.128\) & \(0.036\) & \(-0.008\) & \(0.076\) & \(0.092\) & \(0.136\) \\ B8 & \(0.138\) & \(0.136\) & \(0.090\) & \(-0.075\) & \(0.002\) & \(0.046\) & \(0.212\) \\ B9 & \(0.133\) & \(0.156\) & \(0.156\) & \(-0.126\) & \(0.000\) & \(0.000\) & \(0.281\) \\ \hline C0 & \(0.323\) & \(0.116\) & \(-0.086\) & \(-0.166\) & \(0.206\) & \(0.203\) & \(0.283\) \\ C1 & \(0.336\) & \(0.119\) & \(-0.084\) & \(-0.166\) & \(0.216\) & \(0.204\) & \(0.285\) \\ C2 & \(0.316\) & \(0.115\) & \(-0.074\) & \(-0.150\) & \(0.201\) & \(0.188\) & \(0.265\) \\ C3 & \(0.304\) & \(0.116\) & \(-0.047\) & \(-0.105\) & \(0.189\) & \(0.163\) & \(0.221\) \\ C4 & \(0.278\) & \(0.118\) & \(-0.007\) & \(-0.031\) & \(0.160\) & \(0.124\) & \(0.149\) \\ C5 & \(0.259\) & \(0.119\) & \(0.001\) & \(-0.020\) & \(0.140\) & \(0.118\) & \(0.139\) \\ C6 & \(0.240\) & \(0.119\) & \(0.011\) & \(-0.005\) & \(0.120\) & \(0.108\) & \(0.124\) \\ C7 & \(0.196\) & \(0.121\) & \(0.043\) & \(-0.002\) & \(0.074\) & \(0.079\) & \(0.124\) \\ C8 & \(0.129\) & \(0.129\) & \(0.106\) & \(-0.056\) & \(0.000\) & \(0.023\) & \(0.185\) \\ C9 & \(0.18\) & \(0.140\) & \(0.140\) & \(-0.106\) & \(0.000\) & \(0.000\) & \(0.247\ independent of the energy flux \(\mathscr{F}_{\rm n}\). This further illustrates that the DZ is caused by surface effects which are kept independent of \(\mathscr{F}_{\rm n}\) in the current simulations. A reduction of \(d_{\rm DZ}\) of about a third between the non-rotating runs [A,B,C]0 and the runs with the solar value of \({\rm Co_{F}}\) (Runs [A,B,C]5) was found; see the sixth column of Table 2. 
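The layer bookkeeping described in this section can be summarized in a few lines of code. The sketch below applies the sign-change and threshold criteria to synthetic stand-ins for the horizontally averaged \(\overline{F}_{\rm conv}(z)\), \(\overline{F}_{\rm kin}(z)\), and \({\rm d}\overline{s}/{\rm d}z\) profiles; only the criteria follow the text, while the profiles themselves are assumptions.

```python
import numpy as np

# Sketch of the layer-boundary criteria described above. The profiles are
# smooth synthetic stand-ins (assumptions) for F_conv(z), F_kin(z), and ds/dz;
# only the sign-change and 1%/2% threshold logic follows the text.

z = np.linspace(-0.45, 1.05, 601)
F_tot = 1.0
F_conv = np.exp(-((z - 0.6) / 0.3)**2) - 0.12 * np.exp(-((z + 0.05) / 0.2)**2)
F_kin = -0.2 * np.exp(-((z - 0.3) / 0.2)**2)     # downward kinetic energy flux
dsdz = 0.2 * (0.25 - z)                          # entropy gradient, stable below z = 0.25

def lowest_upward_sign_change(zgrid, q):
    """Deepest height where q changes from <= 0 to > 0 with increasing z."""
    i = np.where((q[:-1] <= 0.0) & (q[1:] > 0.0))[0]
    return zgrid[i[0] + 1] if i.size else zgrid[0]

z_CZ = lowest_upward_sign_change(z, F_conv)      # bottom of the convection zone
z_BZ = lowest_upward_sign_change(z, -dsdz)       # bottom of the buoyancy zone

F_kin_ref = abs(np.interp(z_CZ, z, F_kin))       # reference kinetic energy flux at z_CZ
deep = z < z_CZ
i_kin = np.where(deep & (np.abs(F_kin) > 0.01 * F_kin_ref))[0]
i_conv = np.where(deep & (np.abs(F_conv) > 0.02 * F_tot))[0]
z_OS_kin = z[i_kin[0]] if i_kin.size else z_CZ
z_OS_conv = z[i_conv[0]] if i_conv.size else z_CZ

print(f"d_DZ      = {z_BZ - z_CZ:.3f}")
print(f"d_os^kin  = {z_CZ - z_OS_kin:.3f}, d_os^conv = {z_CZ - z_OS_conv:.3f}")
```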
## 4 Conclusions Simulations of compressible convection were used to study the convective scale and scalings of quantities such as the Coriolis number and convective velocity as functions of rotation. The results were compared to those expected from scalings obtained for incompressible convection with slow and fast rotation (Aurnou et al., 2020). The actual length scale is almost unaffected by rotation for \({\rm Co}\lesssim 1\) and decreases in proportion to \({\rm Co}^{-1/2}\) for rapid rotation. Correspondingly, the dynamical Coriolis number \({\rm Co_{\ell}}\) is proportional to \({\rm Co}\) for slow, and \(\propto{\rm Co}^{1/2}\) for rapid rotation. Furthermore, \({\rm Co_{\ell}}\) is proportional to \(({\rm Ra_{F}^{*}})^{-1/3}\) for slow and \(\propto({\rm Ra_{F}^{*}})^{-1/5}\) for rapid rotation, where \({\rm Ra_{F}^{*}}\) is the diffusion-free flux-based modified Rayleigh number. Finally, the convective velocity is compatible with proportionality to \((F_{\rm tot}/\rho)^{1/3}\) for slow and \(\propto(F_{\rm tot}/\rho)^{1/3}{\rm Co}^{-1/6}\) for rapid rotation. All of these scalings are consistent with those derived by Aurnou et al. (2020) and Vasil et al. (2021). Therefore the simulations seem to follow the CIA scaling at sufficiently rapid rotation. In an earlier work (Käpylä, 2023) several measures were used to characterise the rotational influence on convection. A commonly used definition where the changing length scale of convection is taken into account is \({\rm Co_{\omega}}=2\Omega/\omega_{\rm rms}\). It is shown that this quantity cannot be used to characterise the effects of rotation on the mean scale because \(\omega_{\rm rms}\) is expected to increase with the Reynolds number as \({\rm Re}^{1/2}\). Therefore the only reliable way to account for the changing convective length scale as a function of rotation is to compute the mean wavenumber. This was not correctly identified in Käpylä (2023), and it is now clear that \({\rm Co_{\omega}}\) decreases proportional to \({\rm Re}^{-1/2}\), and therefore increasingly underestimates the rotational influence, as \({\rm Re}\) increases. On the other hand, Käpylä (2023) introduced a stellar Coriolis number \({\rm Co_{\star}}\) which depends on the luminosity and rotation rate, which are observable, and on a reference density, which is available from stellar structure models, but not on any dynamical length or velocity scale. Here this quantity is renamed as \({\rm Co_{F}}\) and it is furthermore shown that with a suitable choice of length scale, \({\rm Co_{F}}=({\rm Ra_{F}^{*}})^{-1/3}\). Matching \({\rm Co_{F}}\) (or equivalently \({\rm Ra_{F}^{*}}\)) with the target star gives a more concrete meaning to the often-used phrase that it is possible to match the Coriolis number of, for example, the Sun with 3D simulations while most other dimensionless parameters are out of reach (cf. Käpylä et al., 2023). The current simulations suggest that convection even in the deep parts of the CZ in the Sun is not strongly rotationally constrained and that the CIA balance is therefore inapplicable there. The opposite has been argued by Featherstone and Hindman (2016) and Vasil et al. (2021), who invoked rotationally constrained deep convection to argue that the largest convectively driven scale in the Sun is the supergranular scale. The current results seem to refute this conjecture and suggest that the actual largest scales may be larger. Finally, the effects of rotation on convective overshooting and subadiabatic Deardorff zones were studied. 
The effects of rotation are relatively mild such that for the case with the solar value of \({\rm Co_{F}}\), the overshooting depth and the extent of the Deardorff layer are reduced by between 30 and 50 per cent in comparison to the non-rotating case. Therefore the current results suggest an overshooting depth of about five per cent of the pressure scale height at the base of the solar CZ. Taking the current results at face value, a similar depth is estimated for the Deardorff zone. However, the latter is still subject to the caveat that the current simulations do not capture the near-surface layer very accurately and that the driving of entropy rain can be significantly stronger in reality. Another aspect which needs to be revisited in the future is the effect of magnetic fields. ###### Acknowledgements. I thank Axel Brandenburg for his comments on an earlier version of the manuscript. The simulations were performed using the resources granted by the Gauss Centre for Supercomputing for the Large-Scale computing project "Cracking the Convective Conundrum" in the Leibniz Supercomputing Centre's SuperMUC-NG supercomputer in Garching, Germany. This work was supported in part by the Deutsche Forschungsgemeinschaft Heisenberg programme (grant No. KA 4825/4-1).
2305.08463
Convergence Analysis of Mean Shift
The mean shift (MS) algorithm seeks a mode of the kernel density estimate (KDE). This study presents a convergence guarantee of the mode estimate sequence generated by the MS algorithm and an evaluation of the convergence rate, under fairly mild conditions, with the help of the argument concerning the {\L}ojasiewicz inequality. Our findings extend existing ones covering analytic kernels and the Epanechnikov kernel. Those are significant in that they cover the biweight kernel, which is optimal among non-negative kernels in terms of the asymptotic statistical efficiency for the KDE-based mode estimation.
Ryoya Yamasaki, Toshiyuki Tanaka
2023-05-15T09:04:55Z
http://arxiv.org/abs/2305.08463v3
# Convergence Analysis of Mean Shift ###### Abstract The mean shift (MS) algorithm seeks a mode of the kernel density estimate (KDE). This study presents a convergence guarantee of the mode estimate sequence generated by the MS algorithm and an evaluation of the convergence rate, under fairly mild conditions, with the help of the argument concerning the Lojasiewicz inequality. Our findings, which extend existing ones covering analytic kernels and the Epanechnikov kernel, are significant in that they cover the biweight kernel that is optimal among non-negative kernels in terms of the asymptotic statistical efficiency for the KDE-based mode estimation. Mean shift, convergence, convergence rate, Lojasiewicz inequality, biweight kernel ## 1 Introduction The mean shift (MS) algorithm [1, 2, 3] has been widely used in various fields such as computer vision, image processing, pattern recognition, and statistics. One of its popular applications is data clustering [4, 5], where the MS algorithm is advantageous in that it does not need to specify the number of clusters in advance. Other advantages of the MS-based clustering compared with the \(k\)-means clustering are that it does not require proper initialization of cluster centers, as well as that it can cope with arbitrary cluster shapes. Other applications of the MS algorithm include image segmentation [3, 6], edge detection [7, 8], object tracking [9, 10], and mode estimation [11, 12], to mention a few. The MS algorithm is an iterative algorithm that seeks a mode (local maximizer) of the kernel density estimate (KDE). Applications of the MS algorithm, such as data clustering and mode estimation, require the convergence of the mode estimate sequence generated by the MS algorithm. It is therefore important to theoretically study convergence properties of the MS algorithm. However, as will be reviewed in Section 3, available theoretical convergence guarantees of the MS algorithm which are applicable to practically relevant situations are quite limited: As dynamical behaviors of the MS algorithm depend on the kernel to be used in constructing the KDE, convergence properties should also depend on the choice of the kernel. To the best of the authors' knowledge, the MS algorithm for multi-dimensional data has been shown to converge when the Epanechnikov kernel [13, 14] or an analytic kernel [15] is used. These results do not cover practically relevant cases where a piecewise polynomial kernel other than the Epanechnikov kernel is used. Furthermore, little is known about the convergence rate of the MS algorithm. In this paper we study convergence properties of the MS algorithm under some generic assumptions on the kernel. From a technical point of view, we follow a line similar to that of [15] that focused on the Lojasiewicz property [16, 17], but we make use of more advanced results about that property, to further extend the convergence analysis in [15]. This extension allows us to obtain novel results, which include a convergence guarantee of the mode estimate sequence (Theorems 1 and 2) and a worst-case bound of the convergence rate (Theorems 3 and 4) of the MS algorithm for a wider class of the kernels. Our contributions are of significance as the class of the kernels we focus on in this study contains the biweight kernel, which is known to be optimal among non-negative kernels in terms of the asymptotic statistical efficiency for the KDE-based estimation of a non-degenerate mode [12, 18]. This paper is organized as follows. 
We formulate the MS algorithm in Section 2, and review related work on the convergence analysis of the MS algorithm in Section 3. In Section 4, we describe the Lojasiewicz property, and summarize the class of functions having that property. On the basis of these preliminaries and abstract convergence theorems by [19, 20], we provide a novel sufficient condition to ensure the convergence of the MS algorithm and an evaluation of the convergence rate in Section 5. In Section 6, we conclude this paper, and furthermore, we mention variants of the MS algorithm to which the analysis of this paper can be applied similarly, and possible directions for future research. Supplementary material provides proofs of the theoretical results. ## 2 MS Algorithm Various applications of the MS algorithm stem from the characterization that the MS algorithm is an optimization algorithm seeking a local maximizer of the KDE. Given \(n\) data points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\in\mathbb{R}^{d}\), the KDE is constructed as \[f(\mathbf{x})\coloneqq\frac{1}{nh^{d}}\sum_{i=1}^{n}\kappa\left(\frac{\mathbf{x}-\mathbf{ x}_{i}}{h}\right), \tag{1}\] where \(K:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and \(h>0\) are called the kernel and the bandwidth parameter, respectively. Throughout this paper, for the kernel \(K\) we adopt the following assumption, which is common in studies of the MS algorithm: **Assumption 1**.: _The kernel \(K\) is bounded, continuous, non-negative, normalized, and radially symmetric._ The assumption of radial symmetry of the kernel \(K\) leads to its alternative representation \[K(\mathbf{x})=\hat{K}(\|\mathbf{x}\|^{2}/2) \tag{2}\] with what is called the profile \(\hat{K}:[0,\infty)\to\mathbb{R}\) of \(K\) and the Euclidean norm \(\|\cdot\|\) in \(\mathbb{R}^{d}\). As mentioned by [21, 22], the MS algorithm can be seen as an example of the minorization-maximization (MM) algorithm under a certain condition. The MM algorithm solves a hard original optimization problem by iteratively performing construction of what is called a minorizer of the original objective function and optimization of the minorizer. Let us write the right and left derivatives of \(\hat{K}\), if exist, as \[\hat{K}^{\prime}(u\pm)=\lim_{v\to u\pm 0}\frac{\hat{K}(v)-\hat{K}(u)}{v-u}. \tag{3}\] We make the following assumption for the profile \(\hat{K}\) of the kernel \(K\): **Assumption 2**.: _The kernel \(K\) has a convex and non-increasing profile \(\hat{K}\) satisfying \(\hat{K}^{\prime}(0+)>-\infty\)._ For a real-valued function \(g\) defined on \(S\subseteq\mathbb{R}\), the subdifferential \(\partial g(u)\) of \(g\) at \(u\in S\) is defined as the set of values \(c\in\mathbb{R}\) such that \(g(v)-g(u)\geq c(v-u)\) holds for any \(v\in S\). Under Assumption 2, since the profile \(\hat{K}\) is convex, the subdifferential \(\partial\hat{K}(u)\) is non-empty for any \(u\in(0,\infty)\) and given by \([\hat{K}^{\prime}(u-),\hat{K}^{\prime}(u+)]\). Note that \(\partial\hat{K}(0)=(-\infty,\hat{K}^{\prime}(0+)]\) is non-empty as well under the assumption \(\hat{K}^{\prime}(0+)>-\infty\). Since \(\partial\hat{K}(u)\) is non-empty for any \(u\in[0,\infty)\), one can show that the subdifferential \(\partial\hat{K}(u)\) is non-decreasing in the sense that for \(0\leq u<v\) one has \(\max\partial\hat{K}(u)\leq\min\partial\hat{K}(v)\): Indeed, for any \(u,v\) with \(0\leq u<v\), take any \(c_{u}\in\partial\hat{K}(u)\) and \(c_{v}\in\partial\hat{K}(v)\). 
From the definition of the subdifferential, one has \(\hat{K}(v)-\hat{K}(u)\geq c_{u}(v-u)\) and \(\hat{K}(u)-\hat{K}(v)\geq c_{v}(u-v)\), which are summed up to \(0\geq(c_{u}-c_{v})(v-u)\), yielding \(c_{u}\leq c_{v}\). See also [23, Section 24] for these properties of subdifferentials of functions on \(\mathbb{R}\). Furthermore, as the profile \(\hat{K}\) is non-increasing, for any \(u\in[0,\infty)\) one has \(\max\partial\hat{K}(u)\leq 0\). Thus, defining a function \(\hat{K}\) on \([0,\infty)\) via \[\hat{K}(u)\begin{cases}:=-\hat{K}^{\prime}(0+)&\text{if $u=0$},\\ \in-\partial\hat{K}(u)&\text{if $u>0$},\end{cases} \tag{4}\] it is non-increasing, non-negative, and bounded since \(\hat{K}(u)\leq\hat{K}(0)=-\hat{K}^{\prime}(0+)<\infty\) for any \(u\in[0,\infty)\) due to Assumption 2. As \(-\hat{K}(u)\in\partial\hat{K}(u)\), the definition of the subdifferential yields \(\hat{K}(v)-\hat{K}(u)\geq-\hat{K}(u)(v-u)\) for any \(u,v\in[0,\infty)\). Substituting \((u,v)=(\|\mathbf{x}^{\prime}\|^{2}/2,\|\mathbf{x}\|^{2}/2)\) into this inequality, one has \[K(\mathbf{x})\geq\tilde{K}(\mathbf{x}|\mathbf{x}^{\prime})\coloneqq K(\mathbf{x}^{\prime})+ \frac{\hat{K}(\|\mathbf{x}^{\prime}\|^{2}/2)}{2}(\|\mathbf{x}^{\prime}\|^{2}-\|\mathbf{x} \|^{2}) \tag{5}\] for any \(\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{R}^{d}\). One also has \(K(\mathbf{x}^{\prime})=\tilde{K}(\mathbf{x}^{\prime}|\mathbf{x}^{\prime})\). These properties imply that, under Assumptions 1 and 2, \(\tilde{K}(\mathbf{x}|\mathbf{x}^{\prime})\) is a minorizer of the kernel \(K\) at \(\mathbf{x}^{\prime}\). It should be noted that there is arbitrariness in the definition (4) of \(\tilde{K}(u)\) at those values of \(u\) at which \(\partial\hat{K}(u)\) contains more than a single value. For example, the profile of the Epanechnikov kernel is given by \(\hat{K}(u)=C(1-u)_{+}\) with \(C>0\), where \((\cdot)_{+}\coloneqq\max\{\cdot,0\}\), and thus \(\partial\hat{K}(1)=[-C,0]\). In this case one may adopt any value in the interval \([0,C]\) as \(\tilde{K}(1)\). Indeed, [13] adopted \(\tilde{K}(1)=C\), whereas [14] adopted \(\tilde{K}(1)=0\). We would like to note here that the following analysis is not affected by how \(\tilde{K}(u)\) is defined at such points. The MS algorithm given a \(t\)th estimate \(\mathbf{y}_{t}\in\mathbb{R}^{d}\) builds a minorizer of the KDE \(f\) at \(\mathbf{y}_{t}\) as \[\begin{split}\bar{f}(\mathbf{x}|\mathbf{y}_{t})&\coloneqq \frac{1}{nh^{d}}\sum_{i=1}^{n}\tilde{K}\left(\frac{\mathbf{x}-\mathbf{x}_{i}}{h}\middle| \frac{\mathbf{y}_{t}-\mathbf{x}_{i}}{h}\right)\\ &=-\frac{1}{2nh^{d+2}}\sum_{i=1}^{n}\tilde{K}\left(\left\|\frac{ \mathbf{y}_{t}-\mathbf{x}_{i}}{h}\middle|^{2}/2\right)\|\mathbf{x}-\mathbf{x}_{i}\|^{2}\\ &+(\mathbf{x}\text{-independent constant}),\end{split} \tag{6}\] which satisfies \(\bar{f}(\mathbf{y}_{t}|\mathbf{y}_{t})=f(\mathbf{y}_{t})\) and \(\bar{f}(\mathbf{x}|\mathbf{y}_{t})\leq f(\mathbf{x})\) for any \(\mathbf{x}\in\mathbb{R}^{d}\). Introduce a function \[\bar{f}(\mathbf{x})\coloneqq\frac{1}{nh^{d}}\sum_{i=1}^{n}\tilde{K}\left(\left\| \frac{\mathbf{x}-\mathbf{x}_{i}}{h}\middle|^{2}/2\right), \tag{7}\] with which the coefficient of the quadratic term \(\|\mathbf{x}\|^{2}\) in \(\bar{f}(\mathbf{x}|\mathbf{y}_{t})\) is expressed as \(-\bar{f}(\mathbf{x})/(2h^{2})\). Assumption 2 ensures that \(\bar{f}(\mathbf{x})\) is non-negative due to the non-negativity of \(\tilde{K}(u)\). 
Furthermore, if \(\bar{f}(\mathbf{y}_{t})=0\), then all the summands on the right-hand side of (7) are zero and hence the function \(\bar{f}(\cdot|\mathbf{y}_{t})\) is constant. If \(\bar{f}(\mathbf{y}_{t})>0\), on the other hand, then the function \(\bar{f}(\cdot|\mathbf{y}_{t})\) is quadratic and has a unique maximizer. The MS algorithm then calculates the next estimate \(\mathbf{y}_{t+1}\) as \(\mathbf{y}_{t+1}\in\arg\max_{\mathbf{x}\in\mathbb{R}^{d}}\bar{f}(\mathbf{x}|\mathbf{y}_{t})\). More specifically, the MS algorithm calculates \(\mathbf{y}_{t+1}\) via \[\mathbf{y}_{t+1}=\mathbf{y}_{t}+\mathbf{m}(\mathbf{y}_{t}), \tag{8}\] where \[\mathbf{m}(\mathbf{y})\coloneqq\begin{cases}\mathbf{0}&\text{if $\bar{f}(\mathbf{y})=0$},\\ -\frac{\sum_{i=1}^{n}\tilde{K}(\|\frac{\mathbf{y}-\mathbf{x}_{i}}{h}\|^{2}/2)(\mathbf{y}-\mathbf{x}_{i})}{\sum_{i=1}^{n}\tilde{K}(\|\frac{\mathbf{y}-\mathbf{x}_{i}}{h}\|^{2}/2)}&\text{if $\bar{f}(\mathbf{y})\neq 0$},\end{cases} \tag{9}\] with the all-zero vector \(\mathbf{0}\in\mathbb{R}^{d}\). The MS algorithm iterates the update rule (8) starting from a given initial estimate \(\mathbf{y}_{1}\in\mathbb{R}^{d}\) while incrementing the subscript \(t\in\mathbb{N}\). Therefore, the MS algorithm can be regarded as an instance of the MM algorithm. Here, the update rule when \(\bar{f}(\mathbf{y}_{t})=0\) is an exception-handling rule to avoid the MS algorithm being ill-defined due to the denominator (\(=nh^{d}\bar{f}(\mathbf{y}_{t})\)) of the ordinary update rule being zero. Under Assumptions 1 and 2, if \(\bar{f}(\mathbf{y}_{t})=0\) then the gradient of the KDE \(f\) also vanishes, that is, \(\mathbf{y}_{t}\) is a critical point of \(f\). Therefore, the exception-handling rule ensures that the MS algorithm stops at a critical point. Also, the following proposition shows that no such exception occurs if one selects an initial estimate \(\mathbf{y}_{1}\) properly: **Proposition 1**.: _Assume Assumptions 1 and 2. Let \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) be the mode estimate sequence obtained by the MS algorithm (8) starting from an initial estimate \(\mathbf{y}_{1}\in\mathbb{R}^{d}\) with \(f(\mathbf{y}_{1})>0\). Then one has \(\bar{f}(\mathbf{y}_{t})>0\) for every \(t\in\mathbb{N}\)._ In practice, one typically adopts one of the data points as the initial estimate \(\mathbf{y}_{1}\), and hence the additional assumption \(f(\mathbf{y}_{1})>0\) definitely holds. The above construction of the MS algorithm as the MM algorithm shows the ascent property \(f(\mathbf{y}_{t})=\bar{f}(\mathbf{y}_{t}|\mathbf{y}_{t})\leq\bar{f}(\mathbf{y}_{t+1}|\mathbf{y}_{t})\leq f(\mathbf{y}_{t+1})\) of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\), and the boundedness of the KDE \(f\) (due to Assumption 1) guarantees the convergence of that sequence: **Proposition 2** (Theorem 1 in [15]).: _Assume Assumptions 1 and 2. For the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) obtained by the MS algorithm (8) starting from any \(\mathbf{y}_{1}\in\mathbb{R}^{d}\), the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) is non-decreasing and converges._ The above proposition guarantees the convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) generated by the MS algorithm. From the application point of view, however, what we are interested in is not the convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) but that of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\), since it is the limit \(\lim_{t\to\infty}\mathbf{y}_{t}\), if it exists, that will tell us the location of a mode or a cluster center.
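For concreteness, the update rule (8)-(9) can be written out in a few lines of code. The following is a minimal NumPy sketch, not part of the original formulation: the biweight profile, the bandwidth, the synthetic two-cluster data, and the stopping tolerance are all illustrative choices, and the function names are ours.

```python
import numpy as np

def neg_profile_derivative(u):
    """-K_hat'(u) for the biweight profile K_hat(u) = {(1-u)_+}^2, up to the
    normalizing constant, which cancels in the ratio (9)."""
    return 2.0 * np.maximum(1.0 - u, 0.0)

def ms_step(y, X, h):
    """One mean shift update y -> y + m(y), cf. (8)-(9)."""
    u = np.sum(((y - X) / h) ** 2, axis=1) / 2.0   # ||(y - x_i)/h||^2 / 2
    w = neg_profile_derivative(u)                   # K~ evaluated at each data point
    if w.sum() == 0.0:                              # exception handling: m(y) = 0
        return y
    return (w[:, None] * X).sum(axis=0) / w.sum()   # weighted mean of the data

def mean_shift(y1, X, h, max_iter=500, tol=1e-12):
    """Iterate (8) from the initial estimate y1 until the shift becomes negligible."""
    y = np.asarray(y1, dtype=float)
    for _ in range(max_iter):
        y_next = ms_step(y, X, h)
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
print(mean_shift(X[0], X, h=0.5))   # converges towards a nearby mode of the KDE
```

Since the biweight profile is convex and non-increasing (Assumption 2), each step here maximizes the quadratic minorizer (6), and the returned point approximates a critical point of the KDE rather than, necessarily, its global mode.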
The difficulty here is that one cannot deduce the convergence of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) from the convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) without additional assumptions. Our main interest in this paper lies in convergence properties of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) obtained by the MS algorithm, such as whether it converges to a critical point, as well as its convergence rate when it converges. ## 3 Related Work Convergence properties of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) have been discussed in several papers. Some early convergence studies are, however, not rigorous. For instance, the proof in [3] used an incorrect inequality evaluation to claim that the mode estimate sequence is a Cauchy sequence; see counterexamples given in [24]. Essentially the same flaw had been shared by [25] in the discussion of consistency, which was subsequently amended in the errata [26] to [25]. [27] claimed the convergence of the mode estimate sequence under the assumption that the MS algorithm uses the Gaussian kernel, on the basis of the fact that the MS algorithm under this assumption is an example of the expectation-maximization (EM) algorithm [28]. As pointed out by [29], however, this reasoning alone is not enough: the EM algorithm may not converge without additional conditions [30]. Later studies have successfully provided some sufficient conditions for the convergence of the mode estimate sequence. In [24], the convergence of the mode estimate sequence has been proved under the assumption that the KDE has a finite number of critical points inside the convex hull of data points. For example, when the Epanechnikov kernel is used, the KDE is shown to have a finite number of critical points, so that the result of [24] is applicable to provide a convergence guarantee. For the Epanechnikov kernel, something even stronger holds true: [13] and [14] proved that the MS algorithm converges in a finite number of iterations. Another instance for which the finiteness of critical points, and consequently the convergence of the mode estimate sequence, have been shown is the 1-dimensional KDE with the Gaussian kernel. See, e.g., [31] and [32]. However, it is not known whether the number of critical points of the KDE with the Gaussian kernel for the dimension \(d\geq 2\) is finite. See, e.g., [33], where upper and lower bounds of the number of _non-degenerate_ critical points were given, whereas they wrote that the finiteness of the number of critical points is still open. Although [34] provided a condition under which the KDE with the Gaussian kernel has a finite number of critical points, his condition requires taking the bandwidth of the kernel large enough. Under this condition, mode estimates to be obtained would have a large statistical bias. Furthermore, the KDE with a large bandwidth might even yield a far smaller number of mode estimates than the actual number of the true modes when the data-generating distribution has multiple modes. Therefore, its practical significance is quite obscure, in view of applications of the MS algorithm such as data clustering and mode estimation. Additionally, in the 1-dimensional case, [29] proved the convergence of the mode estimate sequence for various kernels, by showing that its subsequence around a critical point becomes a bounded monotonic sequence. 
However, this proof strategy cannot be extended to the multi-dimensional case. More recently, [15] have given a convergence proof of the MS algorithm using analytic kernels, including the Gaussian kernel. Their proof takes advantage of the Lojasiewicz property [16, 17] (see Definition 1) of an analytic kernel and the corresponding KDE, while not requiring assumptions either on the finiteness of critical points of the KDE, on the non-degeneracy of KDE's Hessian at critical points, on the size of the bandwidth, or on the dimension of the data. Thus, their result is significant in that it guarantees the convergence of the MS algorithm under practical settings on the bandwidth parameter, and even in the multi-dimensional case. To summarize, it is only when the MS algorithm uses the Epanechnikov kernel [13, 14] or an analytic kernel [15] that the convergence of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) has been guaranteed without regard to the size of the bandwidth parameter or the data dimension. Much less is known so far about the convergence rate. Previous studies have clarified only the finite-time convergence when the algorithm uses the Epanechnikov kernel [13, 14] and the linear convergence when the algorithm uses the Gaussian kernel and the KDE has a non-degenerate Hessian at the convergent point [27]. The convergence rate when the Hessian is degenerate has not been clarified. ## 4 Preliminaries: Lojasiewicz Property As mentioned above, [15] proved the convergence of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) of the MS algorithm using an analytic kernel, without regard to the bandwidth parameter or the data dimension. The key in their proof is the Lojasiewicz property/inequality for an analytic function [16, 17], which provides a lower bound of the flatness of the the function around its critical points. In the convergence analysis of the MS algorithm, this bound in turn allows us to transfer the convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) to that of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\). In this paper, we follow a similar line to that of [15], but we rely on more advanced results about the Lojasiewicz property. We here describe the Lojasiewicz property, and important classes of functions having that property. We adopt the following definition of the Lojasiewicz property/inequality, along with related notions. **Definition 1** (Lojasiewicz property/inequality/exponent).: _A function \(g:S\to\mathbb{R}\) with \(S\subseteq\mathbb{R}^{d}\) is said to have the Lojasiewicz property at \(\mathbf{x}^{\prime}\in S\) with an exponent \(\theta\), if there exists \(\epsilon>0\) such that \(g\) is differentiable on \(U(\mathbf{x}^{\prime},S,\epsilon)\coloneqq\{\mathbf{x}\in S\mid\|\mathbf{x}^{\prime}-\mathbf{x }\|<\epsilon,g(\mathbf{x}^{\prime})-g(\mathbf{x})\geq 0\}\) and satisfies the Lojasiewicz inequality_ \[\|\nabla_{g}(\mathbf{x})\|\geq c\{g(\mathbf{x}^{\prime})-g(\mathbf{x})\}^{\theta} \tag{10}\] _with \(c>0\), \(\theta\in[0,1)\), and any \(\mathbf{x}\in U(\mathbf{x}^{\prime},S,\epsilon)\). 
Also, \(g\) is said to have the Lojasiewicz property on \(T\subseteq S\) (when \(T=S\), we omit "on \(T\)"), if \(g\) is differentiable on \(T\) and there exists \(\epsilon>0\) such that \(g\) satisfies the Lojasiewicz inequality (10) with \(c>0\), \(\theta\in[0,1)\), and any \((\mathbf{x}^{\prime},\mathbf{x})\) such that \(\mathbf{x}^{\prime}\in T,\mathbf{x}\in U(\mathbf{x}^{\prime},T,\epsilon)\). Moreover, the minimum value of \(\theta\), with which \(g\) has the Lojasiewicz property at \(\mathbf{x}^{\prime}\), is called the Lojasiewicz exponent of \(g\) at \(\mathbf{x}^{\prime}\)._ Intuitively, the Lojasiewicz property of a function \(g\) at \(\mathbf{x}^{\prime}\) quantifies how flat the function \(g\) is around the point \(\mathbf{x}^{\prime}\). It is obvious from the definition that, for any \(\theta\in[0,1)\), if \(g\) has the Lojasiewicz property at \(\mathbf{x}^{\prime}\) with an exponent \(\theta\), then for any \(\theta^{\prime}\in[\theta,1)\) the same holds true with the exponent \(\theta^{\prime}\) as well. It is thus the minimum possible exponent \(\theta\) (i.e., the Lojasiewicz exponent) that is informative. If \(g\) is continuously differentiable at \(\mathbf{x}^{\prime}\) and if \(\mathbf{x}^{\prime}\) is a non-critical point of \(g\) (that is, \(\nabla_{g}(\mathbf{x}^{\prime})\neq\mathbf{0}\)), then for any \(\theta\in[0,1)\), \(g\) trivially has the Lojasiewicz property at \(\mathbf{x}^{\prime}\) with the exponent \(\theta\), implying that \(g\) is "maximally non-flat" at \(\mathbf{x}^{\prime}\). If, on the other hand, \(\mathbf{x}^{\prime}\) is a local minimum of \(g\), then with a sufficiently small \(\epsilon\) one has \(U(\mathbf{x}^{\prime},S,\epsilon)=\{\mathbf{x}^{\prime}\}\), implying that \(g\) has the Lojasiewicz property at the local minimum \(\mathbf{x}^{\prime}\). These facts demonstrate that Definition 1 is tailored primarily for characterizing the flatness of \(g\) around its critical points except local minima. As a more concrete example let us take \[g(\mathbf{x})=g(\mathbf{x}^{\prime})-\|\mathbf{x}-\mathbf{x}^{\prime}\|^{\alpha},\quad\alpha>1. \tag{11}\] One then has \[\|\nabla g(\mathbf{x})\|=\alpha\|\mathbf{x}-\mathbf{x}^{\prime}\|^{\alpha-1}=\alpha\{g( \mathbf{x}^{\prime})-g(\mathbf{x})\}^{1-1/\alpha}, \tag{12}\] implying that \(g\) has the Lojasiewicz property at \(\mathbf{x}^{\prime}\) with the exponent \(\theta\geq 1-1/\alpha\). As one takes a larger \(\alpha\), \(g\) gets "flatter" at \(\mathbf{x}^{\prime}\), and correspondingly the Lojasiewicz exponent \(1-1/\alpha\) becomes larger. As another example, let \[g(\mathbf{x})=g(\mathbf{x}^{\prime})-e^{-\|\mathbf{x}-\mathbf{x}^{\prime}\|^{-\beta}}\mathbbm{1 }(\mathbf{x}\neq\mathbf{x}^{\prime}),\quad\beta>0, \tag{13}\] where \(\mathbbm{1}\left(c\right)\) is the indicator function that takes the value \(1\) if the condition \(c\) is true, and \(0\) otherwise. 
One then has \[\begin{split}\|\nabla g(\mathbf{x})\|&=\beta e^{-\|\mathbf{x}-\mathbf{x}^{\prime}\|^{-\beta}}\|\mathbf{x}-\mathbf{x}^{\prime}\|^{-\beta-1}\mathbbm{1}(\mathbf{x}\neq\mathbf{x}^{\prime})\\ &=\beta h(g(\mathbf{x}^{\prime})-g(\mathbf{x}))\end{split} \tag{14}\] with \(h(z)=z(-\log z)^{1+1/\beta}\mathbbm{1}(z>0)\), \(z\geq 0\), on the basis of which one can show that \(g\) does not have the Lojasiewicz property at \(\mathbf{x}^{\prime}\) as defined in Definition 1, that is, it is "too flat" at \(\mathbf{x}^{\prime}\) to be captured by this definition1, since for any \(\theta\in[0,1)\) one has Footnote 1: We would like to mention, however, that an extended definition of the Lojasiewicz property, provided in supplementary material, allows us to capture the flatness in this example as well. \[\frac{h(z)}{z^{\theta}}=z^{1-\theta}(-\log z)^{1+1/\beta}\overset{z\to 0+}{\longrightarrow}0. \tag{15}\] The significance of the Lojasiewicz property for our purpose is that it allows us to convert the convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) into that of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) when the KDE \(f\) is "not too flat," as well as that, when the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) converges, the property can provide a guarantee of faster convergence when \(f\) is "less flat" at the limit, as will be discussed in Section 5. [16] showed that analytic functions have the Lojasiewicz property, and thereafter, [35] generalized that result to the class of \(C^{1}\) functions with o-minimal structure (see also [36]), which particularly includes \(C^{1}\) globally subanalytic functions:2 Footnote 2: More recently, [37, 38] extended the definition of the Lojasiewicz inequality to the case of non-smooth functions, and showed that continuous globally subanalytic functions satisfy that generalized Lojasiewicz inequality. Succeeding studies such as [39, 40, 19, 20, 41] used it to construct abstract convergence theorems for various optimization algorithms. We also attempted convergence analysis according to such a general framework that allows non-smooth objective functions, but, even for the MS algorithm, we could not avoid the smoothness assumption (assumption (a1) in Theorem 1 or Assumption 3 in Theorem 2, in Section 5.1). Such difficulty is also discussed in [40, 20]. Therefore, from Section 4 onwards, we adopt a simple framework that supposes the smoothness even if it can be generalized to the non-smooth case. Also, according to the boundedness assumption (Assumption 1), we omit devices used to handle unbounded functions. **Proposition 3**.: _Any \(C^{1}\) globally subanalytic function \(g:S\to\mathbb{R}\) with \(S\subseteq\mathbb{R}^{d}\) has the Lojasiewicz property._
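Before making these notions precise, the contrast between examples (11) and (13) can also be checked numerically. The sketch below is purely illustrative (the values of \(\alpha\), \(\beta\), \(\theta\), and the evaluation points are arbitrary): it evaluates the ratio \(\|\nabla g(\mathbf{x})\|/\{g(\mathbf{x}^{\prime})-g(\mathbf{x})\}^{\theta}\) in one dimension with \(\mathbf{x}^{\prime}=0\), which stays constant for (11) with \(\theta=1-1/\alpha\) but tends to zero for (13) for every fixed \(\theta<1\).

```python
import numpy as np

# Example (11): g(x) = g(x') - |x - x'|^alpha in one dimension, x' = 0.
# Then ||grad g|| = alpha * r^(alpha-1) and g(x') - g(x) = r^alpha at distance r,
# so with theta = 1 - 1/alpha the Lojasiewicz ratio is the constant alpha.
alpha = 4.0
theta = 1.0 - 1.0 / alpha
r = np.logspace(-1, -8, 8)
print("example (11):", alpha * r ** (alpha - 1.0) / (r ** alpha) ** theta)

# Example (13): g(x) = g(x') - exp(-|x - x'|^(-beta)), "too flat" at x'.
# The same ratio equals beta * exp(-(1 - theta) * r^(-beta)) * r^(-beta - 1),
# written here in a numerically stable form; it vanishes as r -> 0.
beta = 1.0
r = np.logspace(-1, -4, 4)
for theta in (0.5, 0.9, 0.99):
    print("example (13), theta =", theta, ":",
          beta * np.exp(-(1.0 - theta) * r ** (-beta)) * r ** (-beta - 1.0))
```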
**Definition 2** (Global subanalyticity and related notions).: * _A set_ \(S\subseteq\mathbb{R}^{d}\) _is called_ semialgebraic_, if there exists a finite number of polynomial functions_ \(g_{ij}:\mathbb{R}^{d}\to\mathbb{R}\) _such that_ \(S=\bigcup_{i=1}^{p}\bigcap_{j=1}^{q}\{\mathbf{x}\in\mathbb{R}^{d}\mid g_{ij}(\mathbf{x})\,\sigma_{ij}\,0\}\) _with relational operators_ \(\sigma_{ij}\in\{<,>,=\}\)_._ * _A set_ \(S\subseteq\mathbb{R}^{d}\) _is called_ semianalytic_, if for each point_ \(\mathbf{x}^{\prime}\in\mathbb{R}^{d}\) _there exist a neighborhood_ \(T\) _of_ \(\mathbf{x}^{\prime}\) _and a finite number of analytic functions_ \(g_{ij}:T\to\mathbb{R}\) _such that_ \(S\cap T=\bigcup_{i=1}^{p}\bigcap_{j=1}^{q}\{\mathbf{x}\in T\mid g_{ij}(\mathbf{x})\,\sigma_{ij}\,0\}\) _with relational operators_ \(\sigma_{ij}\in\{<,>,=\}\)_._ * _A set_ \(S\subseteq\mathbb{R}^{d}\) _is called_ subanalytic_, if for each point_ \(\mathbf{x}^{\prime}\in\mathbb{R}^{d}\) _there exist a neighborhood_ \(T\) _of_ \(\mathbf{x}^{\prime}\) _and a bounded semianalytic set_ \(U\subseteq\mathbb{R}^{d+d^{\prime}}\) _with_ \(d^{\prime}\geq 1\) _such that_ \(S\cap T=\{\mathbf{x}\in\mathbb{R}^{d}\mid(\mathbf{x},\mathbf{y})\in U\text{ for some }\mathbf{y}\in\mathbb{R}^{d^{\prime}}\}\)_._ * _A set_ \(S\subseteq\mathbb{R}^{d}\) _is called_ globally semianalytic _(resp._ globally subanalytic_), if its image under the map_ \(\mathbf{x}\mapsto(x_{1}/(1+x_{1}^{2})^{1/2},\ldots,x_{d}/(1+x_{d}^{2})^{1/2})\) _is a semianalytic (resp. subanalytic) subset of_ \(\mathbb{R}^{d}\)_._ * _A function_ \(g:S\to\mathbb{R}\) _with_ \(S\subseteq\mathbb{R}^{d}\) _is called_ semialgebraic _(resp._ semianalytic, subanalytic, globally semianalytic, _or_ globally subanalytic)_, if its graph_ \(\{(\mathbf{x},y)\in S\times\mathbb{R}\mid y=g(\mathbf{x})\}\) _is a semialgebraic (resp. semianalytic, subanalytic, globally semianalytic, or globally subanalytic) subset of_ \(\mathbb{R}^{d+1}\)_._ * _A function_ \(g:S\to\mathbb{R}\) _with_ \(S\subseteq\mathbb{R}^{d}\) _is called_ piecewise polynomial _with the maximum degree_ \(k\in\mathbb{N}\)_, if there exists a finite collection_ \(\{S_{l}\}_{l\in[L]}\) _of subdomains_ \(S_{l}\subseteq S\)_,_ \(l\in[L]\)_, that forms a partition of_ \(S\) _(i.e.,_ \(S_{l}\neq\emptyset\) _for all_ \(l\in[L]\)_,_ \(S_{l}\cap S_{l^{\prime}}=\emptyset\) _for all_ \(l,l^{\prime}\in[L]\) _with_ \(l\neq l^{\prime}\)_, and_ \(\cup_{l\in[L]}S_{l}=S\)_), such that_ \(g(\mathbf{x})=g_{l}(\mathbf{x})\) _for any_ \(\mathbf{x}\in S_{l}\) _(i.e., the restriction of_ \(g\) _to_ \(S_{l}\) _is the same as that of_ \(g_{l}\) _to_ \(S_{l}\)_) with a polynomial_ \(g_{l}:S\to\mathbb{R}\) _for each_ \(l\in[L]\)_, and that the maximum degree of_ \(\{g_{l}\}_{l\in[L]}\) _is_ \(k\)_._ The class of semialgebraic functions has a wide variety of instances: polynomial, rational, and more generally piecewise polynomial functions are semialgebraic [43, 44]. As will be discussed in the next section, the class of piecewise polynomial functions, which includes the biweight kernel, is of particular importance in the discussion of this study. Any globally semianalytic function is semianalytic, and any semianalytic function with a bounded graph is globally semianalytic [42, before Example 1.1.4]. Any globally subanalytic function is subanalytic, and any subanalytic function with a bounded graph is globally subanalytic [37, after Definition 2.2]. Also, semianalytic functions are subanalytic (which can be seen from Definition 2), globally semianalytic functions are globally subanalytic [42, Definition 1.1.6], and semialgebraic functions are globally semianalytic [42, Example 1.1.4].
Note that an analytic function is not necessarily globally subanalytic (of course, the converse is not necessarily true either: a globally subanalytic function is not necessarily analytic). For example, \(g(x)=\sin(x)\), \(x\in\mathbb{R}\), is certainly analytic but not globally subanalytic [42, Example 1.1.7]. Moreover, it should be noted that a semianalytic/subanalytic function (e.g., the sine function defined on \(\mathbb{R}\)) and a \(C^{\infty}\) function are not necessarily globally subanalytic and do not always have the Lojasiewicz property; the "Mexican hat" function (equation (2.8) in [17]) and the function shown on page 14 of [45] are instances that are in the \(C^{\infty}\) class and not globally subanalytic, and these functions do not have the Lojasiewicz property. These inclusion relations are summarized in Figure 1. As stated in Proposition 3, in view of the Lojasiewicz property, what is important for our purpose is to provide sufficient conditions for the KDE to be \(C^{1}\) globally subanalytic. Thus, sufficient conditions for global subanalyticity in the above inclusion relations, as well as the stability of the global subanalyticity under the summation [42, Properties 1.1.8], are important, which are summarized as follows: **Proposition 4**.: _Any semialgebraic or globally semianalytic functions, any semianalytic or subanalytic functions with a bounded graph, and the summation of any globally subanalytic functions are globally subanalytic._ ## 5 Main Results: Convergence Theorems for MS Algorithm ### _Convergence to a Critical Point_ In this subsection, we provide a sufficient condition for the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) of the MS algorithm to converge to a critical point of the KDE \(f\). Our result is along the same line as the existing convergence theorem by [15] for the MS algorithm using analytic kernels, and further extends it on the ground of Propositions 3 and 4 stating that \(C^{1}\) globally subanalytic kernels and the corresponding KDE have the Lojasiewicz property. Several recent studies in optimization theory, including [17, 19, 39, 40, 41], exploit the Lojasiewicz property to prove the convergence of various optimization algorithms. By applying abstract convergence theorems such as [19, Theorem 3.2] and [20, Theorem 3.1] to the MS algorithm, we obtain the following theorem:3 Footnote 3: As we have observed in Section 2 that the MS algorithm is an example of the MM algorithm, we might alternatively be able to apply abstract convergence theorems for the MM algorithm [41] to the MS algorithm. Although convergence of the MS algorithm could be proved in this way, the resulting bound of the convergence rate can become looser than that given by Theorems 3 and 4 in this paper. This is because that bound depends on the Lojasiewicz exponent of the function \(f(\mathbf{x}+\mathbf{m}(\mathbf{x})|\mathbf{x})\) (called the value function) introduced in [41] (not of the KDE), which is in general flatter than the KDE at the critical point. **Theorem 1** (Convergence guarantee).: _Assume Assumptions 1 and 2. Let \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) be the mode estimate sequence obtained by the MS algorithm (8) starting from \(\mathbf{y}_{1}\) with \(f(\mathbf{y}_{1})>0\). Assume further, for the closure \(\operatorname{cl}(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t\geq\tau}))\) of the convex hull \(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t\geq\tau})\) of \(\{\mathbf{y}_{t}\}_{t\geq\tau}\) with some \(\tau\in\mathbb{N}\), that_ 1. 
_the KDE_ \(f\) _is differentiable and has a Lipschitz-continuous gradient on_ \(\operatorname{cl}(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t\geq\tau}))\) _(i.e., there exists a constant_ \(L\geq 0\) _such that_ \(\|\nabla f(\mathbf{x})-\nabla f(\mathbf{x}^{\prime})\|\leq L\|\mathbf{x}-\mathbf{x}^{\prime}\|\) _for any_ \(\mathbf{x},\mathbf{x}^{\prime}\in\operatorname{cl}(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t \geq\tau}))\)_, where_ \(L\) _is called the Lipschitz constant of_ \(\nabla f\) _or_ \(\operatorname{cl}(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t\geq\tau}))\)_, and_ 2. _the KDE_ \(f\) _has the Lojasiewicz property on_ \(\operatorname{cl}(\operatorname{Conv}(\{\mathbf{y}_{t}\}_{t\geq\tau}))\)_._ _Then, the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) has a finite-length trajectory (i.e., \(\sum_{t=1}^{\infty}\|\mathbf{y}_{t+1}-\mathbf{y}_{t}\|<\infty\)) and converges to a critical point \(\hat{\mathbf{y}}\) of the KDE \(f\)._ We next argue how one can replace the assumptions (a1) and (a2) on the KDE \(f\) to assumptions on the kernel \(K\) in such a way that the latter ones provide sufficient conditions for the former ones. Let us focus on the assumption (a1) of Theorem 1 first. If a kernel \(K\) is differentiable with a Lipschitz-continuous gradient, then the KDE \(f\) using the kernel \(K\) trivially satisfies the assumption (a1) for any \(\tau\), simply because the summation of functions preserves the differentiability, as well as the Lipschitz continuity of the gradients. Therefore, for the convergence guarantee of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\), we can replace the assumption (a1) on the KDE \(f\) with the following assumption on the kernel \(K\): **Assumption 3**.: _The kernel \(K\) is differentiable and has a Lipschitz-continuous gradient._ Note that Assumption 3 also implies that the kernel \(K\) is of class \(C^{1}\). We next argue how one can replace the assumption (a2) of Theorem 1 with an assumption on the kernel \(K\). According to Propositions 3 and 4, when the kernel \(K\) is analytic or \(C^{1}\) globally subanalytic, it is clear that the corresponding KDE \(f\) is so as well and has the Lojasiewicz property. We argue in the following that requiring the kernel \(K\) to be \(C^{1}\) subanalytic is indeed enough in order for the assumption (a2) to hold: Under Assumptions 1 and 2, as well as the condition \(f(\mathbf{y}_{1})>0\), the mode estimate \(\mathbf{y}_{t}\) for \(t\geq 2\) becomes a convex combination of the data points \(\{\mathbf{x}_{i}\}_{i\in[n]}\), that is, a weighted mean of \(\{\mathbf{x}_{i}\}_{i\in[n]}\) with non-negative weights, and thus it lies in the convex hull \(\operatorname{Conv}(\{\mathbf{x}_{i}\}_{i\in[n]})\) of \(\{\mathbf{x}_{i}\}_{i\in[n]}\), which is a bounded set. Therefore, we can restrict the domain of every kernel \(K(\frac{-\mathbf{x}_{i}}{h})\), \(i=1,\ldots,n\), to \(\operatorname{Conv}(\{\mathbf{x}_{i}\}_{i\in[n]})\) without any problems. Also, every kernel \(K(\frac{-\mathbf{x}_{i}}{h})\) is bounded under Assumption 1. Therefore, when the kernel \(K\) is \(C^{1}\) subanalytic, the restriction of \(K(\frac{-\mathbf{x}_{i}}{h})\) to \(\operatorname{Conv}(\{\mathbf{x}_{i}\}_{i\in[n]})\) becomes a \(C^{1}\) subanalytic function with a bounded graph, and consequently, it is \(C^{1}\) globally subanalytic due to Proposition 4. Hence, the restriction of the corresponding KDE is also \(C^{1}\) globally subanalytic due to Proposition 4 and has the Lojasiewicz property due to Proposition 3. 
Given this consideration, we do not have to require global subanalyticity, and requiring \(C^{1}\) subanalyticity to the kernel \(K\) is sufficient for the assumption (a2) to be satisfied for any \(\tau\). Therefore, under Assumptions 1, 2, and 3 and the condition \(f(\mathbf{y}_{1})>0\), we can replace the assumption (a2) on the KDE \(f\) with the following assumption on the kernel \(K\): **Assumption 4**.: _The kernel \(K\) is analytic or subanalytic._ Consequently, the following theorem will be obtained as a direct corollary of Theorem 1, which assures the convergence independently of the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\). **Theorem 2** (Corollary of Theorem 1).: _Assume Assumptions 1, 2, 3, and 4. Let \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) be the mode estimate sequence obtained by the MS algorithm (8) starting from \(\mathbf{y}_{1}\) with \(f(\mathbf{y}_{1})>0\). Then, the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) has a finite-length trajectory and converges to a critical point \(\hat{\mathbf{y}}\) of the KDE \(f\)._ The main significance of Theorem 2 is that it reveals for the first time the convergence of the MS algorithm for several piecewise polynomial kernels including the biweight and triweight kernels. In particular, the biweight kernel is known to be optimal among non-negative kernels in terms of the asymptotic statistical efficiency for the KDE-based mode estimation [11, 12]. More concretely, for a mode of the true probability density function with a non-degenerate Hessian at the mode, the main term of the asymptotic mean squared error of the 1-dimensional KDE-based mode estimator using a non-negative kernel \(K\) and optimal bandwidth parameter for that kernel is proportional to the kernel-dependent term \((\int_{-\infty}^{\infty}u^{2}K(u)\,du)^{\frac{1}{2}}\cdot(\int_{-\infty}^{ \infty}\{K^{\prime}(u)\}^{2}\,du)^{\frac{1}{2}}\) (we call its inverse the asymptotic statistical efficiency), and [18] showed that the biweight kernel minimizes this kernel-dependent term. Moreover, [12] obtained similar results for the multi-dimensional case. The triweight kernel is also relatively good in the same perspective; see Table I where we arrange kernels in the order of the asymptotic statistical efficiency for the 1-dimensional case (calculated ignoring a finite number of non-differentiable points) from the top.4 Footnote 4: [46, 47] show that the Epanechnikov kernel minimizes the asymptotic mean integrated squared error of the KDE using the associated optimal bandwidth parameter among non-negative kernels. It should be noted, however, that, although this fact was mentioned in papers which study convergence properties of the MS algorithm, such as [13] and [14] it does not imply the optimality of the Epanechnikov kernel for the KDE-based mode estimation, a representative application of the MS algorithm, in any sense. ### _Convergence Rate_ In this subsection, we study convergence rate of the MS algorithm. As mentioned at the end of Section 3, there are only a few studies on the convergence rate of the MS algorithm: It was proved in [13] and [14] that the MS algorithm with the Epanechnikov kernel converges in a finite number of iterations, and in [27] that the MS algorithm with the Gaussian kernel exhibits linear convergence provided that the Hessian of the KDE at a critical point is non-degenerate. We here establish a convergence rate evaluation for other kernels under more general situations. 
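Before moving to the rate analysis, a brief numerical aside on Assumption 3 may help explain the distinction drawn in Table I between the biweight and Epanechnikov kernels. The sketch below (with unnormalized profiles and an arbitrary test point, both illustrative choices of ours) evaluates the kernel gradients just inside and just outside the support boundary \(\|\mathbf{x}\|^{2}/2=1\): the biweight gradient varies continuously across it, whereas the Epanechnikov gradient jumps, which is why the latter fails Assumption 3.

```python
import numpy as np

def grad_biweight(x):
    """Gradient of K(x) = {(1 - ||x||^2/2)_+}^2 (biweight profile, unnormalized)."""
    u = np.dot(x, x) / 2.0
    return -2.0 * max(1.0 - u, 0.0) * x

def grad_epanechnikov(x):
    """Gradient of K(x) = (1 - ||x||^2/2)_+ (Epanechnikov profile, unnormalized),
    valid away from the boundary ||x||^2/2 = 1, where K is not differentiable."""
    u = np.dot(x, x) / 2.0
    return -x if u < 1.0 else 0.0 * x

eps = 1e-6
inside = np.array([np.sqrt(2.0) - eps, 0.0])   # just inside the support boundary
outside = np.array([np.sqrt(2.0) + eps, 0.0])  # just outside

print(grad_biweight(inside), grad_biweight(outside))          # both nearly zero
print(grad_epanechnikov(inside), grad_epanechnikov(outside))  # jump of size about sqrt(2)
```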
Fig. 1: Inclusion relation among function classes.

Assume for a moment that the kernel \(K\) is three-times differentiable, in addition to Assumptions 1 and 2. Consider the Taylor expansion of the map \(\mathbf{y}_{t}\mapsto\mathbf{y}_{t+1}=\mathbf{y}_{t}+\mathbf{m}(\mathbf{y}_{t})\) around a critical point \(\bar{\mathbf{y}}\) of the KDE \(f\), \[\mathbf{y}_{t+1}=\bar{\mathbf{y}}+\mathbf{\mathsf{J}}(\bar{\mathbf{y}})(\mathbf{y}_{t}-\bar{\mathbf{y}})+O(\|\mathbf{y}_{t}-\bar{\mathbf{y}}\|^{2}). \tag{16}\] Ignoring the higher-order terms, one has the approximate relation \[\mathbf{y}_{t+1}-\bar{\mathbf{y}}\approx\mathbf{\mathsf{J}}(\bar{\mathbf{y}})(\mathbf{y}_{t}-\bar{\mathbf{y}}), \tag{17}\] where \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) is the Jacobian of the map \(\mathbf{x}\mapsto\mathbf{x}+\mathbf{m}(\mathbf{x})\) at \(\mathbf{x}=\bar{\mathbf{y}}\). Applying the approximate relation (17) recursively shows \(\mathbf{y}_{t}-\bar{\mathbf{y}}\approx\mathbf{\mathsf{J}}(\bar{\mathbf{y}})^{t-\tau}(\mathbf{y}_{\tau}-\bar{\mathbf{y}})\) for sufficiently large \(t\), \(\tau\in\mathbb{N}\) with \(t\geq\tau\). This approximation suggests that the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) achieves the exponential-rate convergence (also called the linear convergence) when the farthest-from-zero eigenvalue \(q\) of \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) has the absolute value less than 1: \(\|\mathbf{y}_{t}-\bar{\mathbf{y}}\|\leq|q|^{t-\tau}\|\mathbf{y}_{\tau}-\bar{\mathbf{y}}\|\) for sufficiently large \(t\). Also, the second-order Taylor expansion of the KDE \(f\) around the critical point \(\bar{\mathbf{y}}\) shows the exponential-rate convergence of the density estimate sequence \((f(\mathbf{y}_{t}))_{t\in\mathbb{N}}\) as \(|f(\bar{\mathbf{y}})-f(\mathbf{y}_{t})|\approx|(\mathbf{y}_{t}-\bar{\mathbf{y}})^{\top}\{\nabla^{2}f(\bar{\mathbf{y}})\}(\mathbf{y}_{t}-\bar{\mathbf{y}})|=O(q^{2(t-\tau)})\). Simple calculation reveals that the Jacobian \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) of the map \(\mathbf{x}\mapsto\mathbf{x}+\mathbf{m}(\mathbf{x})\) at \(\mathbf{x}=\bar{\mathbf{y}}\) is given by \[\mathbf{\mathsf{J}}(\bar{\mathbf{y}})=\frac{\sum_{i=1}^{n}\hat{K}^{\prime\prime}(\|\frac{\bar{\mathbf{y}}-\mathbf{x}_{i}}{h}\|^{2}/2)(\mathbf{x}_{i}-\bar{\mathbf{y}})(\mathbf{x}_{i}-\bar{\mathbf{y}})^{\top}}{-h^{2}\sum_{i=1}^{n}\hat{K}^{\prime}(\|\frac{\bar{\mathbf{y}}-\mathbf{x}_{i}}{h}\|^{2}/2)}. \tag{18}\] It should be noted that the denominator of the right-hand side of (18) is equal to \(nh^{d+2}f(\bar{\mathbf{y}})\geq 0\), which is positive if \(f(\bar{\mathbf{y}})>0\). As Assumption 2 ensures that \(\hat{K}^{\prime\prime}\) is non-negative, \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) becomes positive semidefinite. On the other hand, from \(\mathbf{m}(\mathbf{x})=\frac{h^{2}}{f(\mathbf{x})}\nabla f(\mathbf{x})\) and \(\nabla f(\bar{\mathbf{y}})=\mathbf{0}\), the Jacobian is also calculated as \[\mathbf{\mathsf{J}}(\bar{\mathbf{y}})=\mathbf{\mathsf{I}}_{d}+\frac{h^{2}}{f(\bar{\mathbf{y}})}\nabla^{2}f(\bar{\mathbf{y}}), \tag{19}\] where \(\mathbf{\mathsf{I}}_{d}\) is the \(d\times d\)-identity matrix. The fact that \(\nabla^{2}f\) at a local maximizer becomes negative semidefinite, together with the positive semidefiniteness of the Jacobian \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) mentioned above, implies that \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) at a local maximizer \(\bar{\mathbf{y}}\) of \(f\) has eigenvalues within the interval \([0,1]\).
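The linear-rate prediction of (16)-(19) can be probed numerically. The following sketch is an illustration only: the one-dimensional Gaussian-profile setting, the synthetic data, and the finite-difference Jacobian are our own choices, not part of the analysis above. It runs the MS iteration to convergence, differentiates the map \(\mathbf{x}\mapsto\mathbf{x}+\mathbf{m}(\mathbf{x})\) numerically at the limit, and compares the resulting eigenvalue with the contraction factor observed over one step near the limit.

```python
import numpy as np

h = 0.8
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (200, 1))           # synthetic 1-D data

def m(y):
    """Mean shift vector with the Gaussian profile, for which K~(u) = exp(-u)."""
    u = np.sum(((y - X) / h) ** 2, axis=1) / 2.0
    w = np.exp(-u)
    return (w[:, None] * X).sum(axis=0) / w.sum() - y

# Run the MS iteration (8) to numerical convergence.
y = np.array([2.0])
for _ in range(2000):
    y = y + m(y)
y_bar = y                                     # approximate critical point

# Jacobian of x -> x + m(x) at y_bar, by central finite differences.
eps = 1e-5
J = np.array([[(m(y_bar + eps)[0] - m(y_bar - eps)[0]) / (2 * eps) + 1.0]])
q_pred = np.max(np.abs(np.linalg.eigvals(J)))

# Observed per-step contraction ||y_{t+1} - y_bar|| / ||y_t - y_bar|| near y_bar.
y = y_bar + 1e-3
y_next = y + m(y)
q_obs = np.linalg.norm(y_next - y_bar) / np.linalg.norm(y - y_bar)
print(q_pred, q_obs)                          # the two factors should roughly agree
```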
The following proposition, which is a generalization of [27] with the Gaussian kernel to that with a generic three-times differentiable kernel, shows the linear convergence when the Hessian \(\nabla^{2}f(\bar{\mathbf{y}})\) is non-degenerate. **Proposition 5** (Linear convergence in non-degenerate case).: _Assume Assumptions 1 and 2, that \(K\) is three-times differentiable, that the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) obtained by the MS algorithm (8) converges to \(\bar{\mathbf{y}}=\lim_{t\to\infty}\mathbf{y}_{t}\), and that the Hessian \(\nabla^{2}f(\bar{\mathbf{y}})\) of the KDE \(f\) at \(\bar{\mathbf{y}}\) is negative definite. Then, the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) achieves the linear convergence: \(\|\bar{\mathbf{y}}-\mathbf{y}_{t}\|=O(q^{t})\) and \(f(\bar{\mathbf{y}})-f(\mathbf{y}_{t})=O(q^{2t})\) with \(q=1+\frac{h^{2}}{f(\bar{\mathbf{y}})}\lambda\in[0,1)\), where \(\lambda\in[-\frac{f(\bar{\mathbf{y}})}{h^{2}},0)\) is the largest eigenvalue of the Hessian \(\nabla^{2}f(\bar{\mathbf{y}})\)._ In the above proposition, we excluded from our consideration the case where the Hessian \(\nabla^{2}f(\bar{\mathbf{y}})\) is degenerate. When the Hessian is degenerate, the Jacobian \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) has the largest eigenvalue equal to 1, and then analysis based on the first-order Taylor approximation of \(\mathbf{\mathsf{J}}(\bar{\mathbf{y}})\) does not lead to the (linear) convergence of the MS algorithm. In order to evaluate convergence rate along the same line of the analysis in such cases, one might have to investigate effects of the higher order terms in more detail. Discussion based on the Lojasiewicz property allows us to derive convergence rate of the MS algorithm under a weaker assumption on differentiability. More concretely, by applying [20, Theorem 3.5], we can prove the following theorem on the convergence rate of the MS algorithm that covers more general kernels and the degenerate case as well. It provides upper bounds of the convergence rate in terms of the Lojasiewicz exponent \(\theta\) of the KDE. **Theorem 3** (Convergence rate evaluation).: _Under the same assumptions as in Theorem 1 or 2, assume further that the KDE \(f\) has the Lojasiewicz exponent \(\theta\) at \(\bar{\mathbf{y}}=\lim_{t\to\infty}\mathbf{y}_{t}\), for the mode estimate sequence \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) obtained by the MS algorithm (8). Then, one has that_ 1. _if_ \(\theta\in[0,\frac{1}{2})\) _then_ \((\mathbf{y}_{t})_{t\in\mathbb{N}}\) _converges in a finite number of iterations,_ 2. _if_ \(\theta=\frac{1}{2}\) _then_ \(\|\bar{\mathbf{y}}-\mathbf{y}_{t}\|=O(q^{t})\) _and_ \(f(\bar{\mathbf{y}})-f(\mathbf{y}_{t})=O(q^{2t})\) _with some_ \(q\in(0,1)\)_, or_ 3. _if_ \(\theta\in(\frac{1}{2},1)\) _then_ \(\|\bar{\mathbf{y}}-\mathbf{y}_{t}\|=O(t^{-\frac{2-\theta}{2\theta-1}})\) _and_ \(f(\bar{\mathbf{y}})-f(\mathbf{y}_{t})=O(t^{-\frac{2-\theta}{2\theta-1}})\)_._ It should be noted that the Epanechnikov kernel does not satisfy Assumption 3 as shown in Table I, so that Theorem 3 will be applicable to the MS algorithm with the Epanechnikov kernel only under the conditions where \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Kernel & \(\hat{K}\left(u\right)\propto\) & Asm. 2 & Asm. 
3 & Convergence & Rate & Worst-case bound of rate \\ \hline Biweight & \((\{1-u_{k}\}^{2}\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ – & \(\{(1-u)_{k}\}^{3/2}\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Triweight & \(\{(1-u)_{k}\}^{3}\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Tricube & \(\{(1-u^{3/2})_{k}\}^{3}\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & - & - \\ Cosine & \(\cos\left(\frac{u^{3/2}}{h^{2}}\right)1\left(u\leq 1\right)\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkcheck\) & - \\ Epanechnikov & \((\checkmark\)-u_{k}\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\check\) & \(\check\) & \(\check\) \\ Gaussian & \(e^{-u}\) & \(\checkmark\) & \(\checkmark\) & \(\check\) the assumptions (a1) and (a2) in Theorem 1 are satisfied. With the Epanechnikov kernel, the Lojasiewicz exponent at the mode of the KDE is typically \(\frac{1}{2}\), and if applying Theorem 3 is legitimate, it suggests the linear convergence via (b2), which is a looser evaluation than the finite-time convergence guaranteed by [13] and [14]. However, the convergence rate evaluation provided by Theorem 3 seems to be almost tight in other generic cases, as demonstrated in Figure 2, where the behaviors of the MS algorithm with the Gaussian kernel in the one-dimensional case are shown, with carefully chosen positions of data points so that the KDE has a degenerate Hessian at its mode. Theorem 3, as well as the experimental results summarized in Figure 2, strongly suggests that the Lojasiewicz exponent of the KDE bears essential information about the convergence rate of the MS algorithm. It is known, however, that the calculation of the Lojasiewicz exponent is difficult in general (see discussion of [48] for details). Even in such a circumstance, [49, 50, 51] provided bounds of the Lojasiewicz exponent for polynomial functions. On the ground of [50, Proposition 4.3], we can provide an upper bound of the Lojasiewicz exponent of the KDE with a piecewise polynomial kernel. **Theorem 4** (Bound of Lojasiewicz exponent).: _Assume that the kernel \(K\) is of class \(C^{1}\) and piecewise polynomial with maximum degree \(k\geq 2\). Then, the Lojasiewicz exponent \(\theta\) of the KDE \(f\) at any critical point \(\tilde{\mathbf{y}}\) is bounded from above as_ \[\theta\leq 1-\frac{1}{\max\{k(3k-4)^{d-1},2k(3k-3)^{d-2}\}}, \tag{20}\] _provided that \(f\) is not constant in any subdomain with a non-empty intersection with the \(\epsilon\)-neighborhood of \(\mathbf{y}\) for any \(\epsilon>0\)._ This bound of the Lojasiewicz exponent, together with Theorem 3 (b3), gives a worst-case bound of the convergence rate of the MS algorithm using a piecewise polynomial kernel. However, it should be noted that the bound provided in Theorem 4 is not tight in general and might be improved by future research. ## 6 Conclusion and Future Work We have shown that the mode estimate sequence generated by the MS algorithm using a \(C^{1}\) subanalytic kernel converges to a critical point of the KDE (Theorem 2). Our proof does neither presume that the KDE has a finite number of critical points or they are isolated, nor that its Hessian at a convergent point is non-degenerate, nor restriction on the size of the bandwidth or on the data dimension; it utilizes the Lojasiewicz property of the KDE. 
The class of kernels covered by this theorem includes several piecewise polynomial kernels, such as a biweight kernel which is optimal among non-negative kernels for the KDE-based estimation of a non-degenerate mode in terms of the asymptotic statistical efficiency [12, 18]. The convergence analysis results in this paper extend the existing ones for the Epanechnikov kernel [13, 14] and for analytic kernels [15]. Moreover, we not only provide a sufficient condition for the mode estimate sequence to achieve the linear convergence when the Hessian of the KDE at a convergent point is non-degenerate (Proposition 5), but also give a worst-case evaluation of the convergence rate (Theorems 3 and 4) in terms of the Lojasiewicz exponent of the KDE, applicable even when the Hessian is degenerate. The convergence theorems of the MS algorithm, including ours for \(C^{1}\) subanalytic kernels and the existing ones for the Epanechnikov kernel and analytic kernels, are also effective for the iteratively reweighted least squares algorithm, commonly used for various versions of robust M-type location estimation and regression [52, 53]. Moreover, these results can be applied to several generalized MS algorithms. The conditional MS algorithm, which is a representative estimation method for nonparametric modal regression [54, 55, 56, 57], can be regarded as a weighted version of the conventional MS algorithm with the weights determined by the values of the independent variable part of the data. The convergence theorems can be generalized to the weighted version of the MS algorithm derived for the weighted objective function, \(\frac{1}{nR^{2}}\sum_{i=1}^{n}w_{i}K(\frac{\pi-\pi_{i}}{R})\) with constant weights \(\{w_{i}\in(0,\infty)\}_{i\in[n]}\). Other instances of the generalized MS algorithms include an MS variant derived for the KDE \(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{R^{2}}K(\frac{\pi-\pi_{i}}{R_{i}})\) with datapoint-wise bandwidths \(\{h_{i}\in(0,\infty)\}_{i\in[n]}\)[24, 58], and the over-relaxation of the MS algorithm, \(\mathbf{y}_{t+1}=\mathbf{y}_{t}+\zeta\mathbf{m}(\mathbf{y}_{t})\) with a constant \(\zeta\in(0,2)\)[15]. Even under these generalizations, a guarantee of the convergence to a critical point and a convergence rate evaluation still hold as well. The subspace constrained MS algorithm [59, 60], another MS variant, is a method for estimating principal curves and principal surfaces as ridges of the KDE [61, 62]. It iterates an update rule that is expected to converge to a point on a ridge of the KDE instead of its critical point. The convergence property of the subspace constrained MS algorithm would be related to that of the MS algorithm but is still open, and analysis with the Lojasiewicz property might be useful for it. ## Acknowledgment This work was supported by Grant-in-Aid for JSPS Fellows, Number 20J23367. We would like to thank the authors of the article [25] for kindly drawing our attention to the errata [26] that accompanies that article.
2307.10455
A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset
In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment, however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier.
Zahra Gharaee, ZeMing Gong, Nicholas Pellegrino, Iuliia Zarubiieva, Joakim Bruslund Haurum, Scott C. Lowe, Jaclyn T. A. McKeown, Chris C. Y. Ho, Joschka McLeod, Yi-Yun C Wei, Jireh Agda, Sujeevan Ratnasingham, Dirk Steinke, Angel X. Chang, Graham W. Taylor, Paul Fieguth
2023-07-19T20:54:08Z
http://arxiv.org/abs/2307.10455v3
# A Step Towards Worldwide Biodiversity Assessment: ###### Abstract In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment, however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier. The code repository of the BIOSCAN-1M-Insect dataset is available at [https://github.com/zahrag/BIOSCAN-1M](https://github.com/zahrag/BIOSCAN-1M) ## 1 Introduction Global change is restructuring ecosystems on a planetary scale, creating an increasingly urgent need to track impacts on biodiversity. Such tracking is exceptionally challenging because life is highly diverse: the biosphere comprises more than 10 million multicellular species [41]. Until recently, this complexity has meant that an Earth observation system for biodiversity was inconceivable, however the increased power of DNA sequencing and the recognition that living organisms can be discriminated by short stretches of DNA have revealed a way forward, which has become the central focus of the International Barcode of Life (iBOL) Consortium. Discriminating organisms by DNA sequences [22; 6] can revolutionize our understanding of biodiversity, not only by providing a reliable species proxy for known and unknown species, but also by revealing their interactions and assessing their responses to changes in the ecosystem. This is essential to mitigate a looming mass extinction, where an _eighth of all species_ may become extinct by 2100 unless there is a significant change in human behaviour [10; 11]. The BIOSCAN project [2], lead by iBOL, has the following three main goals: (1) species discovery, (2) studying the interactions between species, and (3) tracking and modelling species dynamics over geography and time. To that end, BIOSCAN collects samples of multicellular life from around the world. Each sample is individually imaged, genetically sequenced and barcoded [22], and then classified by expert taxonomists. Of particular interest to the BIOSCAN project are _insects_, which constitute a great proportion of the Earth's species and many of which remain unknown. Indeed, it is estimated that 5.5 M insect species exist worldwide, of which only roughly one million have been identified [52; 23]. 
The rate of insect collection within the BIOSCAN project is increasing as the project progresses, such that 3 M insect specimens will be collected in 2023 and 10 M by 2028. Using high-resolution photographs, human taxonomists can accurately classify insects from within their domain of expertise. However, human annotation cannot scale to the volume of samples needed to measure and track global biodiversity. Moreover, many taxonomists with highly specialized knowledge are leaving the practice and won't be replaced. Thus, the use of artificial intelligence and machine learning to process visual and textual information collected by the BIOSCAN project is crucial to the success of a planet-scale observation system. Classification of the insect images to their taxonomic group ranking is especially useful in regions of the world where the facilities required to perform genetic barcoding are not available. Indeed, even beyond this project, there are opportunities for computer vision to transform entomology [25]. This article has two main contributions. The first is the publication of the BIOSCAN insect image dataset, containing approximately 1.1 M high-quality microscope images, each of which is annotated by the insect's taxonomic ranking and accompanied by its raw DNA sequences and Barcode Index Number (BIN) [46], an example of which is shown in Figure 1. Secondly, we designed and implemented a deep model, classifying BIOSCAN images into their taxonomic ranking, to serve as a baseline for future work utilizing this dataset. ## 2 Background and Related work This section provides background on taxonomic classification, the use of genetic barcoding, and several challenges in the field of machine learning associated with our dataset. ### Taxonomic Classification In biology, taxonomic classification is the study of hierarchically categorizing lifeforms based on shared characteristics. In particular, Linnean taxonomy [7; 20; 31] forms the basis for the modern (generally accepted) system of taxonomy, of which the main hierarchical ranks are Domain, Kingdom, Phylum, Class, Order, Family, Genus, and Species, as shown in Figure 3. All insect life is part of the class _Insecta_. Figure 1: Dataset records contain high-quality microscope images of insects and labels including the taxonomic classification, raw DNA sequences, and Barcode Index Number (BIN). Pictured here is a mosquito of the subfamily _Culicinae_, the most populous subfamily of mosquitoes with species found around the world. Conventionally, expert taxonomists classify organisms based on their appearance and behaviour [7]. However, this approach is susceptible to both misclassification and lacks consensus throughout the community of taxonomists, since it is difficult to prove with certainty that a given classification is absolutely _correct_. This shortcoming of traditional taxonomy has prompted the use of classification heuristics, based on fairly concrete evidence in the form of genetic codes, that are sensitive to species identity. ### Genetic Barcoding and Barcode Index Numbers DNA barcoding [22; 6] employs large-scale screening of one or a few reference genes for assigning unknown individuals to species, as well as increasing the discovery of new species [42]. Barcoding is commonly used in several fields including taxonomy, ecology, conservation biology, diet analysis and food safety [47; 51]. It is faster and more accurate than traditional methods, which rely on the judgment of experts [45]. 
Barcoding is based on the use of a short, standardized segment of mitochondrial DNA, typically a portion of the _mitochondrial cytochrome c oxidase subunit I (COI) gene_, which is nearly always unique for different species. Once the DNA sequence is obtained, it can be compared to a reference library of known sequences to identify the species. The concept of genetic barcoding can be taken a step further by mapping barcodes to clusters of organisms (characterized by their barcodes) with _highly_ similar genetic code, known as operational taxonomic units (OTU) [50; 5]. OTUs act as a proxy for species based on the high degree of genetic similarity exhibited by their members. To enable indexing, each OTU is assigned a uniform resource identifier (URI), commonly referred to as the Barcode Index Number (BIN) [46], which offers a unique representation such that genetically identical taxa will be assigned the same BIN, and each BIN is registered in the Barcode Of Life Data system (BOLD) [1]. BINs additionally provide an alternative to the use of Linnean names, offering a genetics-based classification for organisms. ### Machine Learning Challenges As will be demonstrated in Section 3, the dataset exhibits two key characteristics corresponding to open problems in the field of machine learning. **Class imbalance.** The degree to which the expected quantity of instances varies between classes is known as the class imbalance. In the context of a closed dataset, the class imbalance describes the disparity in size between classes [26; 29]. As we describe in Section 3, and Figure 2 the published dataset exhibits a long-tailed class distribution whereby the sizes of classes closely follow a power-law, meaning that there is a substantial class imbalance. This represents a challenge due to the disproportionate amounts of available training data for majority vs. minority classes. Figure 2: Class distribution and class imbalance in BIOSCAN-1M dataset. **Hierarchical classification.** Classification problems involving data with labels that are inherently hierarchical present a unique challenge in comparison to simpler "flat" classification problems [48]. The outputs of hierarchical classification algorithms are defined over tree-like class taxonomies, where the relationship between parent and child nodes is given by the asymmetric "is-a" relationship. A basic example of this is the relationship that "all dogs are canines, but not all canines are dogs", whereby "dogs" would be a child node of the parent node "canines", which itself may be a child of "mammals". The dataset published here perfectly matches this paradigm and may be used to study novel approaches for handling the hierarchical classification problem. Note that the baselines we adopt in this paper do not pursue a hierarchical strategy but instead classify to fixed levels of the taxonomy: Order and Family. Hierarchical strategies are a topic of present and future work. ### Biological Datasets Image-based insect classification [38] most often finds use in agricultural settings, where Integrated Pest Management (IPM) systems are used to identify and count harmful insect pests [32; 49]. In combination with this, holistic systems capable of also identifying plant diseases through computer vision are a popular area of research [15; 12; 39]. 
Recently, DNA sequences have been analyzed [27] using tools from the field of Natural Language Processing [43], and in particular, through the application of bidirectional encoder representations from transformers (BERT) [14]. Indeed, BERT-based models have been used to taxonomically classify genetic sequences [24; 40]. Other recent work has used DNA barcodes as "side information" to perform zero-shot species-level recognition from images, albeit at a much smaller scale than BIOSCAN-1M [4]. Perhaps the best known and largest biological dataset is iNaturalist [54], containing 859,000 images from over 5,000 different species of plants and animals, and containing 1,021 categories of insects with \(\sim\)120 k annotated images. Many insect-specific image datasets focus on insect as pests found in agricultural settings [58; 56; 55; 16; 59; 19; 36; 33]; the most prominent of which, the IP102 [58] dataset, contains roughly 75 k insect images, 19 k of which are annotated by agricultural experts, with over 102 classes of insects. In the space of plants, the PlantNet-300K [18] dataset has 306 k images and was constructed by sampling the larger PlantNet database [3]. Table 1 highlights key biological datasets across a variety of domains. ## 3 Dataset This section describes the information made available through the publication of the BIOSCAN-1M Insect dataset, and details the procedures which generated the information. ### BIOSCAN-1M Insect dataset resources The BIOSCAN-1M Insect dataset provides three main sources of information about insect specimens. Each sample in the dataset consists of a biological taxonomic annotation, DNA barcode sequence, \begin{table} \begin{tabular}{l l r r r} \hline \hline **Name** & **Authors / Citation** & **Domain** & **Images** & **Classes** \\ \hline iNaturalist & Van Horn _et al_. [54] & Plants \& Animals & 859 k & 5,089 \\ PlantNet-300K & Garcin _et al_. [18] & Plants & 306 k & 1,000 \\ Urban Trees & Wegner _et al_. [57] & Trees & 80 k & 18 \\ IP102 & Wu _et al_. [58] & Insect & 75 k & 102 \\ NA Birds & Van Horn _et al_. [53] & Birds & 48 k & 555 \\ LeafSnap & Kumar _et al_. [30] & Plants & 31 k & 184 \\ LSWTP & Liu _et al_. [36] & Insect & 28 k & 6 \\ Pest24 & Wang _et al_. [56] & Insect & 25 k & 24 \\ Flowers 102 & Nilsback _et al_. [44] & Flowers & 8 k & 102 \\ IP-FSL & Gomes _et al_. [19] & Insect & 7 k & 142 \\ **BIOSCAN-Insect** & _Ours_ & Insect & 1,128 k & 16\({}^{\dagger}\) \\ **BIOSCAN-Diptera** & _Ours_ & Insect & 891 k & 40* \\ \hline \hline \multicolumn{4}{l}{\(\dagger\)= Orders. * = Families.} \\ \end{tabular} \end{table} Table 1: Summary of biological fine-grained and long-tailed datasets. and a RGB image of a single specimen. In the following sections, this information is described in detail. #### 3.1.1 Biological taxonomy The BIOSCAN-1M Insect dataset specifies biological taxonomic rank following the Linnean taxonomy as described in Section 2.1. In addition to the main groups shown in Figure 3, the dataset also provides the Subfamily and Subspecies ranks. The Subfamily rank is an auxiliary (intermediate) taxonomic rank, the next below Family but more inclusive than Genus. Subspecies is a taxonomic rank below Species, and it is used for populations that live in different areas and vary in size, shape, or other physical characteristics, but that can successfully interbreed. Finally, we also provide "Name" to indicate the lowest (most specific) known rank label. 
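To make the rank structure concrete, the fragment below sketches how a label at a lower rank implies all of its higher-rank labels through the "is-a" relation discussed in Section 2.3. The taxa and parent links are a toy fragment chosen for illustration, not an excerpt of the dataset's label set.

```python
# Toy fragment of the taxonomic "is-a" hierarchy; the real BIOSCAN-1M labels
# span Class -> Order -> Family -> Subfamily -> ... down to Subspecies.
PARENT = {
    "Culicidae": "Diptera",     # family -> order
    "Syrphidae": "Diptera",
    "Apidae": "Hymenoptera",
    "Diptera": "Insecta",       # order -> class
    "Hymenoptera": "Insecta",
}

def ancestors(label):
    """Return the increasingly coarse labels implied by `label`."""
    chain = []
    while label in PARENT:
        label = PARENT[label]
        chain.append(label)
    return chain

def consistent(fine_pred, coarse_pred):
    """A fine prediction 'is-a' member of the coarse prediction iff the
    coarse label appears among the fine label's ancestors."""
    return coarse_pred in ancestors(fine_pred)

if __name__ == "__main__":
    print(ancestors("Culicidae"))                   # ['Diptera', 'Insecta']
    print(consistent("Culicidae", "Diptera"))       # True
    print(consistent("Culicidae", "Hymenoptera"))   # False
```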
Not all data samples have labels for all taxonomic ranks recognized in the BIOSCAN-1M Insect dataset. As an example, the Family group of the BIOSCAN-1M Insect dataset is indexed by 494 distinct families, however, there are 16,067 data samples that are not associated to any of these families, since they were not classified by human taxonomists. As a consequence, there are many data samples that are not classified into lower-level groups like Subfamily, Tribe, Genus, Species, or Subspecies. The lack of precise annotation at all ranks is one of the major challenges of the BIOSCAN-1M Insect dataset when performing classification tasks. #### 3.1.2 DNA Barcode and Indexing Section 2.2 described the concept of genetic barcoding and the generation of barcode index numbers (BINs). The BIOSCAN-1M Insect dataset contains genetic barcodes and BINs for all samples. This information is represented as the raw nucleotide barcode sequence, under the Nuccraw field, and the Barcode Index Number (BIN), denoted by \(\mathtt{uri}\). Independently, the field processid is a unique number assigned by BOLD to each record, and sampleid is an identifier given by the collector. #### 3.1.3 RGB images The BIOSCAN-1M Insect dataset offers a wealth of information through its collection of insect images. The dataset contains high-resolution (2880x2160 pixel) RGB images in JPEG format; Figure 4 displays a selection of images representing insects from different Orders, each labeled according to its taxonomy. We have released multiple packages of the BIOSCAN-1M Insect dataset aimed at different purposes. These packages are organized into 113 chunks, each containing 10 k images. The packages include: (1) Original JPEG Images stored in 113 zip files (2.3TB), (2) Cropped images organized into 113 zip files (151GB), (3) Resized original images which have a size of 256 px on their smaller side (26GB), and (4) Resized cropped images having a size of 256 px on their smaller side (7GB). Additionally, for computational convenience, we have also provided the dataset in HDF5 archive format for both the resized original and cropped images. ### BIOSCAN-1M Insect dataset generation The BIOSCAN-1M Insect dataset consists of specimens mostly collected from three countries -- Costa Rica, Canada, and South Africa -- using Malaise traps. The RGB images of the organisms are taken by a Keyence VHX-7000 microscope. Images are organized by workflow units: 96-well microplates of which 96 are used in a single sequencing run (9,120 samples at a time). The DNA barcodes of the organisms are generated by using a high-throughput approach utilizing the Pacific Figure 3: Biological taxonomic ranking and classification. Taxonomic ranks are shown in the top row, with the classification (i.e., labels) for the Western honey bee shown below. Biosystems Sequel platform, which employs Single-molecule, real-time (SMRT) sequencing to generate long-read length DNA and cDNA. The taxonomic classifications (labels) of the dataset are created by matching the generated barcodes to a reference library on the Barcode of Life Data System (BOLD) at the Centre for Biodiversity Genomics in Canada. BOLD is a platform to store and analyze data using four modules: (1) a data portal, (2) an educational portal, (3) a registry of BINs (putative species), and (4) a data collection and analysis workbench. We provide a comprehensive metadata file alongside the RGB images, which includes taxonomic annotations, DNA barcode sequences, and data sample indexes and labels. 
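As a rough illustration of how the metadata file can be inspected, the snippet below loads it with pandas and tallies per-order sample counts and missing family labels. The file name, the tab-separated format, and the lower-case column names are assumptions inferred from the fields quoted above and should be checked against the released package.

```python
# Minimal sketch of inspecting the metadata file.  The filename, delimiter and
# column names (processid, sampleid, uri, nucraw, order, family) are
# assumptions based on the fields described in the text.
import pandas as pd

df = pd.read_csv("BIOSCAN_Insect_Dataset_metadata.tsv", sep="\t")  # hypothetical filename

# Per-order sample counts: the long-tailed class imbalance discussed in Sec. 2.3
print(df["order"].value_counts().head(10))

# Records lacking a Family label (cf. the 16,067 unlabelled samples mentioned above)
print("samples without a family label:", df["family"].isna().sum())

# Each record also carries its identifiers, DNA barcode and BIN
print(df[["processid", "sampleid", "uri", "nucraw"]].head())
```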
The metadata file also contains image names and IDs to locate the corresponding images within the dataset packages. Additionally, it identifies the images associated with the training, validation, and test splits. ## 4 Experiments We curated three subsets of different sizes from the BIOSCAN-1M Insect dataset and conducted two sets of classification experiments, for a total of six datasets. Three subsets, named Small, Medium, and Large, consist of approximately 50 k, 200 k, and 1 M data samples, respectively. The first set Figure 4: Examples of insect images from 16 orders of the BIOSCAN-Insect dataset. The numbers below each image identify the number of images in each class, and clearly illustrate the degree of class imbalance in the BIOSCAN-Insect dataset. “Siphonaptera”, “Strepsiptera” and “Zoraptera” are removed from classification experiments due to an insufficient number of samples. of experiments focuses on classifying insect images to their taxonomic order. The second set of experiments delves one level deeper, classifying samples of the order Diptera into specific families. ### Subset sampling and split mechanism To create subsets of the BIOSCAN-1M Insect dataset, we followed a two-step process. First, we sampled a subset specifically from the Diptera order, which consisted of the 40 families with the highest number of members, leading to the BIOSCAN-Diptera dataset. Next, we split the BIOSCAN-Diptera dataset into train, validation, and test sets. Finally, we constructed the train, validation, and test sets of the BIOSCAN-1M Insect dataset based on the split sets of the BIOSCAN-Diptera dataset. This approach ensured consistency throughout all our experiments. The Small and Medium subsets are generated by sampling 50k and 200k data samples, respectively, from both the train, validation, and test sets of the BIOSCAN-1M Insect and BIOSCAN-Diptera datasets. In all our classification experiments, we used class-based stratified sampling to split the dataset into train, validation and test sets. To this end, 70% of the samples of each class are randomly selected as training, 10% as validation, and 20% as test samples, as shown in Table 2. The extreme class imbalances, which are an inherent characteristic of the BIOSCAN-1M dataset, are addressed to some extent by having all classes represented in the train, validation and test sets. Classes with no samples for either split set are omitted. In the insect order-level classification (Figure 4), we have sufficient data samples for 16 out of 19 orders in the train, validation, and test sets. For the Diptera family-level classification, we focus on the 40 most populous families within Diptera. ### Data preprocessing To improve computational efficiency, we crop and resize the images to be 256 px on the smaller dimension. Preliminary experiments comparing original images with images that are cropped show that cropping can help model learning to converge more rapidly and lead to slightly better performance. Reducing the resolution to 256 px helps to reduce the size of the large dataset from 2.3 TB down to 26 GB for the original uncropped images, and from 151 GB down to 7 GB for cropped images. We choose to run experiments on the cropped and resized images due to the small size which allows for efficient data loading from disk. The BIOSCAN-1M image datasets have insects with varying size, pose, color and shape. Due to these variations, cropping is not a simple task. 
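A minimal sketch of the class-based stratified 70/10/20 split described above under "Subset sampling and split mechanism" is given below, using scikit-learn on synthetic labels. The released dataset already ships with fixed train/validation/test assignments, so this only illustrates the mechanism.

```python
# Class-stratified 70/10/20 split on synthetic labels (illustration only).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = rng.choice(["Diptera", "Hymenoptera", "Coleoptera"],
                    size=1000, p=[0.8, 0.15, 0.05])
indices = np.arange(len(labels))

# 70% train vs. 30% held out, stratified per class
train_idx, rest_idx = train_test_split(
    indices, train_size=0.7, stratify=labels, random_state=0)
# Split the held-out 30% into 10% validation and 20% test (1/3 vs 2/3 of the remainder)
val_idx, test_idx = train_test_split(
    rest_idx, train_size=1/3, stratify=labels[rest_idx], random_state=0)

for name, idx in [("train", train_idx), ("val", val_idx), ("test", test_idx)]:
    print(name, len(idx), np.unique(labels[idx], return_counts=True))
```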
We develop our cropping tool by fine-tuning a DETR [9] model with ResNet-50 [21] backbone (pretrained on MSCOCO [34]) on a small set of 2,000 insect images annotated using the Toronto Annotation Tool Suite [28]. In DETR, the CNN-based feature extractor extracts a set of image features that are fed into a transformer-based encoder-detector. The detector takes a set of learned positional embeddings as object queries and uses them to attend to the encoder outputs. Each of the output decoder embeddings is then passed to a shared FFN which predicts whether there is an "insect" or "no object" and regresses the bounding box. The DETR model is trained for 10 epochs with the AdamW optimizer with learning rate of 0.0001, weight decay of 0.0001 and a batch size of 8. To crop the image, we apply our fine-tuned DETR model and take the predicted bounding box with the highest confidence score. The finalized cropping is determined as the predicted bounding box, extended equally in width and height by \(0.4\) of the maximum dimension. ### Classification model To run classification experiments, we fine-tuned two different pre-trained models to extract deep visual features of insects from their RGB images. Our pre-trained models are ResNet-50 [21] and a transformer based model, ViT-Patch-16-224 [17]. During training, we take random 224x224 crops \begin{table} \begin{tabular}{l r r r r r} \hline \hline **Dataset** & **Total** & **Train** & **Validation** & **Test** & **Classes** \\ \hline BIOSCAN-1M-Insect & 1,128,313 & 789,813 & 112,835 & 225,660 & 16 \\ BIOSCAN-Diptera & 891,338 & 623,937 & 89,135 & 178,266 & 40 \\ BIOSCAN-Insect/Diptera Medium & 200,000 & 140,000 & 20,000 & 40,000 & 16/40 \\ BIOSCAN-Insect/Diptera Small & 50,000 & 35,000 & 5000 & 10,000 & 16/40 \\ \hline \hline \end{tabular} \end{table} Table 2: The total number of samples used in the BIOSCAN-Insect dataset and its five subsets: The entries display the number of data samples in the train, validation, and test sets, as well as the number of classes for Order-level (16 orders) and Diptera family-level (40 families) experiments. from the image as input, while during validation we take the center crop. The features representing insect images are then connected to a fully connected layer that maps the deep representation space to the insect class labels. To train our model, we used two loss functions, the cross-entropy as a baseline and the Focal loss, which is more suitable for datasets having class imbalances [35; 8; 13]. ## 5 Results We created six datasets from BIOSCAN-Insect dataset and for each dataset we performed four classification experiments using two different backbone models and two loss functions. Detailed hyperparameter settings of these 24 experiments are shown in Table 3. For Small and Medium datasets, the models were trained for 100 epochs; for the Large dataset, the models were trained for fewer epochs considering the convergence was met in the validation set. We evaluate the performance of our classification models using top-K accuracy, which takes the top-K predicted classes for each sample and if the ground-truth label is among the top-K predictions, then it is counted as a correct classification. We report test results of the best model from validation performance for the micro, class-averaged macro top-K accuracy at \(K\in[1,3,5,10]\) as well as average top-5 [18] accuracy as shown in Tables 4 and 5. Figure 5 shows the per-class top-1 test accuracy for the Order and Family classification of the Large dataset. 
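Since the tables that follow report both micro and class-averaged (macro) top-K accuracy, a small NumPy sketch of the two metrics as described above is included here; the inputs are random placeholders.

```python
# Micro and class-averaged (macro) top-K accuracy, as reported in Tables 4 and 5.
import numpy as np

def topk_accuracy(logits, targets, k=5):
    """Micro top-K: fraction of samples whose true class is among the K highest scores."""
    topk = np.argsort(-logits, axis=1)[:, :k]
    hits = (topk == targets[:, None]).any(axis=1)
    return hits.mean(), hits

def macro_topk_accuracy(logits, targets, k=5):
    """Class-averaged top-K: mean of per-class top-K accuracies, so that
    rare classes weigh as much as populous ones."""
    _, hits = topk_accuracy(logits, targets, k)
    per_class = [hits[targets == c].mean() for c in np.unique(targets)]
    return float(np.mean(per_class))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(32, 16))        # 16 classes, as in the Order-level task
    targets = rng.integers(0, 16, size=32)
    print(topk_accuracy(logits, targets, k=5)[0],
          macro_topk_accuracy(logits, targets, k=5))
```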
Accuracy is quite high, above 90%, for most classes, decreasing mainly for classes with little training data. Test results shown in Tables 4 and 5 are for the best model, out of the four models trained for 6 datasets, based on the validation performance. For the Small dataset, Vit-P16-224 with Focal loss produced the best validation results, for the Medium dataset, Vit-P16-224 with Focal loss for Order classification and Vit-P16-224 with Cross Entropy for the Family classification experiments were best, and finally, for the Large dataset best results were produced using ResNet-50 with Cross Entropy and ViT-P16-224 with Cross Entropy for Order and Family experiments, respectively. ## 6 Conclusion We have described a set of six novel BIOSCAN datasets, on which we conducted image-based classification experiments using the taxonomic annotations of the insects. Looking ahead, iBOL's ongoing efforts will lead to further advancements in several aspects. The rate of insect sample \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{5}{c}{**Insect-Order**} & \multicolumn{5}{c}{**Diptera-Family**} \\ \cline{3-10} **Metric** & **Dataset** & **Top-1** & **Top-3** & **Top-5** & **Top-10** & **Top-1** & **Top-3** & **Top-5** & **Top-10** \\ \hline Micro Top-K & Small & 98.10 & 99.50 & 99.79 & 99.93 & 93.65 & 97.35 & 98.06 & 98.65 \\ & Medium & 99.10 & 99.77 & 99.89 & 99.99 & 73.50 & 80.00 & 83.76 & 89.51 \\ & Large & 99.68 & 99.96 & 99.98 & 99.99 & 97.48 & 99.01 & 99.43 & 99.78 \\ Macro Top-K & Small & 85.99 & 91.74 & 99.33 & 99.82 & 91.95 & 96.21 & 97.19 & 98.04 \\ & Medium & 83.87 & 96.50 & 97.17 & 99.59 & 83.83 & 90.34 & 92.25 & 94.98 \\ & Large & 80.85 & 88.87 & 91.00 & 93.66 & 89.67 & 95.77 & 96.63 & 97.71 \\ \hline \hline \end{tabular} \end{table} Table 4: Top-K accuracy and class-averaged macro top-K accuracy based on the test sets of Insect-Order and Diptera-Family using the Small, Medium and Large datasets. \begin{table} \begin{tabular}{l c} \hline \hline **Parameters** & **Settings** \\ \hline Model & R50/ViT-P16-224 \\ Loss & Cross-Entropy/Focal \\ Optimizer & SGD \\ Weight Decay (\(\mu\)) & 0.0001 \\ Learning rate & 0.001 \\ Momentum & 0.9 \\ K & [1; 3; 5; 10] \\ group-level & order/family \\ \hline \hline \end{tabular} \begin{tabular}{l c} \hline \hline **Parameters** & **Settings** \\ \hline Batch-Size & 32 \\ Epoch & 100 \\ Num-Workers & 4 \\ Image-Size (Train/Val) & 256 \\ Crop-Size (Train) & 224 \\ Rand-Horizontal-Flip (Train) & Yes \\ Centre-Crop (Val) & 224 \\ Dataset size & L/M/S \\ \hline \hline \end{tabular} \end{table} Table 3: Detailed hyperparameter settings of the experiments. collection is already increasing, resulting in a dataset that is not only larger in terms of the number of records but also more comprehensive, with additional taxa at lower taxonomic levels such as genera and species. Moreover, the dataset will expand to encompass diverse life forms beyond insects. Thus, while the current dataset is already the largest publicly available insect image dataset, it represents just the beginning of what lies ahead. ## Acknowledgement We acknowledge the support of the Government of Canada's New Frontiers in Research Fund (NFRF), [17]. This research was enabled in part by support provided by Calcul Quebec (calculquebec.ca) and the Digital Research Alliance of Canada (allianeccan.ca). 
Data collection was enabled by funds from the Walder Foundation, a New Frontiers in Research Fund (NFRF) Transformation grant, a Canada Foundation for Innovation's (CFI) Major Science Initiatives (MSI) Fund and CFREF funds to the Food from Thought program at the University of Guelph. The authors also wish to acknowledge the team at the Centre for Biodiversity Genomics responsible for preparing, imaging, and sequencing specimens used for this study, as well as Utku Cicek for their help with the project. Figure 5: Per-class top-1 test accuracy of the Order and Family classifications of the Large dataset. The classes are listed in a descending manner with respect to their number of split samples. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{4}{c}{**Insect-Order**} & \multicolumn{4}{c}{**Diptera-Family**} \\ \cline{2-7} & **Mic-Top-5** & **Mac-Top-5** & **Avg-Top-5** & **Mic-Top-5** & **Mac-Top-5** & **Avg-Top-5** \\ \hline Small & 99.79 & 99.33 & 99.96 & 98.06 & 97.19 & 99.03 \\ Medium & 99.89 & 97.17 & 99.99 & 83.76 & 92.25 & 81.04 \\ Large & 99.98 & 91.00 & 99.99 & 99.43 & 96.63 & 99.15 \\ \hline \hline \end{tabular} \end{table} Table 5: Micro-Top-5, Macro-Top-5 and Avg-Top-5 [18] accuracy of the Insect Order and Diptera Family classification for the Small, Medium and Large datasets.
2306.08133
Large-scale Language Model Rescoring on Long-form Data
In this work, we study the impact of Large-scale Language Models (LLM) on Automated Speech Recognition (ASR) of YouTube videos, which we use as a source for long-form ASR. We demonstrate up to 8\% relative reduction in Word Error Rate (WER) on US English (en-us) and code-switched Indian English (en-in) long-form ASR test sets and a reduction of up to 30\% relative on Salient Term Error Rate (STER) over a strong first-pass baseline that uses a maximum-entropy based language model. Improved lattice processing that results in a lattice with a proper (non-tree) digraph topology and carrying context from the 1-best hypothesis of the previous segment(s) results in significant wins in rescoring with LLMs. We also find that the gains in performance from the combination of LLMs trained on vast quantities of available data (such as C4) and conventional neural LMs are additive and significantly outperform a strong first-pass baseline with a maximum entropy LM. Copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Tongzhou Chen, Cyril Allauzen, Yinghui Huang, Daniel Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley
2023-06-13T20:54:12Z
http://arxiv.org/abs/2306.08133v2
# Large-scale language model rescoring on long-form data ###### Abstract In this work, we study the impact of Large-scale Language Models (LLM) on Automated Speech Recognition (ASR) of YouTube videos, which we use as a source for long-form ASR. We demonstrate up to 8% relative reduction in Word Error Eate (WER) on US English (en-us) and code-switched Indian English (en-in) long-form ASR test sets and a reduction of up to 30% relative on Salient Term Error Rate (STER) over a strong first-pass baseline that uses a maximum-entropy based language model. Improved lattice processing that results in a lattice with a proper (non-tree) digraph topology and carrying context from the 1-best hypothesis of the previous segment(s) results in significant wins in rescoring with LLMs. We also find that the gains in performance from the combination of LLMs trained on vast quantities of available data (such as C4 [1]) and conventional neural LMs is additive and significantly outperforms a strong first-pass baseline with a maximum entropy LM. Tongzhou Chen\({}^{*}\), Cyril Allauzen\({}^{*}\), Yinghui Huang, Daniel Park, David Rybach, W. Ronny Huang, Rodrigo Cabrera, Kartik Audhkhasi, Bhuvana Ramabhadran, Pedro J. Moreno, Michael Riley Google LLC, USA + Footnote †: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. ## 1 Introduction Large-scale language models (LLM), such as as BERT [2], T5 [3], GPT-3 [4], and PaLM [5], have proven to be successful in natural language processing (NLP) tasks such as, Question Answering, Text Summarization, and other Zero Shot learning applications. These models are trained on vast amounts of text data and have yielded state-of-the-art results across several NLP and search tasks. However, there is very limited work on the use of these LLMs in Automated Speech Recognition (ASR). Recent research has focused on fine-tuning GPT, GPT-2 and BERT models with small amounts of in-domain data showing that they tend to outperform the performance of conventional Neural LMs such as transformer LMs trained on the same data [6]. The authors in [7] propose the use of pseudo-likelihood scores and show that rescoring N-best hypotheses from an ASR model can yield significant wins on Librispeuch but there is always a trade-off between in-domain modeling and fine-tuning a model trained with far more text. An alternate approach to directly predict the oracle hypothesis was originally proposed in [8] and used in [9] to re-rank the N-best hypothesis using scores from BERT. In this paper, we scale the use of LLMs to ASR on YouTube videos, which we use as a source for long-form ASR. We show the importance of lattice quality and contextual augmentation for long-form ASR and compare the performance of LLMs with other neural and maximum entropy based LMs using two metrics: Word Error Rate (WER) and _Salient Term Error Rate_ (STER). ## 2 Related Work Several methods to incorporate LMs in end-to-end sequence models have been proposed in the literature. Decoding algorithms [10, 11, 12] employ fusion strategies, such as _shallow_[13], _cold_[14], _deep_[15] and _component_[16] fusion. 
However, the wins from incorporating LMs in this fashion have been relatively small for large scale ASR [17]. The Hybrid Autoregressive Transducer (HAT) model introduced in [18] for encoder-decoder models, allowed for the computation of an internal language model component that can be quantified and appropriately interpolated with an external language model (ELM). The density ratio method proposed in [19] offers a theoretically grounded solution to leverage an external language model while separating out the _acoustic likelihood_ score and the internal LM score on the source domain. This modular framework lends itself to principled approaches of LM rescoring and adaptation thus overcoming some of the shortcomings of the aforementioned LM integration strategies [18, 20]. ASR systems perform best when the training data is matched to the target domain. However, end-to-end ASR models are trained on large quantities of available speech data and the LM is trained on the limited text data available in the target domain, thus enabling cross-domain transfer. Alternatively, Large LMs are trained on vast quantities of text and subsequently fine tuned on target domain text. In both scenarios, finding an optimal combination of the end-to-end ASR model, with its implicitly trained internal LM and the external LM, is critical for best performance in the target domain. Neural Oracle Search leverages HAT factorization for LM rescoring with an external LM to directly pick the oracle hypothesis [8], while others have explored on-device neural and biasing LM integration [21] and compared rescoring and deliberation [22], demonstrating wins across all tasks. In this paper, we study the impact of LLMs within the HAT framework for long-form ASR. Using data from two different sources, US English (en-us) and Indian English (en-in) which is heavily code-switched with Hindi and other Indian languages, we show that wins of up to 8% relative can be obtained in long-form ASR while achieving a reduction of up to 30% relative on Salient Term Error Rate (STER) over a strong first-pass baseline that uses a maximum-entropy based language model. We also demonstrate the importance of improved lattice quality that results in a lattice with a proper (non-tree) digraph topology and carrying context from the 1-best hypothesis of the previous segment(s) obtain best performance with LLMs. We find that both Text-to-Text Transfer Transformer (T5) [3] and its multilingual counterpart, MT5 [23] are complementary to conventional neural LMs and outperform a strong first-pass baseline that utilizes a maximum entropy LM. ## 3 Large Language Models Several LLMs have been proposed to date with significant improvements on varied NLP tasks. In this work, we mainly focus on two LLMs, T5 and PaLM, ranging in size from 3B to 540B parameters, summarized in Table 1. The conventional neural LM used for comparisons is a conformer LM described in Section 4.4 and comprising of 70M parameters. ### T5 and PaLM Built on an encoder-decoder transformer-based architecture, T5 optimizes the log-likelihood of the target text given input to learn a mapping from the input to target. While T5 is pretrained on the span corruption task, LM and Pre-fix LM are two fine-tuning tasks used for language modeling. The LM task predicts the target sequence with null context input while the prefix LM task randomly splits the text into two halves, using the first half as the input to predict the second half. 
These fine-tuning tasks enable direct computation of log-likelihood of the target text, instead of the estimation of a pseudo log-likelihood as proposed initially in [2] for masked LMs. Thus, given a text sequence \(Y\), similar to the LM task, we can compute its T5 score \(S_{\text{TS}}(Y)\) by using an empty string \(\epsilon\) as input and the text sequence \(Y\) as target, with the following equation: \[S_{\text{TS}}(Y)=\log P_{\text{TS}}(Y|\epsilon;\Theta_{\text{TS}}). \tag{1}\] For longer sequences, we can make better use of the previous context and compute the score in a semi-autoregressive fashion. Therefore, \(Y\) can be split into multiple segments \(Y_{1}\dots Y_{S}\) and the log-likelihood of the current segment can be computed using the previous segment's context: \[S_{\text{TS}}(Y)=\sum_{s=1}^{S}\log P_{\text{TS}}(Y_{s}|Y_{s-1};\Theta_{\text {TS}}), \tag{2}\] where \(Y_{0}\) being \(\epsilon\). PaLM is an autoregressive LM with a decoder-only architecture. Hence the score of a text sequence can be computed straightforwardly. ### Integration with ASR Models In this work, we use a first-pass model based on the conformer architecture [24] that uses HAT factorization [18]. Not only does HAT model provide a posterior score \(S_{\text{HAT}}(Y|X)\), but it also estimates the internal LM (ILM) score. As mentioned in Section 2, when interpolating an external LM during rescoring or shallow fusion, estimating and subtracting the internal LM score yields wins. Thus, inference search maximizes: \[S(Y,X)=S_{\text{HAT}}(Y|X)-\mu S_{\text{ILM}}(Y)+\nu S_{\text{ ELM}}(Y), \tag{3}\] where \(\mu\) and \(\nu\) are tunable hyperparameters. ## 4 Experiments ### Data We conduct experiments with data from two language locales, en-us and en-in. The multi-domain ASR model used in this paper is trained on several thousand hours of long-form utterances derived from YouTube videos[25] and short-form utterances that are anonymized, hand-transcribed and are representative of Google's Voice Search traffic [26]. The test sets contain long-form utterances derived from 30-minute-long YouTube videos. We set aside a subset containing 5% of the test utterances as the development test to tune the hyperparameters. The pre-training corpus used to train T5 is the publicly available, Colossal Clean Crawled Corpus(C4), while MT5 is pre-trained on the multilingual variant, MC4 [23]. To address code-switching seen in en-in [27], text data consisting of Indian English and Hindi Wikipedia and CCNet [28] collectively referred to as WEBDOC, is used. This corpus consists of 170M sentences yielding 2.9B word tokens. We use 90% data for training and 10% data for validation. All data in mixed writing systems is transliterated to Latin to be consistent with ASR model training data used for en-in. ### Training Large Language Models We experimented with T5 and MT5 models of sizes XL and XXL. Both T5 and MT5 models were pre-trained for 1M steps using the span corruption task and then fine-tuned for 100K steps using the prefix LM task on C4/MC4. To address the heavy code-switching prevalent in en-in and the lack of Hindi data in MC4 corpus, we fine-tune MTS on the LM task for an additional 300k steps on the WEBDOC corpus. PaLM models with three different sizes were trained as described in [5] for the en-us task. The corpus used to train these models consisted of filtered web pages, books, Wikipedia, news articles, source code, and social media conversations. We use these pre-trained models as-is with no additional fine-tuning. 
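Purely to illustrate the semi-autoregressive scoring of Eq. (2), the sketch below scores a sequence of segments with a small public Hugging Face T5 checkpoint, conditioning each segment on the previous one; the empty-string context for the first segment recovers Eq. (1). The "t5-small" checkpoint is an assumption of this sketch and is not the model used in the experiments above.

```python
# Illustrative implementation of Eq. (2): score a hypothesis segment by segment,
# conditioning each segment on the previous one.  Uses a public checkpoint only
# to keep the sketch runnable.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

def segment_logprob(context: str, segment: str) -> float:
    """log P_T5(segment | context); context = "" corresponds to Eq. (1)."""
    enc = tok(context, return_tensors="pt").input_ids
    tgt = tok(segment, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids=enc, labels=tgt).loss   # mean NLL per target token
    return -loss.item() * tgt.shape[1]                 # sum over target tokens

def t5_score(segments) -> float:
    """S_T5(Y) = sum_s log P_T5(Y_s | Y_{s-1}), with Y_0 the empty string."""
    score, prev = 0.0, ""
    for seg in segments:
        score += segment_logprob(prev, seg)
        prev = seg
    return score

print(t5_score(["the cat sat on the mat", "then it fell asleep"]))
```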
### ASR Models We use a first-pass ASR model based on the conformer architecture [24] that uses HAT factorization [18]. The encoder consists of a convolution subsampling layer and 17-layers of conformer blocks. A conformer block is composed of a feed-forward module, multi-headed self-attention with relative positional encoding module, a convolution and a final feed-forward module, stacked together. The configuration used in this work has an encoder dimension of 512, 8 attention heads, a convolution kernel size of 32 and a decoder dimension of 640 [24]. The decoder at label \(y_{u}\) is only conditioned on the previous two labels \(y_{u-1}\) and \(y_{u-2}\), with their embeddings concatenated and projected [29]. The models are trained on 80-dimensional log-mel filter bank coefficients and predict word-piece targets (4096 for en-us and 8192 for en-in). The choice of these parameters was determined by sweeping for best performance within the expected model size. \begin{table} \begin{tabular}{l l|c c|c c|c c} \hline \multicolumn{1}{c|}{**Conventional**} & \multicolumn{1}{c|}{**Size**} & \multicolumn{1}{c|}{**T5**[3]} & \multicolumn{1}{c|}{**Size**} & \multicolumn{1}{c|}{**MT5**[23]} & \multicolumn{1}{c|}{**Size**} & \multicolumn{1}{c}{**PaLM**[5]} & \multicolumn{1}{c}{**Size**} \\ \multicolumn{1}{c|}{**LMs**} & & & & & & & \\ \hline Neural LM & 70M & S & 60M & & & S & 8B \\ MaxEnt & 4.5B & M & 220M & & & M & 62B \\ & & L & 770M & & & L & 540B \\ & & XL & 3B & XL & 3.7B & & \\ & & XXL & 11B & XXL & 13B & & \\ \hline \end{tabular} \end{table} Table 1: Comparison of LM sizes. ### Neural and Maximum-Entropy based Language Models In order to better understand the value of LLMs in ASR, we trained two state-of-the-art LMs, a conventional neural LM and a Maximum Entropy based LM. The conventional Neural LM is a small, unidirectional, conformer LM (CLM) with 70M parameters, originally designed for on-device rescoring [21]. It consists of 12 causal conformer layers, each with a dimension of 384, a feedforward layer dimension of 2048, a convolution kernel of size 15. We use 4-headed self attention with a left context size 31. The model is trained on the same data as the LLMs to predict the same word-piece targets as the first-pass ASR model. Thus, for en-us, we trained it on C4 and for en-in, we trained it on WEBDOC to match the fine-tuning corpus of MT5. The Maximum Entropy based (MaxEnt) LM [30, 31] is a log linear model based on N-gram and skip-gram word contexts, with a size of 4.5B parameters and is comparable to the size of the T5/MT5 XL models. It is also trained on the same data as the conventional Neural LM. ### Decoding and Rescoring Decoding is performed by a time-synchronous beam search using the breadth-search expansion strategy [32] where the number of active hypotheses at each frame is bounded by a beam size \(k\). A VAD-based segmenter [33] runs in parallel to the beam-search decoder. When the decoder receives an end-of-segment signal from the segmenter, a segmenter lattice is generated from the currently active hypotheses. If present, a rescoring LM is applied to this segment lattice, with the 1-best hypotheses from previous segments optionally provided as context. Only the best hypothesis in the lattice (eventually after rescoring) is carried forward in the beam-search for the next segment. The final utterance lattice is obtained by concatenating all the segment lattices. 
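Rescoring the full segment lattices is beyond a short illustration, but the score combination of Eq. (3) together with the carry-over of the previous segment's 1-best hypothesis can be sketched on per-segment N-best lists, as below. The hypothesis tuples, the `elm_score` callable (for instance the T5 scorer sketched earlier), and the interpolation weights are all placeholders.

```python
# Segment-by-segment N-best rescoring with S = S_HAT - mu*S_ILM + nu*S_ELM (Eq. 3),
# carrying the previous segment's 1-best hypothesis as context for the external LM.
from typing import Callable, List, Tuple

Hyp = Tuple[str, float, float]  # (text, S_HAT posterior score, S_ILM internal-LM score)

def rescore_segments(
    segments: List[List[Hyp]],
    elm_score: Callable[[str, str], float],  # elm_score(context, text) -> log-prob
    mu: float = 0.3,
    nu: float = 0.5,
) -> List[str]:
    context, output = "", []
    for nbest in segments:
        scored = [
            (s_hat - mu * s_ilm + nu * elm_score(context, text), text)
            for text, s_hat, s_ilm in nbest
        ]
        best = max(scored)[1]
        output.append(best)
        context = best  # only the 1-best is carried forward, as in Sec. 4.5
    return output

# Toy usage with a trivial length-based stand-in for the external LM
print(rescore_segments(
    [[("hello word", -1.0, -2.0), ("hello world", -1.1, -2.1)]],
    elm_score=lambda ctx, t: -0.1 * len(t.split()),
))
```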
When using an ASR model with unlimited label context, each hypothesis within the beam encodes the full history from the beginning of the utterance. Hence, the segment lattice is a trie with a total number of paths (e.g. hypotheses) bounded by the beam size \(k\). When using an ASR model where the label context is bound by \(n\)[34], beam-search hypotheses sharing the same label context of length \(n\) will correspond to the same state in the segment lattice. This results in lattice with a proper (non-tree) digraph topology where the number of paths can grow up to exponentially in the number of states. This was shown to lead to a significant improvement in lattice quality: lattice diversity improvement and oracle WER reduction [34]. The ASR models described in section 4.3 used limited label context with \(n=2\). However when combining these models with the conformer LMs from section 4.4 during the beam search using HAT fusion results in dramatic increase of the label context limit making the resulting combined model to effectively have unlimited label context. ## 5 Results ### Lattice Quality The success of a rescoring approach crucially depends on the quality of the hypotheses of the first-pass beam-search decoder. To assess the lattice quality, we computed metrics such as the \(N\)-best oracle WER and the average number of paths/hypotheses per segment for our baseline systems on the en-us and en-in development sets as reported in Table 2. As the contribution to first-pass model's posterior and internal LM at label \(y_{u}\) depends only on the previous two labels, our baseline systems can leverage the state merging benefits of limited context models described in Section 4.5 as demonstrated by the relatively low oracle WER and high number of paths per segments. Lattice quality can be improved by improving first-pass modeling by integrating a neural LM in the beam-search decoding using HAT fusion. Table 2 shows this results in a significant improvement in 1-best WER. However, this causes the loss of the state merging benefits and results in an increase of oracle WER in en-us. However, this is still an significant improvement compared to disabling state merging in the baseline systems. ### Comparison of LMs In this Section, we consider the impact of LM integration on the en-us task. Table 3 demonstrates the value of providing longer context to Large LMs. Each row contains the result of rescoring with the T5 XXL model when carrying over contexts of different lengths, i.e., of carrying over the 1-best hypotheses from different number of previous segments. We observe that carrying over previous context outperforms no context. However, longer contexts do not seem to provide additional wins. The rest of this paper thus uses contextual information from just the previous segment. Table 4 presents the rescoring and fusion results on the en-us development and evaluation test sets for various LMs. First we observe that a small Neural LM edges out over the performance of a Maxent LM. Moreover, though the T5 S model, whose size is slightly smaller than the NLM, was slightly behind NLM, increasing the size of T5 leads to better results. It is also interesting to note that the NLM and T5 XXL models are complementary, as fusion can give a better 1-best WER. In addition, we experimented with more enormous PaLM LMs and they are able to brings the power of larger capacity and large amounts of training text, yielding better results than T5. 
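For reference, the N-best oracle WER reported in Table 2 can be computed as sketched below: the hypothesis with the fewest word-level edit errors is selected per utterance before normalizing by the total number of reference words. A plain Levenshtein distance is used here; the production scoring pipeline may differ.

```python
# N-best oracle WER: pick the lowest-error hypothesis per utterance.
def word_errors(ref: str, hyp: str) -> int:
    """Word-level Levenshtein distance (substitutions + insertions + deletions)."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(h) + 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (r[i - 1] != h[j - 1]))
            prev, d[j] = d[j], cur
    return d[len(h)]

def oracle_wer(references, nbest_lists) -> float:
    errs = sum(min(word_errors(ref, hyp) for hyp in nbest)
               for ref, nbest in zip(references, nbest_lists))
    words = sum(len(ref.split()) for ref in references)
    return errs / words

print(oracle_wer(["the cat sat"], [["the cat sad", "a cat sat"]]))  # -> 1/3
```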
### Code-switching Task In this Section, we present the performance of LLMs on a more challenging en-in task dominated by heavy code-switching. Although MT5 is meant to be a multilingual LM, the amount of training data from the different languages is unbalanced. The training data consists of 5.67% English, but only 1.21% is Hindi in the Devanagari script [23]. This imbalance between en-in and \begin{table} \begin{tabular}{l|r r|r r|r r} \hline \hline \multirow{2}{*}{**dev**} & \multicolumn{2}{c|}{**Oracle WER**} & \multicolumn{2}{c|}{**WER**} & \multicolumn{2}{c}{**\#paths/segment**} \\ & **en-us** & **en-in** & **en-us** & **en-in** & **en-us** & **en-in** \\ \hline Baseline & 7.3 & 12.8 & 12.2 & 17.2 & 4e20 & 4e13 \\ No state merging & 8.8 & 13.1 & 12.2 & 17.2 & 5.7 & 5.8 \\ Neural LM fusion & 8.4 & 11.0 & 11.6 & 15.6 & 5.2 & 5.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Lattice quality on the en-us and en-in dev sets. \begin{table} \begin{tabular}{l|c} \hline \hline **WER** & **dev** \\ \hline Baseline & 12.2 \\ + T5 rescoring, carrying 0 segment & 11.6 \\ + T5 rescoring, carrying 1 segment & **11.5** \\ + T5 rescoring, carrying 2 segments & **11.5** \\ \hline \hline \end{tabular} \end{table} Table 3: WER comparison on the en-us test set for different lengths of carried over context Hindi fails to capture the frequent code switches between English and Hindi predominant in the en-in test sets. To address this issue, we finetune both XL and XXL MT5 models on the WEBDOC corpora with the LM task. We evaluate the raw MT5 model and these fine-tuned models on the en-in development set to study the effect of fine-tuning. These results are tabulated in Table 5. It can be seen that rescoring with the fine tuned models outperforms rescoring with the raw MT5 model. This can be attributed to the lack of sufficient Hindi data in the MC4 corpus which can be fixed with data balanced fine-tuning. When compared to en-us, the wins from LLMs on en-in are less. We hypothesize that this could be related to the small size of the WEBDOC corpus compared to MC4, in line with the data-hungry nature of LLMs [35, 36]. ### Comparison of LMs on the code-switching task Table 6 presents the rescoring results from various LMs. The MT5 XL model is the best performing model with a WER reduction of 7.3% relative on the evaluation test set. On the other hand, the Conformer LM when used in shallow fusion in the first-pass shows additional wins. Since we fine-tuned MT5 on the same training data as Conformer LM, we also report the perplexity of MT5 and Conformer LM on the 10% validation part of WEBDOC. MT5 has a log perplexity per word of 4.15, slightly higher than the Conformer LM at 2.98 and MaxEnt at 3.69. We observe that the Conformer LM and MT5 are complementary and the combination results in a best WER reduction of 8% relative. ## 6 Error Analysis To analyze the effectiveness of large LM, we select unigrams and bigrams with the highest Term Frequency Inverse Document Frequency (TF-IDF) values from the evaluation test sets (_salient terms_) for the two languages studied in this paper. In general, such terms capture the topic presented in the video. On the one hand, they are important for indexing or information retrieval; on the other hand, they are more difficult to be recognized compared to frequently occurring function words (such as, "the", "of", etc.). We analyzed the performance of the baseline and the various large LMs on these _salient terms_ to study the impact on rare words. 
The _Salient Term Error Rate (STER)_ is reported in Table 7, defined as the number of deletion and substitution errors on the _salient terms_ divided by the total number of _salient terms_. Out of a total of 600K words, approximately, 10% words are tagged as _salient terms_ for en-in and 5% for en-us. First we observe that almost all rescoring and fusion can reduce the error made on these _salient terms_. In en-us, as reflected by the WER reported in Table 4, T5 outperforms other LMs. In en-in, however, NLM fusion in the first pass has a bigger impact on the _salient terms_ than any rescoring method similar to what has been reported in [37]. Although MT5 has been fine tuned on the same data as the NLM, we find that it is less impactful by itself on the _salient terms_ in en-in. Although MT5 has been fine tuned on the same data as the neural LM, we find that it is less impactful by itself on the _salient terms_. However, in both languages, the combination of these two LMs through interpolation is additive (last row in Table 6) resulting in the best performance. As noted in [35, 36] scaling to larger and larger datasets is only beneficial when the data is high-quality and larger models require larger data sets. This can explain some of the differences seen between these two relatively high resource languages. ## 7 Conclusion In this study, we presented the impact of LLMs (up to 350B parameters) on long-form ASR. We demonstrated up to 8% relative reduction in Word Error Rate (WER) on US English (en-us) and code-switched Indian English (en-in) long-form ASR test sets and a reduction of up to 30% relative on Salient Term Error Rate (STER) over a strong first-pass baseline that uses a maximum-entropy based language model. We also find that the gains in performance from the combination of LLMs trained on vast quantities of available data (such as C4 [1]) and conventional neural LMs is additive and significantly outperforms a strong first-pass baseline with a maximum entropy LM. To the best of our knowledge, this is the first study that scales LLMs to long-form ASR. \begin{table} \begin{tabular}{l|c|c} \hline \hline **WER** & **dev** & **eval** \\ \hline Baseline & 17.2 & 16.4 \\ + MaxEnt rescoring & 16.5 & 15.9 \\ + NLM rescoring & 16.2 & 15.4 \\ + MT5 XL rescoring & 16.1 & 15.2 \\ + NLM fusion & 15.6 & 15.0 \\ + NLM fusion \& MT5 XL rescoring & **15.4** & **14.6** \\ \hline \hline \end{tabular} \end{table} Table 6: en-in WER comparison between MT5 and other LMs \begin{table} \begin{tabular}{l|c|c} \hline \hline **WER** & **dev** & **eval** \\ \hline Baseline & 12.2 & 16.1 \\ + MaxEnt rescoring & 11.8 & 15.8 \\ + T5 S rescoring & 11.9 & 15.9 \\ + T5 M rescoring & 11.7 & 15.8 \\ + T5 XL rescoring & 11.6 & 15.7 \\ + T5 XXL rescoring & 11.5 & 15.7 \\ + PalM S rescoring & 11.5 & 15.5 \\ + PalM M rescoring & **11.3** & **15.4** \\ + PalM L rescoring & **11.3** & - \\ + NLM fusion & 11.6 & 15.6 \\ + NLM fusion \& T5 XXL rescoring & 11.4 & 15.5 \\ \hline \hline \end{tabular} \end{table} Table 4: en-us WER comparison between T5 and other LMs \begin{table} \begin{tabular}{l|c|c} \hline \hline **en-in dev** & **MT5 XL** & **MT5 XXL** \\ \hline Baseline & \multicolumn{2}{c}{17.2} \\ Raw & 16.6 & 16.8 \\ Fine-tuned & **16.1** & 16.3 \\ \hline \hline \end{tabular} \end{table} Table 5: WER comparison on en-in dev set with raw and fine-tuned MT5 models of sizes XL and XXL
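A possible implementation of the Salient Term Error Rate defined above is sketched below: reference and hypothesis are aligned at the word level, and deletions or substitutions of reference tokens that belong to the salient set are counted. The salient set is passed in explicitly (in the paper it is built from the highest-TF-IDF unigrams and bigrams); bigram handling is omitted to keep the sketch short.

```python
# Salient Term Error Rate: (deletions + substitutions on salient reference
# tokens) / (number of salient reference tokens).
def align(ref, hyp):
    """Word-level Levenshtein alignment; returns one edit label per reference
    token: 'ok' (correct), 'sub' (substituted) or 'del' (deleted)."""
    R, H = len(ref), len(hyp)
    d = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(R + 1):
        d[i][0] = i
    for j in range(H + 1):
        d[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # delete ref[i-1]
                          d[i][j - 1] + 1,                               # insert hyp[j-1]
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # match / substitute
    ops, i, j = [], R, H
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            ops.append("ok" if ref[i - 1] == hyp[j - 1] else "sub")
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append("del")
            i -= 1
        else:
            j -= 1  # insertion errors do not consume a reference token
    return list(reversed(ops))  # exactly one entry per reference token

def ster(ref_text, hyp_text, salient_terms):
    ref, hyp = ref_text.split(), hyp_text.split()
    total = errors = 0
    for token, op in zip(ref, align(ref, hyp)):
        if token in salient_terms:
            total += 1
            errors += op in ("sub", "del")
    return errors / max(total, 1)

print(ster("the maxent language model", "the max and language model",
           {"maxent", "language"}))  # -> 0.5 (the salient term "maxent" was substituted)
```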
2303.15031
Semantics for first-order superposition logic
We investigate how the sentence choice semantics (SCS) for propositional superposition logic (PLS) developed in \cite{Tz17} could be extended so as to successfully apply to first-order superposition logic(FOLS). There are two options for such an extension. The apparently more natural one is the formula choice semantics (FCS) based on choice functions for pairs of arbitrary formulas of the basis language. It is proved however that the universal instantiation scheme of FOL, $(\forall v)\varphi(v)\rightarrow\varphi(t)$, is false, as a scheme of tautologies, with respect to FCS. This causes the total failure of FCS as a candidate semantics. Then we turn to the other option which is a variant of SCS, since it uses again choice functions for pairs of sentences only. This semantics however presupposes that the applicability of the connective $|$ is restricted to quantifier-free sentences, and thus the class of well-formed formulas and sentences of the language is restricted too. Granted these syntactic restrictions, the usual axiomatizations of FOLS turn out to be sound and conditionally complete with respect to this second semantics, just like the corresponding systems of PLS.
Athanassios Tzouvaras
2023-03-27T09:27:30Z
http://arxiv.org/abs/2303.15031v1
# Semantics for first-order superposition logic ###### Abstract We investigate how the sentence choice semantics (SCS) for propositional superposition logic (PLS) developed in [9] could be extended so as to successfully apply to first-order superposition logic(FOLS). There are two options for such an extension. The apparently more natural one is the formula choice semantics (FCS) based on choice functions for pairs of arbitrary formulas of the basis language. It is proved however that the universal instantiation scheme of FOL, \((\forall v)\varphi(v)\rightarrow\varphi(t)\), is false, as a scheme of tautologies, with respect to FCS. This causes the total failure of FCS as a candidate semantics. Then we turn to the other option which is a variant of SCS, since it uses again choice functions for pairs of sentences only. This semantics however presupposes that the applicability of the connective \(|\) is restricted to quantifier-free sentences, and thus the class of well-formed formulas and sentences of the language is restricted too. Granted these syntactic restrictions, the usual axiomatizations of FOLS turn out to be sound and conditionally complete with respect to this second semantics, just like the corresponding systems of PLS. Department of Mathematics Aristotle University of Thessaloniki 541 24 Thessaloniki, Greece e-mail: [email protected] _Mathematics Subject Classification (2010)_: 03B60, 03G12 _Keywords:_ Logic of superposition. Choice function for pairs of sentences/formulas. Sentence choice semantics. Formula choice semantics. ## 1 Introduction In [9] we introduced and investigated various systems of propositional superposition logic (PLS). The systems of PLS extend classical propositional logic (PL). Their language is that of PL augmented with a new binary connective \(|\) for the "superposition operation", while their axioms are those of PL together with a few axioms about \(|\). The motivating idea was roughly this: if \(\varphi|\psi\) denotes the "superposition of two states" (or, more precisely, the propositions expressing these states), as the latter is currently understood in quantum mechanics (QM), what is the _purely logical content_ of the operation, that is, what can we say about the _truth_ of \(\varphi|\psi\) without leaving the ground of classical logic? The basic intuition is that \(\varphi|\psi\) strangely expresses _both_ some kind of conjunction of the properties \(\varphi\) and \(\psi\) (before the measurement), and simultaneously some kind of disjunction of the same properties (after the measurement, i.e., after the "collapsing" of the superposed states). This collapsing can be formalized by the help of a choice function that acts on pairs of sentences \(\{\varphi,\psi\}\), turning each formula \(\varphi|\psi\) into a classical one. Such functions formed the basis of a semantics for the new logic, called _sentence choice semantics_ (or SCS for short), that allows \(\varphi|\psi\) to present simultaneously conjunctive and disjunctive characteristics, which are manifested in the "interpolation property", i.e., the property of \(\varphi|\psi\) to be strictly logically interpolated between \(\varphi\wedge\psi\) and \(\varphi\vee\psi\). Although QM has been the source of motivation for introducing the logical connective of superposition, PLS is _not_ a quantum logic based on orthomodular lattices (see [6] for complete information about such structures), as these logics are discussed e.g. in [7]. 
Nor is it in a similar vein with the content, e.g., of [1], [2], and other papers cited and discussed in [10], that belong to what can be called standard approach to quantum phenomena. As said above, the logic(s) PLS intend only to explore the logical content of the phenomenon of superposition alone. To quote from [9, p. 151]: "So the logic presented here is hardly the logic of superposition as this concept is currently used and understood in physics today. It is rather the logic of superposition, when the latter is understood as the 'logical extract' of the corresponding physics concept. Whether it could eventually have applications to the field of QM we don't know." In response to questions asked by one of the referees let me add some further comments. That PLS (and its first-order extension FOLS considered in this paper) has almost no points in common with standard treatments of QM can be simply inferred from the fact that it is _not_ a probabilistic theory. Probabilities have no place in this logical system (as it stands) and I cannot see how it could be revised in order to be compatible with their use. This is why the "collapsing" of the superposed sentence \(\varphi|\psi\) is accomplished by means of a _choice_ between \(\varphi\) and \(\psi\). And up to my knowledge there is no genuine theory that relates fruitfully choice functions with probabilities. As we put it in [9, p. 151]: "The basic idea is that the collapse of the composite state \(c_{0}\vec{u}_{0}+c_{1}\vec{u}_{1}\) to one of the states \(\vec{u}_{0}\), \(\vec{u}_{1}\) can be seen, _from the point of view of pure logic,_ just as a (more or less random) choice from the set of possible outcomes \(\{\vec{u}_{0},\vec{u}_{1}\}\). This is because from the point of view of pure logic probabilities are irrelevant or, which amounts to the same thing, the states \(\vec{u}_{0}\) and \(\vec{u}_{1}\) are considered equiprobable. In such a case the superposition of \(\vec{u}_{0}\) and \(\vec{u}_{1}\) is unique and the outcome of the collapse can be decided by a coin tossing or, more strictly, by a _choice function_ acting on pairs of observable states, which in our case coincide with pairs of sentences of \(L\). This of course constitutes a major deviation from the standard treatment of superposition, according to which there is not just one superposition of \(\vec{u}_{0}\) and \(\vec{u}_{1}\) but infinitely many, actually as many as the number of linear combinations \(c_{0}\vec{u}_{0}+c_{1}\vec{u}_{1}\), for \(|c_{0}|^{2}+|c_{1}|^{2}=1\)." Of course, theoretically, we could switch from \(\{0,1\}\)-valuations of classical logic to \([0,1]\)-valuations of a non-classical logic. But then the interpretation of superposition would not be "within classical reasoning and common-sense", as was the aim of the original attempt. Perhaps in the future we shall attempt some non-classical interpretation through a continuous-valued logic. In view of the above, fundamental concepts pertaining to the probabilistic character of the standard treatment of QM, such as global vs local phases, contextuality, the Kochen-Specker theorem, entanglement, etc., simply do not make any sense for our logic. However, despite of this, the systems PLS still seem to have merits. 
As a second referee wrote: "Even though the interpretation of superposition logics in terms of the original quantum-mechanical motivation is probably dubious, still the connective with a choice semantics is sufficiently interesting per se to justify the investigation; conceivably, the logic can find other interpretations, perhaps of some epistemic or possibilistic kind." Let us come now to the content of the present work. A natural question, already asked in the last section of [9], is whether the logic of superposition can be extended to a quantified version, i.e., whether the systems of PLS can be extended to corresponding systems of first-order superposition logic (or FOLS for short). At a syntactic level, systems of FOLS extending corresponding systems of PLS are very easily defined. They are just extensions of classical first-order logic (FOL) by the help of the same axioms for \(|\) that were used in PLS. This is because there are no new axioms for \(|\) involving \(\forall\) or \(\exists\), as there are no plausible correlations between \(|\) and the quantifiers. But at semantic level things are much more complex. First, we made sure that systems of FOLS do have semantics having characteristics quite analogous to that of SCS for PLS. Actually, an alternative semantics that we meanwhile developed for PLS in [10], the Boolean-value choice semantics (or BCS for short), turned out to be suitable also for FOLS. However the question whether a semantics for FOLS generalizing SCS of PLS is possible, remained. One of the goals of the present paper is to show that the straightforward generalization of the semantics SCS of PLS, namely the semantics based on choice functions for all pairs of _formulas_ of a first-order language \(L\), called _formula choice semantics_ (or FCS for short), does not work. Specifically, we show that the systems of FOLS fail to be true with respect to FCS (i.e., soundness fails) in an unexpected way: It is not the axioms for \(|\) that fail to be tautologies of FCS but one of the fundamental axiom of FOL, namely the Universal Instantiation \((UI)\) scheme, \(\forall v\varphi(v)\rightarrow\varphi(t)\). This of course leads to the break down of FCS itself, since it cannot accommodate the most fundamental logical constant of quantified logic, the universal quantifier. This result is shown in section 3. In section 4 we show a related fact concerning non-existence of uniform choice functions. The second major result shown in this paper is that the semantics SCS (using functions on pairs of _sentences_ only rather than arbitrary formulas) _can_ be applied also to FOLS, provided we shall restrict the applicability of the connective \(|\) to formulas without quantifiers (unless they are classical ones), and thus restrict the class of well-formed formulas of the language \(L_{s}=L\cup\{|\}\) of the logic of superposition. Under this restriction, FOLS is proved sound and conditionally complete with respect to SCS. This result is described in section 5. Since the content of the present paper relies heavily on the material contained in [9], we need first to recall briefly the main notions and facts established there. This is done in the next subsection. ### Overview of PLS with sentence choice semantics This subsection overviews the main notions and facts contained in [9]. It is identical to the corresponding introductory subsection 1.1 of [10]. 
In general a Propositional Superposition Logic (PLS) consists, roughly, of a pair \((X,K)\), where \(X\) is the semantic and \(K\) is the syntactic part of the logic. Actually \(K\) is a _formal system_ in the usual sense of the word, and \(X\) is a set of functions that provides meaning to sentences in a way described below. PLS\((X,K)\) will denote the propositional superposition logic with semantic part \(X\) and syntactic part \(K\). The precise definition of PLS\((X,K)\) will be given below. Although the semantic part is the most intuitively appealing, we start with the description of the syntactic part \(K\). The language of all formal systems \(K\) below (or the language of PLS), \(L_{s}\), is that of standard Propositional Logic (PL) \(L=\{p_{0},p_{1},\ldots\}\cup\{\wedge,\vee,\rightarrow,\leftrightarrow,\neg\}\) augmented with the new binary connective "\(|\)". That is, \(L_{s}=L\cup\{|\}\). The set of sentences of \(L_{s}\), \(Sen(L_{s})\), is defined by induction as usual, with the additional inductive step that \(\varphi|\psi\) is a sentence whenever \(\varphi\) and \(\psi\) are so. Throughout, the letters \(\alpha\), \(\beta\), \(\gamma\) range exclusively over the set of sentences of \(L\), \(Sen(L)\), while \(\varphi\), \(\psi\), \(\sigma\) range over elements of \(Sen(L_{s})\) in general. A formal system \(K\) consists of a set of axioms \(\mathsf{Ax}(K)\) and a set of inference rules \(\mathsf{IR}(K)\). The axioms of \(K\) always include the axioms of PL, while \(\mathsf{IR}(K)\) includes the inference rule of PL. So let us first fix the axiomatization for PL consisting of the following axiom schemes (for the language \(L_{s}\)). (P1) \(\varphi\rightarrow(\psi\rightarrow\varphi)\) (P2) \((\varphi\rightarrow(\psi\rightarrow\sigma))\rightarrow((\varphi\rightarrow \psi)\rightarrow(\varphi\rightarrow\sigma))\) (P3) \((\neg\varphi\rightarrow\neg\psi)\rightarrow((\neg\varphi\rightarrow\psi) \rightarrow\varphi)\), together with the inference rule Modus Ponens \((MP)\). So for every \(K\), \(\{\mathrm{P1},\mathrm{P2},\mathrm{P3}\}\subset\mathsf{Ax}(K)\) and \(MP\in\mathsf{IR}(K)\). In addition each \(K\) contains axioms for the new connective \(|\). These are some or all of the following schemes: \((S_{1})\) \(\varphi\wedge\psi\rightarrow\varphi|\psi\), \((S_{2})\) \(\varphi|\psi\rightarrow\varphi\vee\psi\), \((S_{3})\) \(\varphi|\psi\rightarrow\psi|\varphi\), \((S_{4})\) \((\varphi|\psi)|\sigma\rightarrow\varphi|(\psi|\sigma)\), \((S_{5})\) \(\varphi\wedge\neg\psi\rightarrow(\varphi|\psi\leftrightarrow\neg\varphi|\neg\psi)\). Provability (à la Hilbert) in \(K\), denoted \(\vdash_{K}\varphi\), is defined as usual. It is clear that \[\Sigma\vdash\alpha\ \Rightarrow\ \Sigma\vdash_{K}\alpha,\] where \(\vdash\) denotes provability in PL. \(\Sigma\) is said to be \(K\)_-consistent_ if \(\Sigma\not\vdash_{K}\bot\). Let \(K_{0}\) denote the formal system described as follows. \[{\sf Ax}(K_{0})=\{{\rm P1},{\rm P2},{\rm P3}\}+\{S_{1},S_{2},S_{3}\},\qquad{\sf IR}(K_{0})=\{MP\}.\] Extensions of \(K_{0}\) defined below will contain also the rule \(SV\) (from _salva veritate_) defined as follows. 
\[(SV)\qquad\mbox{\it from }\ \varphi\leftrightarrow\psi\ \mbox{\it infer}\ \varphi|\sigma\leftrightarrow\psi|\sigma,\] \[\mbox{if }\varphi\leftrightarrow\psi\mbox{ is provable in }K_{0}.\] The rule \(SV\) guarantees that if \(\alpha\), \(\beta\) are classical logically equivalent sentences, then truth is preserved if \(\alpha\) is substituted for \(\beta\) in expressions containing \(|\) (just as in the case with the standard connectives). Let the formal systems \(K_{1}\), \(K_{2}\) and \(K_{3}\) be defined as follows. \[{\sf Ax}(K_{1})={\sf Ax}(K_{0}),\qquad{\sf IR}(K_{1})=\{\mbox{\it MP},SV\},\] \[{\sf Ax}(K_{2})={\sf Ax}(K_{1})+S_{4},\qquad\ \ {\sf IR}(K_{2})=\{\mbox{\it MP},SV\},\] \[{\sf Ax}(K_{3})={\sf Ax}(K_{2})+S_{5},\qquad\ \ {\sf IR}(K_{3})=\{\mbox{\it MP},SV\}.\] A consequence of \(SV\) is that if \(\vdash_{K_{0}}(\varphi\leftrightarrow\psi)\) then, for any \(\sigma\), \(\vdash_{K_{i}}(\varphi|\sigma\leftrightarrow\psi|\sigma)\), for \(i=1,2,3\). So much for the syntax of PLS. We now turn to the semantics. The axioms \(S_{i}\) are motivated by the intended meaning of \(|\) already mentioned above, and the corresponding semantics for sentences of \(L_{s}\) based on choice functions. This semantics consists of pairs \(\langle v,f\rangle\), where \(v:Sen(L)\rightarrow\{0,1\}\) is a usual two-valued assignment of the sentences of \(L\), and \(f\) is a choice function for pairs of elements of \(Sen(L)\), i.e., \(f:[Sen(L)]^{2}\to Sen(L)\) such that \(f(\{\alpha,\beta\})\in\{\alpha,\beta\}\), where for any set \(A\), \([A]^{2}=\{\{a,b\}:a,b\in A\}\). (For basic facts about choice functions the reader may consult [5].) The functions \(f\) are defined also for singletons with \(f(\{\alpha\})=\alpha\). We simplify notation by writing \(f(\alpha,\beta)\) instead of \(f(\{\alpha,\beta\})\), thus by convention \(f(\alpha,\beta)=f(\beta,\alpha)\) and \(f(\alpha,\alpha)=\alpha\). \(f\) gives rise to a function \(\overline{f}:Sen(L_{s})\to Sen(L)\), defined inductively as follows. **Definition 1.1**: (i)_\(\overline{f}(\alpha)=\alpha\), for \(\alpha\in Sen(L)\),_ (ii)_\(\overline{f}(\varphi\wedge\psi)=\overline{f}(\varphi)\wedge\overline{f}(\psi)\),_ (iii)_\(\overline{f}(\neg\varphi)=\neg\overline{f}(\varphi)\),_ (iv)_\(\overline{f}(\varphi|\psi)=f(\overline{f}(\varphi),\overline{f}(\psi))\)._ We refer to \(\overline{f}\) as the _collapsing function_ induced by \(f\). Then we define the truth of \(\varphi\) in \(\langle v,f\rangle\), denoted \(\langle v,f\rangle\models_{s}\varphi\), as follows. \[\langle v,f\rangle\models_{s}\varphi:\Leftrightarrow v(\overline{f}(\varphi) )=1. \tag{1}\] (In [9] we denote by \(M\) the two-valued assignments of sentences of \(L\) and write \(\langle M,f\rangle\) instead of \(\langle v,f\rangle\). Also we write \(M\models\alpha\) instead of \(M(\alpha)=1\).) We shall refer to the semantics defined by (1) as _sentence choice semantics,_ or SCS for short. A remarkably similar notion of choice function for pairs of sentences, and its interpretation as a "conservative" binary connective, was given also independently in [4] (see Example 3.24.14, p. 479). The reason that we used four formal systems \(K_{0}\)-\(K_{3}\), in increasing strength, is that they correspond to four different classes of choice functions defined below. **Definition 1.2**: Let \({\cal F}\) denote the set of all choice functions for \(Sen(L)\) and let \(X\subseteq{\cal F}\). 
(i) For a set \(\Sigma\subseteq Sen(L_{s})\) and \(X\subseteq{\cal F}\), \(\Sigma\) is said to be \(X\)_-satisfiable_ if there are \(v\) and \(f\in X\) such that \(\langle v,f\rangle\models_{s}\Sigma\). (ii) For \(\Sigma\subseteq Sen(L_{s})\) and \(\varphi\in Sen(L_{s})\), \(\varphi\) is an \(X\)_-logical consequence of \(\Sigma\),_ denoted \(\Sigma\models_{X}\varphi\), if for every \(v\) and every \(f\in X\), \(\langle v,f\rangle\models_{s}\Sigma\Rightarrow\langle v,f\rangle\models_{s}\varphi\). (iii) \(\varphi\) is an \(X\)_-tautology,_ denoted \(\models_{X}\varphi\), if \(\emptyset\models_{X}\varphi\). (iv) \(\varphi\) and \(\psi\) are \(X\)_-logically equivalent,_ denoted \(\varphi\sim_{X}\psi\), if \(\models_{X}(\varphi\leftrightarrow\psi)\). Also let \[Taut(X)=\{\varphi\in Sen(L_{s}):\models_{X}\varphi\}.\] One of the motivating results behind the development of PLS was the following "interpolation property" of \(\varphi|\psi\) with respect to \(\varphi\wedge\psi\) and \(\varphi\vee\psi\) (see Theorem 2.8 of [9]). **Fact 1.3**: _For all \(\varphi,\psi\in Sen(L_{s})\),_ \[\varphi\wedge\psi\models_{\cal F}\varphi|\psi\models_{\cal F}\varphi\vee\psi,\] _while in general_ \[\varphi\vee\psi\not\models_{\cal F}\varphi|\psi\not\models_{\cal F}\varphi \wedge\psi.\] Now while the axioms of \(K_{0}\) are easily seen to be \({\cal F}\)-tautologies, this is not the case with the axioms \(S_{4}\) and \(S_{5}\). They correspond to some special subclasses of \({\cal F}\) described below. **Definition 1.4**: 1) An \(f\in{\cal F}\) is said to be _associative_ if for all \(\alpha,\beta,\gamma\in Sen(L)\) \[f(f(\alpha,\beta),\gamma)=f(\alpha,f(\beta,\gamma)).\] 2) An \(f\in{\cal F}\) is said to be _regular_ if for all \(\alpha,\alpha^{\prime},\beta\in Sen(L),\) \[\alpha\sim\alpha^{\prime}\Rightarrow f(\alpha,\beta)\sim f(\alpha^{\prime},\beta),\] where \(\alpha\sim\beta\) denotes logical equivalence in PL. Let \[\mbox{\it Asso}=\{f\in{\cal F}:f\mbox{ is associative}\},\] \[\mbox{\it Reg}=\{f\in{\cal F}:f\mbox{ is regular}\}.\] We have the following simple and nice characterization of the functions in _Asso_. **Lemma 1.5**: _([9, Corollary 2.17]) \(f\in\mbox{\it Asso}\) if and only if there is a total ordering \(<\) of \(Sen(L)\) such that \(f=\min_{<}\), i.e., \(f(\alpha,\beta)=\min_{<}(\alpha,\beta)\) for all \(\alpha,\beta\in Sen(L)\)._ (Actually 1.5 holds for associative choice functions on an arbitrary set \(A\), see Theorem 2.14 of [9].) Both properties, associativity and regularity, are strongly desirable, so it is natural to combine them. Also, in view of the above characterization of associative functions through total orderings, the following definition is natural. **Definition 1.6**: A total ordering \(<\) of \(Sen(L)\) is _regular_ if the corresponding choice function \(f=\min_{<}\) is regular or, equivalently, if for all \(\alpha\), \(\beta\) in \(Sen(L)\) \[\alpha\not\sim\beta\mbox{ \& }\alpha<\beta\mbox{ }\Rightarrow\mbox{ }[\alpha]<[\beta],\] where \([\alpha]\) is the \(\sim\)-equivalence class of \(\alpha\). Let \[\mbox{\it Reg}^{*}=\mbox{\it Reg}\cap\mbox{\it Asso}.\] Clearly \(f\in\mbox{\it Reg}^{*}\) iff \(f=\min_{<}\) for a regular total ordering \(<\) of \(Sen(L)\). **Definition 1.7**: Let \(<\) be a total ordering of \(Sen(L)\). 
\(<\) is said to be \(\neg\)_-decreasing_ if for all \(\alpha,\beta\in Sen(L)\) such that \(\alpha\not\sim\beta\), \[\alpha<\beta\Leftrightarrow\neg\beta<\neg\alpha.\] If \(f\in\mbox{\it Reg}^{*}\), \(f\) is said to be \(\neg\)_-decreasing_ if \(f=\min_{<}\) for some \(\neg\)-decreasing \(<\). Let \[Dec=\{f\in\mbox{\it Reg}^{*}:f\mbox{ is $\neg$-decreasing}\}.\] Since \(Dec\subseteq\mbox{\it Reg}^{*}\subseteq\mbox{\it Reg}\subseteq\mathcal{F}\), it follows that \[Taut(\mathcal{F})\subseteq Taut(Reg)\subseteq Taut(Reg^{*})\subseteq Taut(Dec).\] We can now give a full specification of the meaning of the notation \[\mbox{PLS}(X,K)\] already introduced in the beginning of this section: given a set \(X\subseteq\mathcal{F}\), and a formal system \(K\) with \(\mathsf{Ax}(K)\subseteq Taut(X)\), \(\mbox{PLS}(X,K)\) is the logic with logical consequence relation \(\models_{X}\), determined by the structures \(\langle v,f\rangle\), with \(f\in X\), and with provability relation \(\vdash_{K}\). Given a logic \(\mbox{PLS}(X,K)\), the soundness and completeness theorems for it refer as usual to the connections between the relations \(\models_{X}\) and \(\vdash_{K}\), or between \(X\)-satisfiability and \(K\)-consistency. At this point a word of caution is needed. As is well known, the soundness theorem (ST) and completeness theorem (CT) of a logic have two distinct formulations which are equivalent for classical logic, but need not be so in general. For the logic \(\mbox{PLS}(X,K)\) these two forms, ST1 and ST2 for Soundness and CT1 and CT2 for Completeness, are the following. (ST1) \[\Sigma\vdash_{K}\varphi\;\Rightarrow\;\Sigma\models_{X}\varphi,\] (ST2) \[\Sigma\mbox{ is $X$-satisfiable}\;\Rightarrow\;\Sigma\mbox{ is $K$-consistent}\] (CT1) \[\Sigma\models_{X}\varphi\;\Rightarrow\;\Sigma\vdash_{K}\varphi,\] (CT2) \[\Sigma\mbox{ is $K$-consistent}\;\Rightarrow\;\Sigma\mbox{ is $X$-satisfiable}.\] ST1 and ST2 are easily shown to be equivalent for every system \(\mbox{PLS}(X,K)\). Moreover the Soundness Theorem for each one of the logics \(\mbox{PLS}(\mathcal{F},K_{0})\), \(\mbox{PLS}(Reg,K_{1})\), \(\mbox{PLS}(Reg^{*},K_{2})\) and \(\mbox{PLS}(Dec,K_{3})\) is easily established. But the equivalence of CT1 and CT2 is based on the _Deduction Theorem_ (DT), which is not known to be true for every \(\mbox{PLS}(X,K)\) when \(K\) contains the inference rule \(SV\). Recall that DT is the following implication. For all \(\Sigma\), \(\varphi\), \(\psi\), \[\Sigma\cup\{\varphi\}\vdash_{K}\psi\;\Rightarrow\;\Sigma\vdash_{K}\varphi \rightarrow\psi. \tag{2}\] Concerning the relationship between CT1 and CT2 for \(\mbox{PLS}(X,K)\) the following holds. **Fact 1.8**: CT1 \(\Rightarrow\) CT2 _holds for every \(\mathrm{PLS}(X,K)\). If \(\vdash_{K}\) satisfies DT, then the converse holds too, i.e.,_ CT1 \(\Leftrightarrow\) CT2_._ The system \(\mathrm{PLS}(\mathcal{F},K_{0})\), whose only inference rule is \(MP\), satisfies CT1 \(\Leftrightarrow\) CT2 as a consequence of DT. So we can just say it is "complete" instead of "CT1-complete" and "CT2-complete". The following is shown in [9, §3.1]. **Theorem 1.9**: PLS\((\mathcal{F},K_{0})\) _is complete._ However in the systems over \(K_{i}\), for \(i>0\), that contain the extra rule \(SV\), the status of DT is open, so the distinction between CT1 and CT2 remains. 
So concerning the logics \(\mathrm{PLS}(Reg,K_{1})\), \(\mathrm{PLS}(Reg^{*},K_{2})\) and \(\mathrm{PLS}(Dec,K_{3})\) it is reasonable to try to prove the weaker of the two forms of completeness, namely CT2-completeness. But even this will be proved only conditionally, because there is still another serious impact of the lack of DT. This is that we don't know if every consistent set of sentences can be extended to a consistent and _complete_ set (i.e., one that contains one of \(\varphi\) and \(\neg\varphi\), for every \(\varphi\)). Of course every consistent set \(\Sigma\) can be extended (e.g. by Zorn's Lemma) to a _maximal_ consistent set \(\Sigma^{\prime}\supseteq\Sigma\). But maximality of \(\Sigma^{\prime}\) does not guarantee completeness without DT, because \(\Sigma^{\prime}\) may be maximal consistent and yet there may be a \(\varphi\) such that \(\varphi\notin\Sigma^{\prime}\) and \(\neg\varphi\notin\Sigma^{\prime}\), so that \(\Sigma^{\prime}\cup\{\varphi\}\) and \(\Sigma^{\prime}\cup\{\neg\varphi\}\) are both inconsistent. That looks strange but we don't see how it could be proved false without DT. This property of extendibility of a consistent set to a consistent and complete one, for a formal system \(K\), is crucial for the proof of completeness of \(K\) (with respect to a given semantics), so we isolate it as a property of \(K\) denoted \(cext(K)\). It reads as follows. \((cext(K))\) _Every \(K\)-consistent set of sentences can be extended to_ _a \(K\)-consistent and complete set_. Then the following conditional CT2-completeness results are shown in [9, §3.2]. **Theorem 1.10**: _(i) \(\mathrm{PLS}(Reg,K_{1})\) is_ CT2_-complete if and only if \(cext(K_{1})\) is true._ _(ii) \(\mathrm{PLS}(Reg^{*},K_{2})\) is_ CT2_-complete if and only if \(cext(K_{2})\) is true._ _(iii) \(\mathrm{PLS}(Dec,K_{3})\) is_ CT2_-complete if and only if \(cext(K_{3})\) is true._ ## 2 First-order superposition logic and their semantics ### What is first-order superposition logic First let us make precise what first-order superposition logic, or FOLS for short, is. At the axiomatic level the formal systems of FOLS extend the formal system of first-order logic (FOL) exactly as the formal systems of PLS outlined in section 1.1 extend the formal system of propositional logic (PL). So we first fix an axiomatization of FOL (see e.g. [3]). If \(L\) is a first-order language with logical constants \(\wedge\), \(\neg\), \(\forall\) and equality \(=\), and variables \(v_{i}\), \(v\), \(u\) etc., let \(L_{s}=L\cup\{|\}\), where \(|\) is the new binary connective for superposition. Let \(Fml(L_{s})\), \(Sen(L_{s})\) denote the sets of _all_ formulas and sentences of \(L_{s}\), respectively, defined by the usual recursion, as those of \(L\), plus the step for the connective \(|\). We stress the word "all" because in some versions of the formalization considered below, _restrictions_ on the formation of formulas of \(L_{s}\), concerning the applicability of \(|\), might be sensible. For example, in one of the semantics of FOLS considered below, formulas of the form \(\forall v\exists u(\alpha(v)|\beta(v)|\gamma(u))\) are allowed, while formulas of the form \((\forall v(\alpha(v)|\beta(v)))|(\exists u\gamma(u))\) are not. Thus we reserve the right to deal later only with some subsets of \(Fml(L_{s})\), \(Sen(L_{s})\). The axioms and rules of inference of FOL are the following. 
\[\mathsf{Ax}(\text{FOL})=\mathsf{Ax}(\text{PL})+\{\mathit{UI},D\}+\{I_{1},\ldots,I_{5}\},\quad\mathsf{IR}(\text{FOL})=\{\mathit{MP},\mathit{GR}\},\] where \({GR}\) is the generalization rule, \({UI}\) (Universal Instantiation scheme) and \(D\) are the basic axioms of FOL (for the language \(L_{s}\)) concerning quantifiers, \(I_{1}\), \(I_{2}\), \(I_{3}\) are the trivial axioms for \(=\) (reflexivity, symmetry and transitivity), and finally \(I_{4}\) and \(I_{5}\) are the schemes of substitution of equals within terms and formulas. Specifically: \((\mathit{UI})\) \(\forall v\varphi(v)\rightarrow\varphi(t)\), for every closed term \(t\), \((D)\) \(\forall v(\varphi\rightarrow\psi(v))\rightarrow(\varphi\rightarrow\forall v\psi(v))\), if \(v\) is not free in \(\varphi\), \((I_{4})\) \((\forall v,u)(v=u\to t(v)=t(u))\), \((I_{5})\) \((\forall v,u)(v=u\rightarrow(\varphi(v)\rightarrow\varphi(u)))\). The sentences of \(L\) will be interpreted in \(L\)-structures \(\mathcal{M}=\langle M,\ldots\rangle\). **Notational convention.** We keep using the notational convention introduced in the previous section, that throughout the letters \(\varphi\), \(\psi\), \(\sigma\) will denote in general formulas of \(L_{s}\), while the letters \(\alpha\), \(\beta\), \(\gamma\) are reserved for formulas of \(L\) only. The extra axioms of FOLS will be among the schemes already seen in 1.1, namely: \((S_{1})\) \(\varphi\wedge\psi\rightarrow\varphi|\psi\), \((S_{2})\) \(\varphi|\psi\rightarrow\varphi\vee\psi\), \((S_{3})\) \(\varphi|\psi\rightarrow\psi|\varphi\), \((S_{4})\) \((\varphi|\psi)|\sigma\rightarrow\varphi|(\psi|\sigma)\), \((S_{5})\) \(\varphi\wedge\neg\psi\rightarrow(\varphi|\psi\leftrightarrow\neg\varphi|\neg\psi)\). The formal systems we are going to deal with below are \(\Lambda_{0}\), \(\Lambda_{1}\), \(\Lambda_{2}\) and \(\Lambda_{3}\), defined as follows: \[\mathsf{Ax}(\Lambda_{0})=\mathsf{Ax}(\mathrm{FOL})+\{S_{1},S_{2},S_{3}\}, \quad\mathsf{IR}(\Lambda_{0})=\{MP,GR\},\] \[\mathsf{Ax}(\Lambda_{1})=\mathsf{Ax}(\Lambda_{0}),\quad\mathsf{IR}(\Lambda_{1})=\{MP,GR,SV\},\] \[\mathsf{Ax}(\Lambda_{2})=\mathsf{Ax}(\Lambda_{1})+S_{4},\quad\mathsf{IR}(\Lambda_{2})=\{MP,GR,SV\},\] \[\mathsf{Ax}(\Lambda_{3})=\mathsf{Ax}(\Lambda_{2})+S_{5},\quad\mathsf{IR}(\Lambda_{3})=\{MP,GR,SV\},\] where \(SV\) is the rule Salva Veritate mentioned in section 1.1, but with \(\Lambda_{0}\) in place of \(K_{0}\). That is: \[(SV)\qquad\text{\it from }\ \varphi\leftrightarrow\psi\ \text{\it infer}\ \varphi|\sigma\leftrightarrow\psi|\sigma,\] \[\text{if }\varphi\leftrightarrow\psi\text{ is provable in }\Lambda_{0}.\] Note that there are no new axioms for \(|\) in FOLS beyond those of PLS, which means that there is no natural interplay between \(|\) and quantifiers. In fact the connections one might consider between \(|\) and \(\forall\), e.g. \((\forall v)(\varphi|\psi)\leftrightarrow(\forall v\varphi)|(\forall v\psi)\), or \((\exists v)(\varphi|\psi)\leftrightarrow(\exists v\varphi)|(\exists v\psi)\), either do not make sense because the formulas involved are illegitimate, or are simply false in the semantics where the formulas involved are allowed (e.g. in the semantics of [10]). 
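As an illustration of why no such equivalence can be expected, consider the following small example; the language, structure and choice function here are chosen purely for illustration, and the universal quantifier is evaluated in the Tarskian way, as in the sentence choice semantics \(\models^{2}_{s}\) introduced below. Suppose \(L\) contains a unary predicate \(E\), let \(\mathcal{M}\) be the structure with universe \(\mathbb{N}\) in which \(E\) is interpreted as the set of even numbers, and let \(f\) be a choice function which, for every \(n\in\mathbb{N}\), picks from the pair \(\{E(n),\neg E(n)\}\) the sentence that is true in \(\mathcal{M}\). Then \[\langle\mathcal{M},f\rangle\models^{2}_{s}(\forall v)(E(v)|\neg E(v)),\] since for every \(n\) the collapse of \(E(n)|\neg E(n)\) is a sentence true in \(\mathcal{M}\), whereas \[\langle\mathcal{M},f\rangle\not\models^{2}_{s}((\forall v)E(v))|((\forall v)\neg E(v)),\] since the collapse of the latter sentence is one of the two false sentences \((\forall v)E(v)\), \((\forall v)\neg E(v)\), whatever \(f\) chooses from this pair. So already the left-to-right direction of \((\forall v)(\varphi|\psi)\leftrightarrow(\forall v\varphi)|(\forall v\psi)\) fails in that semantics. 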
### Candidate semantics for FOLS In [10] we developed an alternative semantics (and a slightly different formalization) for PLS, based on choice function not for pairs of sentences but for pairs of elements of a Boolean algebra \(\mathcal{B}\) where the classical sentences take truth values. We called this "Boolean-value choice semantics", or BCS for short. It turned out that this semantics can apply also to FOLS without extra pains, and with respect to this semantics the formal systems of FOLS satisfy some natural soundness and completeness results. The main question addressed in this paper is whether FOLS can admit a semantics that naturally extends and generalizes the sentence choice semantics (SCS) of [9] (based on the truth definition (1) mentioned in the previous section). "Naturally" means that the semantics will continue to consist of pairs \(\langle\mathcal{M},f\rangle\), where \(\mathcal{M}\) is an \(L\)-structure and \(f\) is a choice function for pairs of formulas/sentences of \(L\), and will follow the basic reduction of truth to the Tarskian one through the relation: \(\langle\mathcal{M},f\rangle\models_{s}\varphi\) iff \(\mathcal{M}\models\overline{f}(\varphi)\). The intricate question is about the _domain_ of the choice function \(f\). Namely, would \(f\) apply to pairs of _all_ formulas of \(L\), or only to _some_ such pairs, e.g. to pairs of sentences alone? The answer to the above question is that there can be no natural extension of SCS to a "formula choice semantics", in the sense that \(f\) is allowed to apply to pairs of _arbitrary_ formulas. Such a semantics fails badly for reasons independent of the connective \(|\), simply as a result of incompatibility between choice of formulas with free variables and corresponding choice of formulas with substituted terms. On the other hand, it is shown that a semantics with some restrictions both to the construction of formulas, as well as to the applicability of choice functions (allowing them to apply to pairs of sentences only), can work smoothly and lead to satisfactory soundness and completeness results with respect to the axiomatization of FOLS, which is essentially the same as the one considered in [10]. First let us note that in any case, whatever the domain of \(f\) would be, the collapsing map \(\overline{f}\) should satisfy conditions (i)-(iv) of Definition 1.1. So if we assume that \(f\) is defined for all pairs of quantifier-free sentences of \(L\), then conditions (i)-(iv), in combination with the truth definition (1), suffice to define \(\langle\mathcal{M},f\rangle\models_{s}\varphi\) for every quantifier-free sentence of \(L_{s}\). So the only missing step for the complete definition of \(\langle\mathcal{M},f\rangle\models_{s}\varphi\) is the definition of \(\langle\mathcal{M},f\rangle\models_{s}\forall v\varphi(v)\). For that we have two options, called _formula choice semantics_ (FCS for short) and _sentence choice semantics_ (SCS) because they are based on the use of choice functions for pairs of arbitrary formulas and for pairs of sentences alone, respectively. To distinguish them we shall use the symbols \(\models_{s}^{1}\) and \(\models_{s}^{2}\) for the resulting truth relations, respectively. **Option 1. Formula choice semantics (FCS)** Here the set of formulas \(Fml(L_{s})\) of \(L_{s}\) is defined by the usual closure steps with respect to the connectives (including \(|\)) and quantifiers and every choice function \(f\) is defined on the entire \([Fml(L)]^{2}\). 
Therefore the truth definition of quantified sentences should be as follows. \[\langle\mathcal{M},f\rangle\models_{s}^{1}\forall v\varphi(v)\Leftrightarrow \mathcal{M}\models\overline{f}(\forall v\varphi(v)). \tag{3}\] It is easy to see that this definition is meaningful and effective if and only if the collapsing mapping \(\overline{f}\) commutes with \(\forall\), i.e., if \(\overline{f}\) satisfies, in addition to conditions (i)-(iv) of Definition 1.1, the condition: (v) \(\overline{f}(\forall v\varphi)=\forall v\overline{f}(\varphi)\).1 Footnote 1: Otherwise, one cannot see how e.g. \(\overline{f}(\forall v(\alpha|\beta))\) could be defined. [Treating \(\exists\) as usual, i.e., as \(\neg\forall\neg\), it follows from (v) and (iii) of 1.1 that \(\overline{f}(\exists v\varphi)=\exists v\overline{f}(\varphi)\).] Throughout this subsection we shall often refer to conditions (i)-(iv) of 1.1 together with condition (v) above as "conditions (i)-(v)" for \(\overline{f}\). By (v), (3) becomes \[\langle{\cal M},f\rangle\models_{s}^{1}\forall v\varphi(v)\Leftrightarrow{\cal M}\models\forall v\overline{f}(\varphi(v)). \tag{4}\] The right-hand side of (4) is an instance of Tarskian satisfaction, so it holds iff \({\cal M}\models\overline{f}(\varphi(v))(x)\) is true for every \(x\in M\), where the elements of \(M\) are used as parameters added to \(L\). Therefore (4) is equivalently written \[\langle{\cal M},f\rangle\models_{s}^{1}\forall v\varphi(v)\Leftrightarrow{\cal M}\models\overline{f}(\varphi(v))(x),\mbox{ for every }x\in M. \tag{5}\] Thus (4) (or (5)) determines the truth of every sentence \(\varphi\) of \(L_{s}(M)\) in \(\langle{\cal M},f\rangle\) with respect to \(\models_{s}^{1}\). We refer to the truth relation \(\models_{s}^{1}\) (for obvious reasons) as _formula choice semantics_, or FCS for short. We shall see below, however, that FCS fails badly not with respect to the interpretation of \(|\), but because, surprisingly enough, it fails to satisfy the Universal Instantiation scheme, as a consequence of the fact that \(f\) applies to pairs of formulas with free variables. So a reasonable alternative would be to restrict \(f\) to pairs of sentences alone. **Option 2. Sentence choice semantics (SCS)** Assume now that the choice functions \(f\) are defined only for _sentences_ of \(L(M)=L\cup M\), where the latter is \(L\) augmented with the elements of \(M\) treated as parameters. We let the letters \(x,y,a,c\) range over elements of \(M\). The question is how the collapsing \(\overline{f}\) is defined in this case and for which \(\varphi\) of \(L_{s}\). For instance, what would \(\overline{f}(\forall v(\alpha(v)|\beta(v)))\) be for classical \(\alpha(v)\) and \(\beta(v)\)? Letting \(\overline{f}(\forall v(\alpha(v)|\beta(v)))=\forall v\overline{f}(\alpha(v)|\beta(v))=\forall vf(\alpha(v),\beta(v))\) is not an option since \(f\) does not apply to pairs of open formulas. The answer is simply that for \(Q\in\{\forall,\exists\}\), \[\overline{f}(Qv(\alpha(v)|\beta(v)))\mbox{ are not defined.} \tag{6}\] However this is not necessarily a dead end. It would only prompt us to define the truth of \(\forall v(\alpha(v)|\beta(v))\) in \(\langle{\cal M},f\rangle\) not through (4), but in the Tarskian way: \[\langle{\cal M},f\rangle\models_{s}\forall v(\alpha(v)|\beta(v))\Leftrightarrow\langle{\cal M},f\rangle\models_{s}(\alpha(x)|\beta(x)),\] for all \(x\in M\). 
So let us define, alternatively to (4), for every universal well-formed formula \(\forall v\varphi(v)\) of \(L_{s}\): \[\langle{\cal M},f\rangle\models_{s}^{2}\forall v\varphi(v)\Leftrightarrow\langle{\cal M},f\rangle\models_{s}^{2}\varphi(x),\mbox{ for every }x\in M. \tag{7}\] From (7), combined with clause (iii) of 1.1, clearly we have also that \[\langle{\cal M},f\rangle\models_{s}^{2}\exists v\varphi(v)\Leftrightarrow\langle{\cal M},f\rangle\models_{s}^{2}\varphi(x),\mbox{ for some }x\in M. \tag{8}\] (7) and (8) settle the definition with respect to \(\models_{s}^{2}\) of sentences that begin with a quantifier. This also implicitly suggests that for \(\varphi\) that do not begin with a quantifier, the truth of \(\varphi\) in \(\langle{\cal M},f\rangle\) should be defined by means of the collapsing map \(\overline{f}\), i.e., \[\langle{\cal M},f\rangle\models_{s}^{2}\varphi\Leftrightarrow{\cal M}\models \overline{f}(\varphi). \tag{9}\] But this will immediately lead to trouble, unless we put restrictions on the formation of formulas of \(L_{s}\). For consider, say, the sentence \((\forall v(\alpha|\beta))|(\exists u(\gamma|\delta))\). Then we should have \[\langle{\cal M},f\rangle\models_{s}^{2}(\forall v(\alpha|\beta))|(\exists u(\gamma|\delta))\Leftrightarrow{\cal M}\models\overline{f}[(\forall v(\alpha|\beta))|(\exists u(\gamma|\delta))]\Leftrightarrow\] \[{\cal M}\models f(\overline{f}(\forall v(\alpha|\beta)),\overline{f}(\exists u(\gamma|\delta))).\] But by (6) above, \(\overline{f}(\forall v(\alpha|\beta))\) and \(\overline{f}(\exists u(\gamma|\delta))\) are not defined, so the last part of the above equivalences does not make sense. The conclusion is that if we want to employ choice functions for pairs of _sentences_ only and \(\models_{s}^{2}\) obeys (7) and (8), formulas like \((\forall v(\alpha|\beta))|(\exists u(\gamma|\delta))\) should not be allowed. That is, instead of the full set of formulas \(Fml(L_{s})\) we shall consider the restricted set of formulas \(RFml(L_{s})\). The latter differs from \(Fml(L_{s})\) in that \(\varphi|\psi\) belongs to \(RFml(L_{s})\) iff \(\varphi\) and \(\psi\) are either classical or quantifier free. We shall refer to the truth relation \(\models_{s}^{2}\) as _sentence choice semantics_, or SCS, just as we did with the corresponding semantics of PLS. We shall examine \(\models_{s}^{2}\) in more detail in section 5. Obviously the two semantics based on \(\models_{s}^{1}\) and \(\models_{s}^{2}\) are not equivalent, since they apply to different sets of sentences. However, there are sentences \(\varphi\) for which both truth definitions \(\langle{\cal M},f\rangle\models_{s}^{1}\varphi\) and \(\langle{\cal M},f\rangle\models_{s}^{2}\varphi\) make sense. But in general even for such sentences the definitions do not coincide. Actually neither of them implies the other. The difference is easily detected by observing the right-hand sides of (5) and (7) for \(\varphi\) that do not begin with a quantifier. Namely, \(\overline{f}(\varphi(v))(x)\) and \(\overline{f}(\varphi(x))\) are in general inequivalent. To illustrate it, let \(\varphi(v):=\alpha(v)|\beta(v)\), where \(\alpha(v)\) and \(\beta(v)\) are formulas of \(L\). Let \({\cal M}\) be an \(L\)-structure. To compare the two approaches, we must use an \(f\) which is meaningful in both of them, i.e., one that applies to all pairs of formulas of \(L(M)\). Fix such an \(f\). Then \(f\) applies to \(\{\alpha(v),\beta(v)\}\); let \(f(\alpha(v),\beta(v))=\alpha(v)\). 
By (5), \(\langle{\cal M},f\rangle\models^{1}_{s}\forall v(\alpha(v)|\beta(v))\) iff \({\cal M}\models f(\alpha(v),\beta(v))(x)\), for all \(x\in M\), therefore \[\langle{\cal M},f\rangle\models^{1}_{s}\forall v(\alpha(v)|\beta(v)) \Leftrightarrow{\cal M}\models\alpha(x),\mbox{ for all }x\in M. \tag{10}\] On the other hand, by (7), \(\langle{\cal M},f\rangle\models^{2}_{s}\forall v(\alpha(v)|\beta(v))\) iff \(\langle{\cal M},f\rangle\models^{2}_{s}\alpha(x)|\beta(x)\), for all \(x\in M\), hence \[\langle{\cal M},f\rangle\models^{2}_{s}\forall v(\alpha(v)|\beta(v)) \Leftrightarrow{\cal M}\models f(\alpha(x),\beta(x)),\mbox{ for all }x\in M. \tag{11}\] The right-hand sides of (10) and (11) may be quite different, since the choices of \(f\) from the pairs \(\{\alpha(x),\beta(x)\}\), for the various \(x\in M\), may be non-uniform, e.g. for \(x_{1}\neq x_{2}\) we may have \(f(\alpha(x_{1}),\beta(x_{1}))=\alpha(x_{1})\) and \(f(\alpha(x_{2}),\beta(x_{2}))=\beta(x_{2})\). In order for the definitions (10) and (11) to be equivalent, \(f\) should be a _uniform choice function_, i.e., \(f(\alpha(\vec{v}),\beta(\vec{v}))=\alpha(\vec{v})\) should imply \(f(\alpha(\vec{t}),\beta(\vec{t}))=\alpha(\vec{t})\) for all pairs of formulas \(\{\alpha(\vec{v}),\beta(\vec{v})\}\) and every tuple of terms \(\vec{t}\) that can be substituted for \(\vec{v}\). However, as we shall prove in section 4, no choice function \(f:[Fml(L)]^{2}\to Fml(L)\) can have this property. ## 3 The formula choice semantics (FCS) and the failure of universal instantiation Since FOLS extends FOL, any proper semantics for FOLS should first of all satisfy the quantifier axioms of FOL, namely _UI_ and \(D\). In this section we show that unfortunately (and rather unexpectedly) FCS fails to satisfy \(UI\). Firstly recall that given a language \(L\), whenever we write \(\varphi(\vec{v})\), for a formula of \(L_{s}\), we mean that the free variables of \(\varphi\) are _among_ those of the tuple \(\vec{v}\). Then the following can be easily verified by induction on the length of \(\varphi\). **Fact 3.1**: _For every choice function \(f\) for pairs of formulas and every \(\varphi\in Fml(L_{s})\), the free variables of \(\overline{f}(\varphi)\) are included in those of \(\varphi\), i.e., \(FV(\overline{f}(\varphi))\)\(\subseteq FV(\varphi)\). In particular, if \(\varphi\) is a sentence of \(L_{s}\), then \(\overline{f}(\varphi)\) is a sentence of \(L\)._ [In general, \(FV(\overline{f}(\varphi))\varsubsetneq FV(\varphi)\), since, for example, we may have \(\varphi(v_{1},v_{2})=\alpha(v_{1})|\beta(v_{2})\) and \(f(\alpha(v_{1}),\beta(v_{2}))=\alpha(v_{1})\), so \(\overline{f}(\varphi)=f(\alpha(v_{1}),\beta(v_{2}))=\alpha(v_{1})\).] It follows from this Fact that the variables of \(\overline{f}(\varphi)\) are among the variables of \(\varphi\), so we may write for every \(\varphi(\vec{v})\): \[\overline{f}(\varphi(\vec{v}))=\overline{f}(\varphi)(\vec{v}). \tag{12}\] **Fact 3.2**: _The scheme \(D\) is a tautology with respect to FCS._ _Proof._ Take an instance of \(D\) \[\sigma:(\forall v)(\varphi\to\psi(v))\to(\varphi\to(\forall v)\psi(v)),\] where \(\varphi\) does not contain \(v\) free, and take an arbitrary choice function satisfying conditions (i)-(v). Then clearly applying these conditions we have \[\overline{f}(\sigma)=[(\forall v)(\overline{f}(\varphi)\to\overline{f}(\psi( v)))\to(\overline{f}(\varphi)\to(\forall v)\overline{f}(\psi(v)))].\] Let \(\overline{f}(\varphi)=\alpha\) and \(\overline{f}(\psi(v))=\beta(v)\). 
Then the last formula is written \[\overline{f}(\sigma)=[(\forall v)(\alpha\to\beta(v))\to(\alpha\to(\forall v)\beta(v))].\] By assumption \(v\not\in FV(\varphi)\), and by Fact 3.1, \(FV(\overline{f}(\varphi))\subseteq FV(\varphi)\), so \(v\notin FV(\alpha)\), therefore \(\overline{f}(\sigma)\) is an instance of the scheme \(D\) of FOL, so it holds in every \(L\)-structure \({\cal M}\). Therefore \({\cal M}\models\overline{f}(\sigma)\), or equivalently \(\langle{\cal M},f\rangle\models_{s}^{1}\sigma\). \(\dashv\) However the situation is quite different for the scheme _UI_. The next theorem shows that, under mild conditions for \(L\), there is no choice function \(f\) for \(L\) with respect to which _UI_ could be a scheme of tautologies. **Theorem 3.3**: _Let \(L\) be a language with at least two distinct closed terms \(t_{1}\), \(t_{2}\), and a formula \(\alpha(v)\) in one free variable such that both \(\alpha(t_{1})\wedge\neg\alpha(t_{2})\) and \(\neg\alpha(t_{1})\wedge\alpha(t_{2})\) are satisfiable. Then for every choice function \(f\) for \(Fml(L)\) there is an \(L\)-structure \({\cal M}\) and a formula \(\psi(v_{1},v_{2})\) of \(L_{s}\) with two free variables for which UI fails in \(\langle{\cal M},f\rangle\), i.e., such that \(\langle{\cal M},f\rangle\models_{s}^{1}(\forall v_{1},v_{2})\psi(v_{1},v_{2})\wedge\neg\psi(t_{1},t_{2})\)._ _Proof._ [Note first that the conditions required for \(L\) in the above theorem are quite weak. E.g. any \(L\) containing three distinct constants \(c_{1},c_{2},c_{3}\) satisfies them. For if we set \(\alpha(v)=(v=c_{3})\), then \(\alpha(c_{1})\wedge\neg\alpha(c_{2})\) and \(\neg\alpha(c_{1})\wedge\alpha(c_{2})\) are both satisfiable in \(L\)-structures.] Now let \(L\), \(t_{1}\), \(t_{2}\) and \(\alpha(v)\) be as stated. Then there are structures \({\cal M}_{1}\), \({\cal M}_{2}\) such that \({\cal M}_{1}\models\alpha(t_{1})\wedge\neg\alpha(t_{2})\) and \({\cal M}_{2}\models\neg\alpha(t_{1})\wedge\alpha(t_{2})\). Pick a choice function \(f\) for \(Fml(L)\). It suffices to show that there is a formula \(\psi(v_{1},v_{2})\) of \(L_{s}\) such that either \(\langle\mathcal{M}_{1},f\rangle\models^{1}_{s}(\forall v_{1},v_{2})\psi(v_{1},v_{2})\wedge\neg\psi(t_{1},t_{2})\), or \(\langle\mathcal{M}_{2},f\rangle\models^{1}_{s}(\forall v_{1},v_{2})\psi(v_{1},v_{2})\wedge\neg\psi(t_{1},t_{2})\). Let \(\alpha(v_{2})\) be the formula resulting from \(\alpha(v_{1})\) if we replace \(v_{1}\) by the new variable \(v_{2}\). We examine how \(f\) acts on the pairs of formulas \(\{\alpha(v_{1}),\alpha(v_{2})\}\) and \(\{\alpha(t_{1}),\alpha(t_{2})\}\) and consider the four possible cases. _Case 1._ \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{1})\) and \(f(\alpha(t_{1}),\alpha(t_{2}))=\alpha(t_{1})\). By assumption \(\mathcal{M}_{2}\models\neg\alpha(t_{1})\wedge\alpha(t_{2})\). 
Arguing in the standard logic FOL, this can be written as follows: \[\mathcal{M}_{2}\models(\forall v_{1}v_{2})[v_{1}=t_{2}\wedge v_{2}=t_{1}\rightarrow\alpha(v_{1})]\wedge\neg[t_{2}=t_{2}\wedge t_{1}=t_{1}\rightarrow\alpha(t_{1})],\] or \[\mathcal{M}_{2}\models(\forall v_{1}v_{2})[v_{1}=t_{2}\wedge v_{2}=t_{1}\to f(\alpha(v_{1}),\alpha(v_{2}))]\wedge\] \[\neg[t_{2}=t_{2}\wedge t_{1}=t_{1}\to f(\alpha(t_{1}),\alpha(t_{2}))],\] or, since \(|\) is commutative, \[\langle\mathcal{M}_{2},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})[v_{1}=t_{2}\wedge v_{2}=t_{1}\rightarrow\alpha(v_{1})|\alpha(v_{2})]\wedge\] \[\neg[t_{2}=t_{2}\wedge t_{1}=t_{1}\rightarrow\alpha(t_{2})|\alpha(t_{1})].\] Setting \[\psi(v_{1},v_{2}):=[v_{1}=t_{2}\wedge v_{2}=t_{1}\rightarrow\alpha(v_{1})|\alpha(v_{2})],\] the last relation is written \[\langle\mathcal{M}_{2},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})\psi(v_{1},v_{2})\wedge\neg\psi(t_{2},t_{1}),\] thus _UI_ fails in \(\langle\mathcal{M}_{2},f\rangle\). _Case 2._ \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{1})\) and \(f(\alpha(t_{1}),\alpha(t_{2}))=\alpha(t_{2})\). Now we use the fact that \(\mathcal{M}_{1}\models\alpha(t_{1})\wedge\neg\alpha(t_{2})\). As before this is written equivalently, \[\mathcal{M}_{1}\models(\forall v_{1}v_{2})[v_{1}=t_{1}\wedge v_{2}=t_{2}\rightarrow\alpha(v_{1})]\wedge\neg[t_{1}=t_{1}\wedge t_{2}=t_{2}\rightarrow\alpha(t_{2})],\] or \[\mathcal{M}_{1}\models(\forall v_{1}v_{2})[v_{1}=t_{1}\wedge v_{2}=t_{2}\to f(\alpha(v_{1}),\alpha(v_{2}))]\wedge\] \[\neg[t_{1}=t_{1}\wedge t_{2}=t_{2}\to f(\alpha(t_{1}),\alpha(t_{2}))],\] or \[\langle{\cal M}_{1},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})[v_{1}=t_{1}\wedge v_{2}=t_{2}\to\alpha(v_{1})|\alpha(v_{2})]\wedge\] \[\neg[t_{1}=t_{1}\wedge t_{2}=t_{2}\to\alpha(t_{1})|\alpha(t_{2})].\] Thus putting \[\psi(v_{1},v_{2}):=[v_{1}=t_{1}\wedge v_{2}=t_{2}\to\alpha(v_{1})|\alpha(v_{2})],\] we are done. _Case 3._ \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{2})\) and \(f(\alpha(t_{1}),\alpha(t_{2}))=\alpha(t_{1})\). We use again the fact that \({\cal M}_{2}\models\neg\alpha(t_{1})\wedge\alpha(t_{2})\), which yields as before \[{\cal M}_{2}\models(\forall v_{1}v_{2})[v_{1}=t_{1}\wedge v_{2}=t_{2}\to\alpha(v_{2})]\wedge\neg[t_{1}=t_{1}\wedge t_{2}=t_{2}\to\alpha(t_{1})],\] or \[\langle{\cal M}_{2},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})[v_{1}=t_{1}\wedge v_{2}=t_{2}\to\alpha(v_{1})|\alpha(v_{2})]\wedge\] \[\neg[t_{1}=t_{1}\wedge t_{2}=t_{2}\to\alpha(t_{1})|\alpha(t_{2})].\] So setting \[\psi(v_{1},v_{2}):=[v_{1}=t_{1}\wedge v_{2}=t_{2}\to\alpha(v_{1})|\alpha(v_{2})]\] we are done. _Case 4._ \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{2})\) and \(f(\alpha(t_{1}),\alpha(t_{2}))=\alpha(t_{2})\). We use the fact that \({\cal M}_{1}\models\alpha(t_{1})\wedge\neg\alpha(t_{2})\) which translates into \[{\cal M}_{1}\models(\forall v_{1}v_{2})[v_{1}=t_{2}\wedge v_{2}=t_{1}\to\alpha(v_{2})]\wedge\neg[t_{2}=t_{2}\wedge t_{1}=t_{1}\to\alpha(t_{2})],\] or \[\langle{\cal M}_{1},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})[v_{1}=t_{2}\wedge v_{2}=t_{1}\to\alpha(v_{1})|\alpha(v_{2})]\wedge\] \[\neg[t_{2}=t_{2}\wedge t_{1}=t_{1}\to\alpha(t_{2})|\alpha(t_{1})].\] Setting \[\psi(v_{1},v_{2}):=[v_{1}=t_{2}\wedge v_{2}=t_{1}\to\alpha(v_{1})|\alpha(v_{2})]\] we are done. This completes the proof. \(\dashv\) Equivalent to _UI_ (in FOL, hence also in FOLS) is the dual axiom of _Existential Generalization (EG):_ \[(EG)\quad\varphi(t)\to(\exists v)\varphi(v).\] Therefore Theorem 3.3 is equivalently formulated as follows. 
**Corollary 3.4**: _Given any language \(L\) as above, for every choice function \(f\) for \(Fml(L)\) there is \(\mathcal{M}\) such that EG fails in \(\langle\mathcal{M},f\rangle\), namely, there is a formula \(\varphi(v_{1},v_{2})\) and closed terms \(t_{1},t_{2}\) such that \(\langle\mathcal{M},f\rangle\models^{1}_{s}\varphi(t_{1},t_{2})\wedge\neg(\exists v_{1},v_{2})\varphi(v_{1},v_{2})\)._ In Theorem 3.3 the formula(s) \(\psi(v_{1},v_{2})\) used to refute _UI_ contain two free variables. We do not know if it is possible to refute _UI_ using a formula with a single free variable. Also in the proof of 3.3 we used a superposed formula of the form \(\alpha(v_{1})|\alpha(v_{2})\), which looks somewhat artificial. Can we show the failure of \(UI\) using a superposition of the form \(\alpha(\vec{v})|\beta(\vec{v})\) where \(\alpha(\vec{v})\) and \(\beta(\vec{v})\) are distinct formulas? The answer is yes. Specifically, by essentially the same argument we can prove the following variant of Theorem 3.3. **Theorem 3.5**: _Let \(L\) be a language and assume that there exist formulas \(\alpha(\vec{v})\), \(\beta(\vec{v})\) and corresponding tuples of closed terms \(\vec{t}\), \(\vec{s}\) such that:_ _(a) \(\alpha(\vec{s})=\beta(\vec{t})\),_ _(b) \(\alpha(\vec{t})=\beta(\vec{s})\),_ _(c) \(\alpha(\vec{t})\wedge\neg\beta(\vec{t})\) and \(\neg\alpha(\vec{t})\wedge\beta(\vec{t})\) are satisfiable._ _Then for every choice function \(f\) there is a structure \(\mathcal{M}\), a formula \(\psi(\vec{v})\) and closed terms \(\vec{t}\) such that \(\langle\mathcal{M},f\rangle\models^{1}_{s}(\forall\vec{v})\psi(\vec{v})\wedge\neg\psi(\vec{t})\)._ _Proof._ [For example if \(<\) is a binary relation of \(L\), and \(\alpha(v_{1},v_{2}):=(v_{1}<v_{2})\), \(\beta(v_{1},v_{2}):=(v_{1}>v_{2})\), \(\vec{t}=\langle t_{1},t_{2}\rangle\) and \(\vec{s}=\langle t_{2},t_{1}\rangle\), then \(\alpha\), \(\beta\), \(\vec{t}\) and \(\vec{s}\) satisfy conditions (a)-(c) above.] The argument goes exactly as in the proof of Theorem 3.3. We fix structures \(\mathcal{M}_{1}\), \(\mathcal{M}_{2}\) such that \(\mathcal{M}_{1}\models\alpha(\vec{t})\wedge\neg\beta(\vec{t})\) and \(\mathcal{M}_{2}\models\neg\alpha(\vec{t})\wedge\beta(\vec{t})\). Pick a choice function \(f\) for \(Fml(L)\). It suffices to show that there is a formula \(\psi(\vec{v})\) of \(L_{s}\) such that either \(\langle\mathcal{M}_{1},f\rangle\models^{1}_{s}(\forall\vec{v})\psi(\vec{v})\wedge\neg\psi(\vec{r})\), or \(\langle\mathcal{M}_{2},f\rangle\models^{1}_{s}(\forall\vec{v})\psi(\vec{v})\wedge\neg\psi(\vec{r})\), for \(\vec{r}=\vec{t}\) or \(\vec{r}=\vec{s}\). As before we examine how \(f\) acts on the pairs of formulas \(\{\alpha(\vec{v}),\beta(\vec{v})\}\), and \(\{\alpha(\vec{t}),\beta(\vec{t})\}=\{\alpha(\vec{s}),\beta(\vec{s})\}\), and we examine the four possible cases that arise as before. Namely: _Case 1._ \(f(\alpha(\vec{v}),\beta(\vec{v}))=\alpha(\vec{v})\) and \(f(\alpha(\vec{t}),\beta(\vec{t}))=\alpha(\vec{t})\). _Case 2._ \(f(\alpha(\vec{v}),\beta(\vec{v}))=\alpha(\vec{v})\) and \(f(\alpha(\vec{t}),\beta(\vec{t}))=\beta(\vec{t})\). _Case 3._ \(f(\alpha(\vec{v}),\beta(\vec{v}))=\beta(\vec{v})\) and \(f(\alpha(\vec{t}),\beta(\vec{t}))=\alpha(\vec{t})\). _Case 4._ \(f(\alpha(\vec{v}),\beta(\vec{v}))=\beta(\vec{v})\) and \(f(\alpha(\vec{t}),\beta(\vec{t}))=\beta(\vec{t})\). In each of these cases we work as in the corresponding case of the proof of 3.3. Details are left to the reader. 
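To make the failure of _UI_ fully concrete, here is a minimal worked instance of Case 2 of Theorem 3.3; the particular constants and structure are chosen only for illustration. Let \(L\) contain constants \(c_{1},c_{2},c_{3}\), let \(\alpha(v):=(v=c_{3})\), and let \(\mathcal{M}_{1}\) interpret \(c_{1}\) and \(c_{3}\) by the same element and \(c_{2}\) by a different one, so that \(\mathcal{M}_{1}\models\alpha(c_{1})\wedge\neg\alpha(c_{2})\). Suppose the given choice function happens to satisfy \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{1})\) and \(f(\alpha(c_{1}),\alpha(c_{2}))=\alpha(c_{2})\) (if it behaves differently, one of the other cases applies instead), and put \[\psi(v_{1},v_{2}):=[v_{1}=c_{1}\wedge v_{2}=c_{2}\rightarrow\alpha(v_{1})|\alpha(v_{2})].\] Then \(\overline{f}(\psi(v_{1},v_{2}))=[v_{1}=c_{1}\wedge v_{2}=c_{2}\rightarrow\alpha(v_{1})]\), whose universal closure holds in \(\mathcal{M}_{1}\) because \(\mathcal{M}_{1}\models\alpha(c_{1})\), while \(\overline{f}(\psi(c_{1},c_{2}))=[c_{1}=c_{1}\wedge c_{2}=c_{2}\rightarrow\alpha(c_{2})]\) is false in \(\mathcal{M}_{1}\). Hence \(\langle\mathcal{M}_{1},f\rangle\models^{1}_{s}(\forall v_{1}v_{2})\psi(v_{1},v_{2})\wedge\neg\psi(c_{1},c_{2})\), which is exactly the failure of _UI_ exhibited above. 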
Closing this section, let us remark that the failure of \(UI\) is _fatal_ for any quantified logical system like \(\Lambda\), in the sense that there can be no reasonable "weakening" of \(\Lambda\) in which \(\forall\) is still in use while \(UI\) fails. For the failure of \(UI\) is quite different from the failure e.g. of the Excluded Middle (EM), which has led to a logic weaker than the classical one and yet quite interesting. The reason is that \(UI\) expresses exactly the meaning of "all", as a fundamental logical constant, while EM does not express the meaning of any logical constant. ## 4 The impossibility of uniform choice functions At the end of section 2.2, comparing the truth relations \(\models_{s}^{1}\) and \(\models_{s}^{2}\), we said that the two notions of truth deviate even for choice functions \(f\) that are defined in both semantics, because \(f\) cannot be _uniform_ when considered as a choice function in FCS. Let us make this claim precise. **Definition 4.1**: A choice function \(f:[Fml(L)]^{2}\to Fml(L)\) is said to be _uniform_ if for any two formulas \(\alpha(\vec{v})\), \(\beta(\vec{v})\), with \(\vec{v}\) free, and any tuple \(\vec{t}\) of terms substitutable for \(\vec{v}\) in \(\alpha,\beta\), \(f(\alpha(\vec{v}),\beta(\vec{v}))\sim\alpha(\vec{v})\) implies \(f(\alpha(\vec{t}),\beta(\vec{t}))\sim\alpha(\vec{t})\), or equivalently, if the following equivalence holds: \[[f(\alpha(\vec{v}),\beta(\vec{v}))](\vec{t})\sim f(\alpha(\vec{t}),\beta(\vec{ t})). \tag{13}\] **Note.** The reason for writing \(\sim\) instead of \(=\) in condition (13) above is the need to cover the situation where \(\alpha(\vec{v})\sim\beta(\vec{v})\). In this case also \(\alpha(\vec{t})\sim\beta(\vec{t})\), and the choice from \(\{\alpha(\vec{v}),\beta(\vec{v})\}\), as well as from \(\{\alpha(\vec{t}),\beta(\vec{t})\}\), is indifferent. So if \(f(\alpha(\vec{v}),\beta(\vec{v}))=\alpha(\vec{v})\), while \(f(\alpha(\vec{t}),\beta(\vec{t}))=\beta(\vec{t})\), then \(f(\alpha(\vec{t}),\beta(\vec{t}))\neq\alpha(\vec{t})\) while \(f(\alpha(\vec{t}),\beta(\vec{t}))\sim\alpha(\vec{t})\). Unfortunately, no choice function \(f:[Fml(L)]^{2}\to Fml(L)\) can be uniform, for any first-order language \(L\), so the definition 4.1 is void. **Proposition 4.2**: _For any language \(L\) there is no uniform choice function for \(L\)._ _Proof._ Let \(L\) be any first-order language. Clearly we can pick a formula \(\alpha(v)\) and variables \(v_{1},v_{2}\) such that \(\alpha(v_{1})\not\sim\alpha(v_{2})\). Suppose \(f\) is a uniform choice function for \([Fml(L)]^{2}\), that is \(f\) satisfies (13). In particular this holds for the pair \(\{\alpha(v_{1}),\alpha(v_{2})\}\) and the tuple of terms \(\vec{t}=\langle v_{2},v_{1}\rangle\). Assume without loss of generality that \(f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{1})\). Then \[[f(\alpha(v_{1}),\alpha(v_{2}))](\vec{t})=\alpha(v_{1})(\vec{t})=\alpha(v_{2}).\] By (13) \[[f(\alpha(v_{1}),\alpha(v_{2}))](\vec{t})\sim f(\alpha(v_{1})(\vec{t}),\alpha(v_{ 2})(\vec{t}))=f(\alpha(v_{2}),\alpha(v_{1})),\] therefore, by the above relations \[f(\alpha(v_{2}),\alpha(v_{1}))\sim\alpha(v_{2}). \tag{14}\] But \(f(\alpha(v_{2}),\alpha(v_{1}))=f(\alpha(v_{1}),\alpha(v_{2}))=\alpha(v_{1})\) by our assumption. So \(\alpha(v_{1})\sim\alpha(v_{2})\), a contradiction. 
\(\dashv\) ## 5 The sentence choice semantics (SCS) for first-order superposition logic We come now to examine the semantics SCS for FOLS based on the truth relation \(\models^{2}_{s}\) roughly described as Option 2 in section 2. As already said there, this semantics presumes that a restriction is imposed on the syntax of \(L_{s}\), namely that \(|\) should not apply to quantified formulas, unless they are classical. So below we shall deal with a class of formulas of \(L_{s}\), called "restricted formulas/sentences". To define them we first define the class of "basic formulas/sentences". **Definition 5.1**: Let \(L\) be a first-order language and let \({\cal M}=\langle M,\ldots\rangle\) be an \(L\)-structure. (i) The set \(BFml(L_{s}(M))\) of _basic formulas_ of \(L_{s}(M)\) is the smallest set of formulas \(X\) such that (a) \(Fml(L(M))\subset X\) and (b) \(X\) is closed with respect to the connectives \(\wedge\), \(\vee\), \(\rightarrow\), \(\leftrightarrow\), \(|\) and \(\neg\) (but not with respect to quantifiers). The set \(BSen(L_{s}(M))\) of _basic sentences_ of \(L_{s}(M)\) is the subset of \(BFml(L_{s}(M))\) of formulas without free variables. (ii) The set \(RFml(L_{s}(M))\) of _restricted formulas_ of \(L_{s}(M)\) is the smallest set of formulas \(X\) such that (a) \(BFml(L_{s}(M))\subset X\) and (b) \(X\) is closed with respect to \(\wedge\), \(\vee\), \(\rightarrow\), \(\leftrightarrow\), \(\neg\) and \(\forall\), \(\exists\) (but not with respect to \(|\)). The set \(RSen(L_{s}(M))\) of _restricted sentences_ of \(L_{s}(M)\) is the subset of \(RFml(L_{s}(M))\) of formulas without free variables. We come next to choice functions. In contrast to the choice functions used in FCS, the choice functions \(f\) of SCS apply only to pairs of sentences of \(L(M)\), i.e., \(f:[Sen(L(M))]^{2}\to Sen(L(M))\). Let us denote by \({\cal F}_{M}\) the class of all these functions. Further, SCS differs from FCS in that the collapsing function \(\overline{f}\) induced by \(f\) will be defined only for _basic sentences_, i.e., for elements of \(BSen(L_{s}(M))\). **Definition 5.2**: Given \(f:[Sen(L(M))]^{2}\to Sen(L(M))\), the function \(\overline{f}:BSen(L_{s}(M))\to Sen(L(M))\) is defined along the clauses of Definition 1.1 as follows: (i) \(\overline{f}(\alpha)=\alpha\), for \(\alpha\in Sen(L(M))\), (ii) \(\overline{f}(\varphi\wedge\psi)=\overline{f}(\varphi)\wedge\overline{f}(\psi)\), (iii) \(\overline{f}(\neg\varphi)=\neg\overline{f}(\varphi)\), (iv) \(\overline{f}(\varphi|\psi)=f(\overline{f}(\varphi),\overline{f}(\psi))\). It is easy to check that this definition is well posed, and yields \(\overline{f}(\varphi)\) for every \(\varphi\in BSen(L_{s}(M))\). Especially concerning step (iv), note that if \(\varphi|\psi\) belongs to \(BSen(L_{s}(M))\), then so do \(\varphi\) and \(\psi\), hence \(\overline{f}(\varphi)\) and \(\overline{f}(\psi)\) are defined and are sentences of \(L(M)\). Therefore \(f(\overline{f}(\varphi),\overline{f}(\psi))\) is defined too. We come to the truth definition \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi\), where \({\cal M}\) is an \(L\)-structure, \(f\in{\cal F}_{M}\) and \(\varphi\in RSen(L_{s}(M))\). **Definition 5.3**: \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi\) is defined by induction on the length of \(\varphi\) along the following clauses. (We think of \(\wedge\), \(\neg\), \(|\) and \(\forall\) as basic connectives, the others being thought of as abbreviations.) 
(i) \(\langle{\cal M},f\rangle\models^{2}_{s}\alpha\) iff \({\cal M}\models\alpha\), for \(\alpha\in Sen(L(M))\). (ii) \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi\wedge\psi\) iff \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi\) and \(\langle{\cal M},f\rangle\models^{2}_{s}\psi\). (iii) \(\langle{\cal M},f\rangle\models^{2}_{s}\neg\varphi\) iff \(\langle{\cal M},f\rangle\not\models^{2}_{s}\varphi\). (iv) \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi|\psi\) iff \({\cal M}\models f(\overline{f}(\varphi),\overline{f}(\psi))\). (v) \(\langle{\cal M},f\rangle\models^{2}_{s}(\forall v)\varphi(v)\) iff \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi(x)\) for every \(x\in M\). It is easy to check that the above definition assigns a unique truth value to every \(\varphi\in RSen(L_{s}(M))\). Specifically, clauses (i)-(iv) attribute truth values to all basic sentences, while clause (v) is needed for quantified (non-classical) sentences. Given a class \(X\subseteq{\cal F}\), the \(X\)-logical consequence relation \(\models^{2}_{X}\) and the notion of \(X\)-tautology, \(\models^{2}_{X}\varphi\), are defined as usual. We denote by \(Taut^{2}_{X}\) the set of \(\models^{2}_{X}\)-tautologies. We denote again by \(Asso\), \(Reg\), \(Reg^{*}\) and \(Dec\) the classes of associative, regular, regular and associative, and regular, associative and \(\neg\)-decreasing elements of \({\cal F}\). In particular we have \[Dec\subset Reg^{*}\subset Reg\subset{\cal F},\] hence \[Taut({\cal F})\subseteq Taut(Reg)\subseteq Taut(Reg^{*})\subseteq Taut(Dec).\] Given a class \(X\subseteq{\cal F}\) and a formal system \(\Lambda\) consisting of axioms (a set of \(\models^{2}_{X}\)-tautologies) and rules of inference, we shall denote by \[{\rm RFOLS}(X,\Lambda)\] the logical system having as usual semantic part \(X\) and syntactic part \(\Lambda\) (the prefix "R" is a reminder that we work with a restricted class of sentences of \(L_{s}\)). ### Soundness The formal systems \(\Lambda_{0}\), \(\Lambda_{1}\), \(\Lambda_{2}\) and \(\Lambda_{3}\), described in section 2.1, are going to formalize the classes \({\cal F}\), \(Reg\), \(Reg^{*}\) and \(Dec\) of choice functions. So we shall be dealing with the logics \[{\rm RFOLS}({\cal F},\Lambda_{0}),\ {\rm RFOLS}(Reg,\Lambda_{1}),\ {\rm RFOLS}(Reg^{*},\Lambda_{2}),\ {\rm RFOLS}(Dec,\Lambda_{3}).\] Since we work with restricted formulas only we must be careful with the syntax of the systems \(\Lambda_{i}\) above. Namely the following remarks are in order. 1) The formulas that can be substituted in the axiom schemes \(S_{i}\) above must be restricted, that is \(S_{i}\subset RFml(L_{s})\). 2) Whenever we write \(\Sigma\vdash_{\Lambda_{i}}\varphi\), it is implicitly assumed that \(\Sigma\cup\{\varphi\}\subset RFml(L_{s})\). 3) Next the rule \(SV\) says that if \(\varphi\leftrightarrow\psi\) is provable in \(\Lambda_{0}\), then we can derive that \(\varphi|\sigma\leftrightarrow\psi|\sigma\). But since \(|\) applies only to non-quantified formulas (unless they are classical), \(\varphi\) and \(\psi\), hence also \(\varphi\leftrightarrow\psi\), and \(\sigma\) must be basic formulas. 4) Given that the above conditions are satisfied, if \(\varphi_{1},\ldots,\varphi_{n}\) is a \(\Lambda_{i}\)-proof of \(\varphi\) from \(\Sigma\), then every \(\varphi_{i}\) is restricted. Most of the results given in this and later subsections have proofs similar to proofs of corresponding results of [10]. 
Most of the results given in this and later subsections have proofs similar to proofs of corresponding results of [10]. However, the adaptations needed, especially in the proofs of completeness theorems, are rather extensive and so we give them here in full detail.

**Theorem 5.4**: _Let \(X\subseteq{\cal F}\). If \(\Lambda\) is a system such that \({\sf Ax}(\Lambda)\subset Taut(X)\) and \({\sf IR}(\Lambda)=\{\mbox{MP, GR}\}\), then \({\rm RFOLS}(X,\Lambda)\) is sound. In particular \({\rm RFOLS}({\cal F},\Lambda_{0})\) is sound._

_Proof._ Let \(X\), \(\Lambda\) be as stated and let \(\Sigma\vdash_{\Lambda}\varphi\), for a set of sentences \(\Sigma\) and a sentence \(\varphi\). Let \(\varphi_{1},\ldots,\varphi_{n}\), where \(\varphi_{n}=\varphi\), be a \(\Lambda\)-proof of \(\varphi\). As usual we show that \(\Sigma\models^{2}_{X}\varphi_{i}\), for every \(1\leq i\leq n\), by induction on \(i\). Given \(i\), suppose the claim holds for all \(j<i\), and let \(\langle{\cal M},f\rangle\models_{s}^{2}\Sigma\), for some \(L\)-structure \({\cal M}\) and \(f\in X\). We show that \(\langle{\cal M},f\rangle\models_{s}^{2}\varphi_{i}\). If \(\varphi_{i}\in\Sigma\), this is obvious. If \(\varphi_{i}\in{\sf Ax}(\Lambda)\), then \(\langle{\cal M},f\rangle\models_{s}^{2}\varphi_{i}\), because by assumption \({\sf Ax}(\Lambda)\subset Taut(X)\) and \(f\in X\). Next suppose \(\varphi_{i}\) is derived with the help of \(MP\). Then there are sentences \(\varphi_{j}\), \(\varphi_{k}=(\varphi_{j}\rightarrow\varphi_{i})\), for some \(j,k<i\). By the induction assumption, \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{j}\) and \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{k}\). Therefore \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{i}\). Finally let \(\varphi_{i}\) be derived with the help of \(GR\), i.e., there is \(j<i\) and \(\varphi_{j}(v)\) such that \(\varphi_{i}=(\forall v)\varphi_{j}(v)\). (\(\Sigma\) is a set of sentences so \(v\) does not occur free in \(\Sigma\).) By the induction assumption \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{j}(x)\) for every \(x\in M\). Then by the definition of \(\models^{2}_{s}\), \(\langle{\cal M},f\rangle\models^{2}_{s}(\forall v)\varphi_{j}(v)\). Therefore \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{i}\). \(\dashv\)

In contrast to \(\Lambda_{0}\), the formal systems \(\Lambda_{i}\) for \(i=1,2,3\) contain in addition the rule \(SV\), already mentioned in sections 1.1 and 2.1. Since, however, we are working in a language with syntactic restrictions, we must specify it even more concretely. Recall that \(\varphi|\psi\) makes sense only if \(\varphi\) and \(\psi\) are basic formulas, so \(SV\) takes here the form: \[(SV)\ \ \mbox{\it For}\ \ \varphi,\psi,\sigma\in BFml(L_{s}),\ \mbox{if}\ \ \varphi\leftrightarrow\psi\ \mbox{is provable in}\ \Lambda_{0}\] \[\mbox{\it infer that}\ \ \ \varphi|\sigma\leftrightarrow\psi|\sigma.\]

**Theorem 5.5**: _Let \(X\subseteq Reg\). If \(\Lambda\) is a system such that \({\sf Ax}(\Lambda)\subset Taut(X)\) and \({\sf IR}(\Lambda)=\{\mbox{\it MP},GR,SV\}\), then \({\rm RFOLS}(X,\Lambda)\) is sound. In particular_ \({\rm RFOLS}(Reg,\Lambda_{1})\)_, \({\rm RFOLS}(Reg^{*},\Lambda_{2})\) and \({\rm RFOLS}(Dec,\Lambda_{3})\) are sound._

_Proof._ Let \(X\subseteq Reg\), \({\sf Ax}(\Lambda)\subset Taut(X)\) and \({\sf IR}(\Lambda)=\{\mbox{\it MP},GR,SV\}\), and let \(\Sigma\vdash_{\Lambda}\varphi\). Let \(\varphi_{1},\ldots,\varphi_{n}\), where \(\varphi_{n}=\varphi\), be a \(\Lambda\)-proof of \(\varphi\). We show, by induction on \(i\), that for all \(i=1,\ldots,n\), \(\Sigma\models^{2}_{X}\varphi_{i}\).
Let \(\langle{\cal M},f\rangle\models^{2}_{s}\Sigma\), with \(f\in X\). Given \(\varphi_{i}\), the proof that \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{i}\) (given the induction assumption) goes exactly as in the proof of Theorem 5.4, except for the case where \(\varphi_{i}\) follows from a sentence \(\varphi_{j}\), for \(j<i\), by the rule \(SV\). It means that \(\varphi_{i}=(\sigma|\tau\leftrightarrow\rho|\tau)\) while \(\varphi_{j}=(\sigma\leftrightarrow\rho)\), where \(\vdash_{\Lambda_{0}}(\sigma\leftrightarrow\rho)\). Moreover \(\sigma\), \(\rho\) and \(\tau\) are basic sentences. Now \(\Lambda_{0}\) is a system satisfying the conditions of 5.4 above for \(X={\cal F}\), so \(\models^{2}_{\cal F}(\sigma\leftrightarrow\rho)\). It means that for every \(L\)-structure \({\cal N}\) and every \(g\in{\cal F}\), \(\langle{\cal N},g\rangle\models^{2}_{s}(\sigma\leftrightarrow\rho)\). Since \(\sigma\), \(\rho\) and \(\tau\) are basic sentences, \(\overline{g}(\sigma)\), \(\overline{g}(\rho)\) and \(\overline{g}(\tau)\) are defined and moreover \(\langle{\cal N},g\rangle\models^{2}_{s}(\sigma\leftrightarrow\rho)\) is equivalent to \({\cal N}\models\overline{g}(\sigma)\leftrightarrow\overline{g}(\rho)\). Since this holds for every \({\cal N}\), \(\overline{g}(\sigma)\leftrightarrow\overline{g}(\rho)\) is a classical tautology, or \(\overline{g}(\sigma)\sim\overline{g}(\rho)\), for every \(g\in{\cal F}\). In particular, \(\overline{f}(\sigma)\sim\overline{f}(\rho)\). Now since \(X\subseteq Reg\), \(f\in X\) implies \(f\) is regular. Therefore \(\overline{f}(\sigma)\sim\overline{f}(\rho)\) implies that \(f(\overline{f}(\sigma),\overline{f}(\tau))\sim f(\overline{f}(\rho),\overline{f}(\tau))\), or \(\overline{f}(\sigma|\tau)\sim\overline{f}(\rho|\tau)\), therefore \({\cal M}\models\overline{f}(\sigma|\tau)\leftrightarrow\overline{f}(\rho|\tau)\), or \(\langle{\cal M},f\rangle\models^{2}_{s}(\sigma|\tau\leftrightarrow\rho|\tau)\), i.e., \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi_{i}\), as required. The other claim follows from the fact that the logics in question clearly satisfy the criteria of the general statement. This completes the proof. \(\dashv\)

### Completeness of the logic \(\mathrm{RFOLS}(\mathcal{F},\Lambda_{0})\)

Since the rules of \(\Lambda_{0}\) are only \(MP\) and \(GR\), the Deduction Theorem (DT) holds in \(\Lambda_{0}\), so by Fact 1.8 the two forms of the Completeness Theorem, CT1 and CT2, are equivalent, and we can refer simply to "completeness" instead of CT1- or CT2-completeness. Further, with the help of DT and standard proofs, every consistent set \(\Sigma\) of formulas of \(L_{s}\) can be extended to a consistent, complete and Henkin-complete set of formulas \(\Sigma^{+}\) in a language \(L_{s}^{+}\), where \(L^{+}\backslash L\) consists of new constants. (Recall that a set of formulas \(\Sigma\) is Henkin-complete if, whenever \(\Sigma\) contains an existential formula \(\exists v\varphi(v)\), it contains also \(\varphi(c)\), for some constant \(c\) of \(L\), witnessing \(\exists v\varphi(v)\).) Recall also that for a consistent and complete \(\Sigma\subseteq Sen(L_{s})\) the following hold: (a) for every \(\varphi\in Sen(L_{s})\), \(\varphi\in\Sigma\) iff \(\neg\varphi\notin\Sigma\), (b) \(\varphi\wedge\psi\in\Sigma\) iff \(\varphi\in\Sigma\) and \(\psi\in\Sigma\), (c) if \(\Sigma\vdash_{\Lambda_{0}}\varphi\), then \(\varphi\in\Sigma\).
Before coming to the logics introduced in the previous subsection, we shall give a general criterion of satisfiability for a consistent, complete and Henkin-complete set \(\Sigma\) of sentences of \(L_{s}\). Given such a set \(\Sigma\), if we set \(\Sigma_{1}=\Sigma\cap Sen(L)\) (the subset of \(\Sigma\) consisting of its classical sentences) then obviously \(\Sigma_{1}\) is a consistent, complete and Henkin-complete set of sentences of \(L\). By the Completeness Theorem of FOL, there exists an \(L\)-structure \(\mathcal{M}\) such that, for every \(\alpha\in Sen(L)\), \(\alpha\in\Sigma_{1}\) iff \(\mathcal{M}\models\alpha\). We have the following criterion of satisfiability.

**Lemma 5.6**: _Let \(X\subseteq\mathcal{F}\) and let \(\Lambda\) be a system with \({\sf Ax}(\Lambda)\subset Taut(X)\). Let also \(\Sigma\) be a \(\Lambda\)-consistent, complete and Henkin-complete set of restricted sentences of \(L_{s}\) and let \(\Sigma_{1}=\Sigma\cap Sen(L)\) and \(\mathcal{M}\) such that_

\[\alpha\in\Sigma_{1}\ \Leftrightarrow\ \mathcal{M}\models\alpha. \tag{15}\]

_Then given \(f\in X\), \(\langle\mathcal{M},f\rangle\models_{s}^{2}\Sigma\) if and only if for every \(\varphi\in BSen(L_{s})\) (the set of basic sentences of \(L_{s}\)),_

\[\varphi\in\Sigma\ \Rightarrow\overline{f}(\varphi)\in\Sigma. \tag{16}\]

_(Actually (16) is equivalent to_

\[\varphi\in\Sigma\ \Leftrightarrow\overline{f}(\varphi)\in\Sigma,\]

_but the other direction follows from (16), the consistency and completeness of \(\Sigma\) and the fact that \(\overline{f}(\neg\varphi)=\neg\overline{f}(\varphi)\).)_

_Proof._ Pick an \(f\in X\) and suppose \(\langle\mathcal{M},f\rangle\models_{s}^{2}\Sigma\). Then by the completeness of \(\Sigma\) and the definition of \(\models_{s}^{2}\), for every \(\varphi\in BSen(L_{s})\),

\[\varphi\in\Sigma\ \Leftrightarrow\ \langle\mathcal{M},f\rangle\models_{s}^{2}\varphi\Leftrightarrow\mathcal{M}\models\overline{f}(\varphi).\]

Now by (15), \({\cal M}\models\overline{f}(\varphi)\Rightarrow\overline{f}(\varphi)\in\Sigma_{1}\subset\Sigma\).
Therefore \(\varphi\in\Sigma\ \Rightarrow\ \overline{f}(\varphi)\in\Sigma\), i.e., (16) holds.

Conversely, suppose (16) holds for every \(\varphi\in BSen(L_{s})\). Then, by the completeness of \(\Sigma\), the fact that \(\overline{f}(\neg\varphi)=\neg\overline{f}(\varphi)\) and (15), we have for every basic sentence \(\varphi\): \(\varphi\in\Sigma\Leftrightarrow\overline{f}(\varphi)\in\Sigma\Leftrightarrow{\cal M}\models\overline{f}(\varphi)\Leftrightarrow\langle{\cal M},f\rangle\models^{2}_{s}\varphi\). For the remaining (quantified, non-basic) restricted sentences of \(\Sigma\), the relation \(\langle{\cal M},f\rangle\models^{2}_{s}\varphi\) follows by induction on the length of \(\varphi\), using clause (v) of Definition 5.3 together with the completeness and Henkin-completeness of \(\Sigma\). Therefore \(\langle{\cal M},f\rangle\models^{2}_{s}\Sigma\). \(\dashv\)

**Lemma 5.7**: _Every \(\Lambda_{0}\)-consistent, complete and Henkin-complete set \(\Sigma(\vec{v})\) of restricted formulas of \(L_{s}\) is \({\cal F}\)-satisfiable._

_Proof._ Let \(\Sigma(\vec{v})\) be such a set and let the free variables \(\vec{v}\) be replaced by new constants \(\vec{a}\), so that \(\Sigma(\vec{a})\) is a \(\Lambda_{0}\)-consistent, complete and Henkin-complete set of restricted sentences. Let \(\Sigma_{1}(\vec{a})=\Sigma(\vec{a})\cap Sen(L)\). By the Completeness Theorem of FOL there is an \(L\)-structure \({\cal M}\) such that for every \(\alpha\in Sen(L)\), \(\alpha\in\Sigma_{1}(\vec{a})\Leftrightarrow{\cal M}\models\alpha\). By Lemma 5.6, it suffices to find \(g:[Sen(L)]^{2}\to Sen(L)\) such that for every \(\varphi\in BSen(L_{s})\),
\[\varphi\in\Sigma(\vec{a})\Rightarrow\overline{g}(\varphi)\in\Sigma(\vec{a}). \tag{18}\]

To do that we examine, for any \(\varphi,\psi\in BSen(L_{s})\), the possible subsets of \(\Sigma(\vec{a})\) whose elements are \(\varphi|\psi\), \(\varphi\), \(\psi\) or their negations. These are the following:

(a1) \(\{\varphi|\psi,\varphi,\psi\}\subset\Sigma(\vec{a})\)

(a2) \(\{\varphi|\psi,\varphi,\neg\psi\}\subset\Sigma(\vec{a})\)

(a3) \(\{\varphi|\psi,\neg\varphi,\psi\}\subset\Sigma(\vec{a})\)

(a4) \(\{\neg(\varphi|\psi),\neg\varphi,\neg\psi\}\subset\Sigma(\vec{a})\)

(a5) \(\{\neg(\varphi|\psi),\varphi,\neg\psi\}\subset\Sigma(\vec{a})\)

(a6) \(\{\neg(\varphi|\psi),\neg\varphi,\psi\}\subset\Sigma(\vec{a})\)

The remaining cases,

(a7) \(\{\varphi|\psi,\neg\varphi,\neg\psi\}\subset\Sigma(\vec{a})\)

(a8) \(\{\neg(\varphi|\psi),\varphi,\psi\}\subset\Sigma(\vec{a})\)

are impossible because they contradict \(\Lambda_{0}\)-consistency and completeness of \(\Sigma(\vec{a})\). Indeed, in case (a7) we have \(\neg\varphi\wedge\neg\psi\in\Sigma(\vec{a})\). Also \(\varphi|\psi\in\Sigma(\vec{a})\), so by \(S_{2}\) and completeness, \(\varphi\vee\psi\in\Sigma(\vec{a})\), a contradiction. In case (a8) \(\varphi\wedge\psi\in\Sigma(\vec{a})\). Also \(\neg(\varphi|\psi)\in\Sigma(\vec{a})\), so by \(S_{1}\) and completeness \(\neg(\varphi\wedge\psi)\in\Sigma(\vec{a})\), a contradiction. Given a pair \(\{\alpha,\beta\}\) of sentences of \(L\), we say that "\(\{\alpha,\beta\}\) satisfies (ai)" if for \(\varphi=\alpha\) and \(\psi=\beta\), the corresponding case (ai) above, for \(1\leq i\leq 6\), holds. We define a choice function \(g\) for \(L\) as follows:

\[g(\alpha,\beta)=\left\{\begin{array}{ll}(i)\ \alpha,\ \mbox{if}\ \{\alpha,\beta\}\ \mbox{satisfies (a2) or (a6)}\\ (ii)\ \beta,\ \mbox{if}\ \{\alpha,\beta\}\ \mbox{satisfies (a3) or (a5)}\\ (iii)\ \mbox{any of the}\ \alpha,\,\beta,\ \mbox{if}\ \{\alpha,\beta\}\ \mbox{satisfies (a1) or (a4)}.\end{array}\right. \tag{19}\]

_Claim._ \(\overline{g}\) satisfies the implication (18).

_Proof of the Claim._ We prove (18) by induction on the length of \(\varphi\). For \(\varphi=\alpha\in Sen(L)\), \(\overline{g}(\alpha)=\alpha\), so (18) holds trivially. Similarly, the induction steps for \(\wedge\) and \(\neg\) follow immediately from the fact that \(\overline{g}\) commutes with these connectives and the completeness of \(\Sigma(\vec{a})\). So the only nontrivial step of the induction is that for \(\varphi|\psi\). It suffices to assume

\[\varphi\in\Sigma(\vec{a})\ \Rightarrow\overline{g}(\varphi)\in\Sigma(\vec{a}), \tag{20}\]

\[\psi\in\Sigma(\vec{a})\ \Rightarrow\overline{g}(\psi)\in\Sigma(\vec{a}), \tag{21}\]

and prove

\[\varphi|\psi\in\Sigma(\vec{a})\ \Rightarrow\overline{g}(\varphi|\psi)\in\Sigma(\vec{a}). \tag{22}\]

Assume \(\varphi|\psi\in\Sigma(\vec{a})\). Then the only possible combinations of \(\varphi\), \(\psi\) and their negations that can belong to \(\Sigma(\vec{a})\) are those of cases (a1), (a2) and (a3) above. To prove (22) it suffices to check that \(\overline{g}(\varphi|\psi)\in\Sigma(\vec{a})\) in each of these cases. Note that \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))=g(\alpha,\beta)\), where \(\overline{g}(\varphi)=\alpha\) and \(\overline{g}(\psi)=\beta\) are sentences of \(L\), so (19) applies.

Case (a1): Then \(\varphi\in\Sigma(\vec{a})\) and \(\psi\in\Sigma(\vec{a})\). By (20) and (21), \(\overline{g}(\varphi)\in\Sigma(\vec{a})\) and \(\overline{g}(\psi)\in\Sigma(\vec{a})\).
By definition (19), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))\) can be either \(\overline{g}(\varphi)\) or \(\overline{g}(\psi)\). So in either case \(\overline{g}(\varphi|\psi)\in\Sigma(\vec{a})\). Case (a2): Then \(\varphi\in\Sigma(\vec{a})\) and \(\neg\psi\in\Sigma(\vec{a})\). By (20) and (21), \(\overline{g}(\varphi)\in\Sigma(\vec{a})\), \(\overline{g}(\psi)\notin\Sigma(\vec{a})\). Also by (19), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))= \overline{g}(\varphi)\), thus \(\overline{g}(\varphi|\psi)\in\Sigma(\vec{a})\). Case (a3): Then \(\neg\varphi\in\Sigma(\vec{a})\), \(\psi\in\Sigma(\vec{a})\). By (20) and (21), \(\overline{g}(\varphi)\notin\Sigma(\vec{a})\), \(\overline{g}(\psi)\in\Sigma(\vec{a})\). By (19), \(\overline{g}(\varphi|\psi)=g(\overline{g}(\varphi),\overline{g}(\psi))= \overline{g}(\psi)\), thus \(\overline{g}(\varphi|\psi)\in\Sigma(\vec{a})\). This completes the proof of the Claim. It follows that condition (18) is true, so by Lemma 5.6, since \({\cal M}\models\Sigma_{1}(\vec{a})\) where \(\Sigma_{1}(\vec{a})=\Sigma(\vec{a})\cap Sen(L)\), \(\langle{\cal M},g\rangle\models^{2}_{s}\Sigma(\vec{a})\), therefore \(\Sigma(\vec{a})\) is \({\cal F}\)-satisfiable. \(\dashv\) **Theorem 5.8**: (Completeness of \({\rm RFOLS}({\cal F},\Lambda_{0})\)) _Let \(\Sigma(\vec{v})\) be a consistent set of restricted formulas of \(L_{s}\). Then \(\Sigma(\vec{v})\) is \({\cal F}\)-satisfiable, i.e., there are \({\cal M}\), \(f:[Sen(L)]^{2}\to Sen(L)\) and \(\vec{a}\in M\) such that \(\langle{\cal M},f\rangle\models^{2}_{s}\Sigma(\vec{a})\)._ _Proof._ Let \(\Sigma(\vec{v})\) be a \(\Lambda_{0}\)-consistent set of formulas. Extend \(\Sigma(\vec{v})\) to a \(\Lambda_{0}\)-consistent, complete and Henkin-complete set of formulas of \(L^{+}_{s}\supseteq L_{s}\) such that \(\Sigma(\vec{v})\subseteq\Sigma^{+}(\vec{v})\). By Lemma 5.7, \(\Sigma^{+}(\vec{v})\) is \({\cal F}\)-satisfiable. Therefore so is \(\Sigma(\vec{v})\). \(\dashv\) ### Conditional completeness of the remaining systems Coming to completeness, as in the case of PLS, the presence of \(SV\) makes the status of Deduction Theorem (DT) open. In turn the absence of DT has two consequences: (a) we don't know if CT1 and CT2 are equivalent (we only know that CT1 implies CT2) and (b) we don't know if a consistent set of formulas can be extended to a consistent and complete set (and a fortiori if it can be extended to a consistent, complete and Henkin-complete set). So, concerning the completeness of the systems based on \(\Lambda_{i}\), for \(i=1,2,3\), (a) we shall be confined to the weaker form CT2 only, and (b) we shall appeal to an _extendibility principle_ for the formal systems \(\Lambda_{i}\), already used in [10]. \[(cHext(\Lambda))\] _Every \(\Lambda\)-consistent set of formulas of \(L_{s}\) can be extended_ \[\mbox{to a $\Lambda$-consistent, complete and Henkin-complete set}.\] We can see \(cHext(\Lambda)\) as the conjunction of \(cext(\Lambda)\) and \(Hext(\Lambda)\), where \(cext(\Lambda)\) says that every \(\Lambda\)-consistent set can be extended to a complete \(\Lambda\)-consistent set, and \(Hext(\Lambda)\) says that every \(\Lambda\)-consistent set can be extended to a Henkin-complete \(\Lambda\)-consistent set. The following Lemma will be essential for the completeness of the aforementioned logics, proved in the next section. 
**Lemma 5.9**: _If \(\Sigma\subset Sen(L_{s})\) is closed with respect to \(\vdash_{\Lambda_{i}}\), for some \(i=1,2,3\), and \(\alpha,\alpha^{\prime}\) are formulas of \(L\) such that \(\alpha\sim\alpha^{\prime}\), then for every \(\beta\), \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma\)._ _Proof._ Let \(\alpha\sim\alpha^{\prime}\). Then \(\vdash_{\rm FOL}\alpha\leftrightarrow\alpha^{\prime}\), hence also \(\vdash_{\Lambda_{0}}\alpha\leftrightarrow\alpha^{\prime}\). By \(SV\) it follows that for every \(\beta\), \(\vdash_{\Lambda_{i}}\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta\). Therefore \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma\) since \(\Sigma\) is \(\vdash_{\Lambda_{i}}\)-closed. \(\dashv\) The following theorem is the analogue of Theorem 3.16 of [9], as well as part of Theorem 4.9 of [10]. **Theorem 5.10**: (Conditional CT2-completeness of \({\rm RFOLS}(Reg,\Lambda_{1})\)_The logic \({\rm RFOLS}(Reg,\Lambda_{1}\)) is \({\rm CT2}\)-complete if and only if \(cHext(\Lambda_{1})\) is true._ _Proof._ We prove first the easy direction. Assume \(cHext(\Lambda_{1})\) is false. Then either \(cext(\Lambda_{1})\) fails or \(Hext(\Lambda_{1})\) fails. Assume first that \(cext(\Lambda_{1})\) fails. It follows that there is a maximal \(\Lambda_{1}\)-consistent set of formulas \(\Sigma(\vec{v})\) not extendible to a \(\Lambda_{1}\)-consistent and complete set. It means that there is a formula \(\varphi(\vec{v})\) such that both \(\Sigma(\vec{v})\cup\{\varphi(\vec{v})\}\) and \(\Sigma(\vec{v})\cup\{\neg\varphi(\vec{v})\}\) are \(\Lambda_{1}\)-inconsistent and hence unsatisfiable. Then clearly \(\Sigma(\vec{v})\) cannot be satisfiable in any structure \({\cal M}\), for then \({\cal M}\) would also satisfy \(\varphi(\vec{v})\) or \(\neg\varphi(\vec{v})\). Thus CT2-completeness fails. Next assume that \(Hext(\Lambda_{1})\) fails. It follows that there is a maximal \(\Lambda_{1}\)-consistent set of formulas \(\Sigma(\vec{v})\) not extendible to a \(\Lambda_{1}\)-consistent and Henkin-complete set. It means that \(\Sigma(\vec{v})\) contains a formula \(\exists u\varphi(u,\vec{v})\) such that for any new constant \(c\), \(\Sigma(\vec{v})\cup\{\varphi(c,\overline{v})\}\) is \(\Lambda_{1}\)-inconsistent. But then \(\Sigma(\vec{v})\) cannot be satisfiable. For if \(\mathcal{M}\) satisfies \(\Sigma(\vec{v})\), then in particular \(\exists u\varphi(u,\vec{v})\) is satisfied in \(\mathcal{M}\), so also \(\varphi(c,\vec{v})\) is satisfied in \(\mathcal{M}\) for some \(c\in M\). Therefore \(\Sigma(\vec{v})\cup\{\varphi(c,\vec{v})\}\) is satisfiable, contrary to the fact that \(\Sigma(\vec{v})\cup\{\varphi(c,\vec{v})\}\) is inconsistent. Thus indeed the \(\Lambda_{1}\)-consistent set \(\Sigma(\vec{v})\) is not satisfiable, so CT2-completeness fails. We come to the main direction of the equivalence assuming \(cHext(\Lambda_{1})\) is true. Then given a \(\Lambda_{1}\)-consistent set \(\Sigma(\vec{v})\subset RFml(L_{s})\) of restricted formulas, we may assume without loss of generality that it is also complete and Henkin-complete. We have to find \(\mathcal{M}\) and \(g\in Reg\) such that \(\langle\mathcal{M},g\rangle\models_{s}^{2}\Sigma(\vec{a})\) for some \(\vec{a}\in M\). It turns out that the main argument of Lemma 5.7, concerning the definition of the choice function \(g\), works also here, with the necessary adjustments. 
Namely it suffices to find a choice function \(g\in Reg\) such that \(\langle\mathcal{M},g\rangle\models_{s}^{2}\Sigma(\vec{a})\), where \(\mathcal{M}\) and \(\vec{a}\in M\) are the model and parameters such that for every \(\alpha\in L(\vec{v})\), \[\alpha\in\Sigma_{1}(\vec{a})\Leftrightarrow\mathcal{M}\models\alpha,\] where \(\Sigma_{1}(\vec{a})=\Sigma(\vec{a})\cap Sen(L)\). The definition of \(g\) follows exactly the pattern of definition of \(g\) in the proof of Lemma 5.7, except that we need now to take care so that \(g\) be regular. Recall that \(g\) is regular if for all \(\alpha\), \(\alpha^{\prime}\), \(\beta\), \[\alpha^{\prime}\sim\alpha\ \Rightarrow\ g(\alpha^{\prime},\beta)\sim g( \alpha,\beta).\] In (19) \(g\) is defined by three clauses: (i) (a2) or (a6), (ii) (a3) or (a5), (iii) (a1) or (a4). _Claim._ The regularity constraint is satisfied whenever \(g\) is defined by clauses (i) and (ii) above. _Proof of Claim._ Pick \(\alpha\), \(\alpha^{\prime}\), \(\beta\) such that \(\alpha\sim\alpha^{\prime}\). We prove the Claim for the case that \(g(\alpha,\beta)\) is defined according to clause (i)-(a2). All other cases are verified similarly. That \(g(\alpha,\beta)\) is defined by case (i)-(a2) of (19) means that \(\alpha|\beta\in\Sigma(\vec{a})\), \(\alpha\in\Sigma(\vec{a})\), \(\neg\beta\in\Sigma(\vec{a})\) and \(g(\alpha,\beta)=\alpha\). It suffices to see that necessarily \(g(\alpha^{\prime},\beta)=\alpha^{\prime}\sim g(\alpha,\beta)\). Since \(\Sigma(\vec{a})\) is complete, it is closed with respect to \(\vdash_{\Lambda_{1}}\), so by Lemma 5.9, \(\alpha\sim\alpha^{\prime}\) implies that \((\alpha|\beta\leftrightarrow\alpha^{\prime}|\beta)\in\Sigma(\vec{a})\). Also by assumption, \(\alpha|\beta\in\Sigma(\vec{a})\), hence \(\alpha^{\prime}|\beta\in\Sigma(\vec{a})\). Moreover \(\alpha^{\prime}\in\Sigma(\vec{a})\), since \(\alpha\in\Sigma(\vec{a})\), and \(\neg\beta\in\Sigma(\vec{a})\). Therefore case (i)-(a2) occurs too for \(\alpha^{\prime}|\beta\), \(\alpha^{\prime}\) and \(\beta\). So, by (19), \(g(\alpha^{\prime},\beta)=\alpha^{\prime}\), therefore \(g(\alpha^{\prime},\beta)\sim g(\alpha,\beta)\). This proves the Claim. It follows from the Claim that if we define \(g\) according to (19), regularity is guaranteed unless \(g(\alpha,\beta)\) is given by clause (iii), that is, unless (a1) or (a4) is the case. In such a case either both \(\alpha\), \(\beta\) belong to \(\Sigma\), or both \(\neg\alpha\), \(\neg\beta\) belong to \(\Sigma\), and (19) allows \(g(\alpha,\beta)\) to be _any_ of the elements \(\alpha\), \(\beta\). So at this point we must intervene by a new condition that will guarantee regularity. This is done as follows. Pick from each \(\sim\)-equivalence class \([\alpha]\), a representative \(\xi_{\alpha}\in[\alpha]\). Recall that, by completeness, the set \(\Sigma_{1}=\Sigma\cap Sen(L)\) as well as its complement \(\Sigma_{2}=Sen(L)-\Sigma_{1}\) are saturated with respect to \(\sim\), that is, for every \(\alpha\), either \([\alpha]\subset\Sigma_{1}\) or \([\alpha]\subset\Sigma_{2}\). Let \(D_{1}=\{\xi_{\alpha}:\alpha\in\Sigma_{1}\}\), \(D_{2}=\{\xi_{\alpha}:\alpha\in\Sigma_{2}\}\). Let \([D_{i}]^{2}\) be the set of pairs of elements of \(D_{i}\), for \(i=1,2\), and pick an arbitrary choice function \(g_{0}:[D_{1}]^{2}\cup[D_{2}]^{2}\to D_{1}\cup D_{2}\). 
Then it suffices to define \(g\) by slightly revising definition (19) as follows: \[g(\alpha,\beta)=\left\{\begin{array}{l}(i)\ \alpha,\ \mbox{if}\ \{\alpha, \beta\},\ \mbox{satisfies (a2) or (a6)}\\ (ii)\ \beta,\ \mbox{if}\ \{\alpha,\beta\}\ \mbox{satisfies (a3) or (a5)}\\ (iii)\ \sim g_{0}(\xi_{\alpha},\xi_{\beta}),\ \mbox{if}\ \{\alpha,\beta\} \ \mbox{satisfies (a1) or (a4)}.\end{array}\right. \tag{23}\] (The third clause is just a shorthand for: \(g(\alpha,\beta)=\alpha\) if \(g_{0}(\xi_{\alpha},\xi_{\beta})=\xi_{\alpha}\), and \(g(\alpha,\beta)=\beta\) if \(g_{0}(\xi_{\alpha},\xi_{\beta})=\xi_{\beta}\).) In view of the Claim and the specific definition of \(g\) by (23), it follows immediately that if \(\alpha\sim\alpha^{\prime}\) then for every \(\beta\), \(g(\alpha,\beta)\sim g(\alpha^{\prime},\beta)\). So \(g\) is regular. Further, exactly as in Lemma 5.7 it follows that \(\langle M,g\rangle\models_{s}^{2}\Sigma(\vec{a})\). This completes the proof. \(\dashv\) The next two theorems are cited without proofs. They are analogues of Theorems 3.18 and 3.19 of [9], and their proofs follow the patterns of the latter with adaptations similar to the ones we used in the proofs of Theorems 5.8 and 5.10 above. **Theorem 5.11**: (Conditional CT2-completeness for \(\mbox{RFOLS}(Reg^{*},\Lambda_{2}))\) _The logic \(\mbox{RFOLS}(Reg^{*},\Lambda_{2})\) is \(\mbox{CT2}\)-complete if and only if \(cHext(\Lambda_{2})\) is true._ **Theorem 5.12**: (Conditional CT2-completeness for \(\mbox{RFOLS}(Dec,\Lambda_{3}))\) _The logic \(\mbox{RFOLS}(Dec,\Lambda_{3})\) is \(\mbox{CT2}\)-complete if and only if \(cHext(\Lambda_{3})\) is true._ We shall close this section and the paper by answering a question raised in [9] (section 5 concerning future work), namely whether the extension of PLS to FOLS might help us to pass from superposition of sentences to _superposition of objects._ Such a notion may sound a little bit strange, but is closely related to "disjunctive objects" (more precisely "disjunctive multisets"), which have already been used in [8] to provide semantics for the Horn fragment of the multiplicative intuitionistic linear logic (ILL) augmented with additive disjunction. The following simple every-day example motivates sufficiently the introduction of the concept. Restaurant menus refer to entities of the form "steak or fish" (upon choice), for main dish, and "dessert or season fruit" (upon choice and season), for exit.2 One can think of the term "steak or fish" as representing a new kind of _theoretical_ entity, an object generated by the superposition of steak and fish. Of course a specific customer who dines in the restaurant does not eat "steak or fish". They eat either steak or fish, which are the _actualizations_, i.e., the possible collapses, of the superposed object. It is true that _existence_ of such objects seems dubious. They look unstable and temporary, since they always collapse to their actualizations, and also elusive since they can be handled not in themselves, but only through their actualizations. However, more or less, the same is true of all theoretical entities: they are supposed to stand out there elusive in themselves for our minds, like platonic ideas, accessible only through their concrete physical realizations. Notice in particular that in the case of superposed menu items, the phrase "upon choice" that accompanies them explicitly indicates that our access to their physical realizations is obtained only by the help of a choice function. 
Footnote 2: In popular presentations of linear logic the first kind of disjunction is construed as “multiplicative” or deterministic, while the latter is construed as “additive” or non-deterministic.

One way to obtain (a formal representation of) superposition of objects would be through the logic of superposition (namely FOLS), if in the latter one could prove that for any two objects (constants) \(a\) and \(b\), there exists a _unique_ object \(c\) satisfying the formula (in one free variable) \((v=a)|(v=b)\), i.e., if the sentence \((\forall v,u)(\exists!w)((w=v)|(w=u))\) were a tautology. If that were the case, we could write \(c=a\!\uparrow\!b\) for the unique object \(c\) satisfying the formula \((v=a)|(v=b)\), and say that \(c\) is the _superposition of \(a\) and \(b\)_. If the semantics FCS had not broken down, it is easy to see that it would satisfy the above requirement, i.e., for every \(\mathcal{M}\), for every choice function \(f\) for pairs of formulas and any \(a,b\in M\) we would have \(\langle\mathcal{M},f\rangle\models_{s}(\exists!v)((v=a)|(v=b))\). This is because \(\langle\mathcal{M},f\rangle\models_{s}(\exists!v)((v=a)|(v=b))\) holds iff \(\mathcal{M}\models(\exists!v)f(v=a,v=b)\) and the latter is obviously true no matter whether \(f(v=a,v=b)=(v=a)\) or \(f(v=a,v=b)=(v=b)\). However, working with the semantics SCS for FOLS described in this section we have the following situation.

**Proposition 5.13**: _Let \(L\) be a first-order language, \(\mathcal{M}=\langle M,\ldots\rangle\) be an \(L\)-structure and \(a\neq b\in M\). Then:_ _(i) There are choice functions \(f\in{\cal F}\) such that \(\langle{\cal M},f\rangle\models^{2}_{s}(\exists!v)((v=a)|(v=b))\)._ _(ii) However, there is no \(f\in Reg\) such that \(\langle{\cal M},f\rangle\models^{2}_{s}(\exists!v)((v=a)|(v=b))\)._

_Proof._ (i) Given \(a\neq b\), clearly the only values for \(v\) that might satisfy \((v=a)|(v=b)\) are \(a\) or \(b\). Pick \(f\) such that \(f(a=a,a=b)=(a=a)\) and \(f(b=b,a=b)=(a=b)\). Then clearly \(\langle{\cal M},f\rangle\models^{2}_{s}(a=a)|(a=b)\), while \(\langle{\cal M},f\rangle\not\models^{2}_{s}(b=a)|(b=b)\). Thus the only element of \(M\) that satisfies \((v=a)|(v=b)\) in \(\langle{\cal M},f\rangle\) is \(a\). Similarly, if we consider \(f^{\prime}\) such that \(f^{\prime}(a=a,a=b)=(a=b)\) and \(f^{\prime}(b=b,a=b)=(b=b)\), the only element of \(M\) that satisfies \((v=a)|(v=b)\) in \(\langle{\cal M},f^{\prime}\rangle\) is \(b\). (ii) In contrast to (i), if \(f\in Reg\) then, since \((a=a)\sim(b=b)\), we should have \(f(a=a,a=b)\sim f(b=b,a=b)\). Therefore either \[f(a=a,a=b)=(a=a),\mbox{ and }f(b=b,a=b)=(b=b),\] or \[f(a=a,a=b)=f(b=b,a=b)=(a=b).\] In the first case both \(a,b\) satisfy \((v=a)|(v=b)\) in \(\langle{\cal M},f\rangle\), while in the second case none of the \(a,b\), and hence no element of \(M\), satisfies \((v=a)|(v=b)\). In either case \(\langle{\cal M},f\rangle\not\models^{2}_{s}(\exists!v)((v=a)|(v=b))\). \(\dashv\)

The preceding result shows that the attempt to represent superposition of objects through FOLS and its semantics SCS fails. We can only show that, for suitably chosen \(f\in{\cal F}\), there is a unique object \(c\) satisfying the property \((v=a)|(v=b)\) in \(\langle{\cal M},f\rangle\); but which object this is depends on \(f\), and for regular choice functions no unique such object exists at all. Nevertheless, superposition of objects can be introduced in an alternative way, namely through _mathematical_ rather than logical means.
Specifically, given a first-order theory \(T\) in a language \(L\), \(T\) can be extended to a theory \(T^{|}\) in the language \(L\cup\{|\}\), where \(|\) is a new binary operation (on the objects of \(T\)). \(T^{|}\) (with underlying logic the usual FOL) consists of the axioms of \(T\) plus some plausible axioms for \(|\), analogous to the axioms \(S_{i}\) of section 1.1, expressing idempotence, symmetry and associativity of \(|\), and possibly some further properties for the objects \(a|b\). Let us note, by the way, that the notation \(x|y\) was first used in [8, §3]. The operation \(x|y\) was defined there (for multisets and finite sets of multisets) so that idempotence, symmetry and associativity hold, and also so that the object \(x|y\) be _distinct_ from both \(x\) and \(y\), i.e., \(x|y\notin\{x,y\}\), unless \(x=y\). In contrast, if \(x|y\) is going to represent an entity that always collapses to either \(x\) or \(y\), then necessarily \(x|y\in\{x,y\}\), i.e., \(|\) must behave as a choice function. Thus one can have at least two different implementations of the operation \(x|y\) on objects: a "projective" one, such that \(x|y\in\{x,y\}\), and a "non-projective" one, such that \(x|y\notin\{x,y\}\).

**Acknowledgements** Many thanks to two anonymous referees for several corrections and suggestions that considerably improved the presentation of this paper.
2306.01842
133In: A Rosetta Stone for decays of r-process nuclei
The $\beta$ decays from both the ground state and a long-lived isomer of $^{133}$In were studied at the ISOLDE Decay Station (IDS). With a hybrid detection system sensitive to $\beta$, $\gamma$, and neutron spectroscopy, the comparative partial half-lives (logft) have been measured for all their dominant $\beta$-decay channels for the first time, including a low-energy Gamow-Teller transition and several First-Forbidden (FF) transitions. Uniquely for such a heavy neutron-rich nucleus, their $\beta$ decays selectively populate only a few isolated neutron unbound states in $^{133}$Sn. Precise energy and branching-ratio measurements of those resonances allow us to benchmark $\beta$-decay theories at an unprecedented level in this region of the nuclear chart. The results show good agreement with the newly developed large-scale shell model (LSSM) calculations. The experimental findings establish an archetype for the $\beta$ decay of neutron-rich nuclei southeast of $^{132}$Sn and will serve as a guide for future theoretical development aiming to describe accurately the key $\beta$ decays in the rapid-neutron capture (r-) process.
Z. Y. Xu, M. Madurga, R. Grzywacz, T. T. King, A. Algora, A. N. Andreyev, J. Benito, T. Berry, M. J. G. Borge, C. Costache, H. De Witte, A. Fijalkowska, L. M. Fraile, H. O. U. Fynbo, A. Gottardo, C. Halverson, L. J. Harkness-Brennan, J. Heideman, M. Huyse, A. Illana, Ł. Janiak, D. S. Judson, A. Korgul, T. Kurtukian-Nieto, I. Lazarus, R. Lică, R. Lozeva, N. Marginean, R. Marginean, C. Mazzocchi, C. Mihai, R. E. Mihai, A. I. Morales, R. D. Page, J. Pakarinen, M. Piersa-Siłkowska, Zs. Podolyák, P. Sarriguren, M. Singh, Ch. Sotty, M. Stepaniuk, O. Tengblad, A. Turturica, P. Van Duppen, V. Vedia, S. Viñals, N. Warr, R. Yokoyama, C. X. Yuan
2023-06-02T18:01:56Z
http://arxiv.org/abs/2306.01842v1
# \({}^{133}\)In: A Rosetta Stone for decays of \(r\)-process nuclei ###### Abstract The \(\beta\) decays from both the ground state and a long-lived isomer of \({}^{133}\)In were studied at the ISOLDE Decay Station (IDS). With a hybrid detection system sensitive to \(\beta\), \(\gamma\), and neutron spectroscopy, the comparative partial half-lives (\(\log\)\(ft\)) have been measured for all their dominant \(\beta\)-decay channels for the first time, including a low-energy Gamow-Teller transition and several First-Forbidden (FF) transitions. Uniquely for such a heavy neutron-rich nucleus, their \(\beta\) decays selectively populate only a few isolated neutron unbound states in \({}^{133}\)Sn. Precise energy and branching-ratio measurements of those resonances allow us to benchmark \(\beta\)-decay theories at an unprecedented level in this region of the nuclear chart. The results show good agreement with the newly developed large-scale shell model (LSSM) calculations. The experimental findings establish an archetype for the \(\beta\) decay of neutron-rich nuclei southeast of \({}^{132}\)Sn and will serve as a guide for future theoretical development aiming to describe accurately the key \(\beta\) decays in the rapid-neutron capture (\(r\)-) process. _Introduction--_ The rapid-neutron capture (\(r\)-) process is responsible for the creation of half of the heavy elements in the universe [1; 2]. Many stable nuclei present today are decay products of the very short-lived nuclei produced in extreme environments such as neutron star mergers or supernovae [3; 4]. Most of these progenitor nuclei have large neutron-to-proton ratios, and state-of-the-art nuclear research facilities cannot produce samples in sufficient quantities for experimental work. Yet, measured elemental abundance in stars cannot be explained without knowing their decay properties including half-lives \(T_{1/2}\) and \(\beta\)-delayed neutron-emission probabilities \(P_{n}\)[5; 6; 7]. Modern nuclear theories were developed to predict these quantities for radioactive isotopes far from their stable counterparts [8; 9; 10; 11; 12]. To verify those models, experimental efforts were carried out continuously pursuing those gross decay properties of isotopes close to the \(r\)-process path [13; 14; 15; 16; 17; 18; 19]. Due to the complicated nature of those decays far off stability, the agreement with model predictions can be ambiguous, i.e., theories may arrive at a similar gross property for a single isotope using different footing. In addition, it is generally hard to find conclusive answers on how to improve the theories when a discrepancy emerges. Thus, it is desirable to measure the observables capable of benchmarking \(\beta\)-decay calculations on a more fundamental level. In this Letter, we report a \(\beta\)-decay strength measurement of \({}^{133}\)In (\(Z=49\), \(N=84\)), a nucleus close to many \(r\)-process nuclei southeast of \({}^{132}\)Sn (\(Z=50\), \(N=82\)), see Fig. 1. We examined decays from both the ground state (\({}^{133g}\)In) and the isomer (\({}^{133m}\)In) via \(\beta\)-delayed \(\gamma\) and neutron spectroscopy, demonstrating as a textbook example the interplay between allowed Gamow-Teller (GT) and First-Forbidden (FF) transitions in extremely neutron-rich nuclei near the \(r\)-process path. Thus, our measurement must be accounted for by the models used to predict the decay properties of the \(r\)-process nuclei. 
In the nuclear shell model [21; 22], the doubly magic \({}^{132}\)Sn arranges protons (\(\pi\)) and neutrons (\(\nu\)) respectively into the closed \(3\hbar\omega\) and \(4\hbar\omega\) major shells, see Fig. 1. To the southeast of \({}^{132}\)Sn, where \({}^{133}\)In resides, the proton Fermi surface is near the \(\pi g_{9/2}\) orbital (\(3\hbar\omega\)) whereas neutrons start filling the \(5\hbar\omega\) shell above \(N=82\), generating large \(2\hbar\omega\) asymmetry between the proton and neutron Fermi surfaces. Since \(\pi g_{9/2}\) is partially occupied, the GT transformation \(\nu g_{7/2}\rightarrow\pi g_{9/2}\) (the red arrow in Fig. 1) is expected to be strong. Other competing GT channels have to induce proton excitation across the \(Z=50\) shell (e.g., \(\nu g_{7/2}\rightarrow\pi g_{7/2}\)) and thus are much less favorable energetically. Consequently, the \(\nu g_{7/2}\rightarrow\pi g_{9/2}\) transformation is the single dominant decay channel in the majority of nuclei in this region. Besides, a few FF transitions contribute significantly to the \(\beta\)-decay rates by involving neutron and proton orbitals with opposite parities near the Fermi surface (the gray arrows in Fig. 1, e.g., \(\nu h_{11/2}\rightarrow\pi g_{9/2}\)). The proximity of \({}^{133}\)In to the \({}^{132}\)Sn core reduces the number of active nucleons and the degrees of freedom in the decay process, making it an ideal ground to validate nuclear theories. On the other hand, the extreme neutron excess (\(N-Z\)=35) and large \(Q_{\beta}\) energy window (>13 MeV) give \({}^{133}\)In more complete access than nearby nuclei, such as \({}^{131}\)In (\(Z=49\), \(N=82\)) and \({}^{133}\)Sn (\(Z=50\), \(N=83\)), to the dominant \(\beta\)-decay channels that are responsible for the gross decay properties in the region. Overall, the unique combination of a large variety of decay modes and simple representation makes \({}^{133}\)In a perfect study-case nucleus, or a Rosetta Stone, to understand how the \(r\)-process nuclei decay near the neutron \(N=82\) shell closure. We studied the \(\beta\) decay of \({}^{133}\)In using the neutron time-of-flight (TOF) technique in combination with high-resolution \(\gamma\)-ray spectroscopic system. The \(\beta\) decay mostly populated neutron-unbound states in \({}^{133}\)Sn, which promptly decayed to \({}^{132}\)Sn via neutron emission [23; 24; 25]. If the neutron emission feeds an excited state in \({}^{132}\)Sn, the nucleus will also undergo \(\gamma\) decay(s) to the ground state. Although several groups have conducted spectroscopic studies of \({}^{133}\)Sn in the past [26; 27; 23; 24; 28], the knowledge of states above the neutron separation energy was scarce due to either the weak production rate or inefficient neutron detection. By taking advantage of neutron and \(\gamma\) spectroscopy measured in coincidence with \(\beta\) decay, we revealed for the first time all the dominant \(\beta\)-decay transitions in \({}^{133}\)In above the neutron separation energy. Owing to selective laser ionization of the \({}^{133}\)In samples [24], the decays from the \(9/2^{+}\) ground state (\({}^{133g}\)In) and the \(1/2^{-}\) isomer (\({}^{133m}\)In) were separated unambiguously. The simple structure of \({}^{133}\)Sn, the \(\beta\)-decay selection rules, and the laser ionization all together allowed us to achieve a superior precision measurement. In addition, we used the new observation to benchmark large-scale shell-model (LSSM) calculations. 
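As a quick cross-check of the GT/FF assignments of the decay channels sketched in Fig. 1, one can apply the standard \(\beta\)-decay selection rules (allowed GT: no parity change, \(\Delta J=0,\pm 1\); first-forbidden: parity change); the rules themselves are assumed here rather than taken from the text:

\[\nu g_{7/2}\rightarrow\pi g_{9/2}:\ \ell=4\rightarrow\ell=4,\ \mbox{no parity change},\ \Delta j=1\ \Rightarrow\ \mbox{allowed GT},\]
\[\nu h_{11/2}\rightarrow\pi g_{9/2}:\ \ell=5\rightarrow\ell=4,\ \mbox{parity change}\ \Rightarrow\ \mbox{first forbidden},\]
\[\nu d_{3/2},\ \nu s_{1/2}\rightarrow\pi p_{1/2}:\ \mbox{even}\rightarrow\mbox{odd parity}\ \Rightarrow\ \mbox{first forbidden},\]

in line with the red (GT) and gray (FF) arrows of Fig. 1.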
The new measurement provides valuable insights into understanding the \(\beta\) decays of \(r\)-process nuclei.

_Experiment and result--_ The Isotope Separator On-Line (ISOLDE) facility at CERN [29] and Resonance Ionization Laser Ion Source [30] produced the isotopes of interest. Through the General Purpose Separator (GPS) [29], the beams were brought to the ISOLDE Decay Station for \(\beta\)-decay measurements.

Figure 1: (Top) Chart of nuclei centered on \({}^{133}\)In (red star). The label \(\hbar\omega\) refers to the harmonic-oscillator shells around \({}^{132}\)Sn. The \(r\)-process path is taken from Ref. [20]. (Bottom) Proton and neutron single-particle (_s.p._) diagram with dominant \(\beta\)-decay channels in \({}^{133g}\)In and \({}^{133m}\)In. Red and gray arrows represent GT and FF transitions respectively.

The neutron TOF spectra measured in coincidence with the \(\beta\) decay of \({}^{133}\)In are presented in Fig. 2, with Fig. 2(a) corresponding to the pure ground-state decay and Fig. 2(b) to an admixture of ground-state (40%) and isomeric decays (60%). Those neutrons were emitted from the neutron-unbound states in \({}^{133}\)Sn after being populated in the \(\beta\) decay. Neutron emissions may leave the residual \({}^{132}\)Sn nucleus in an excited state. However, we did not observe any of the strong neutron peaks in Fig. 2 coinciding with the \({}^{132}\)Sn \(\gamma\) decay, see Fig. 2(c), implying strong direct ground-state feedings in the neutron emissions. The spectra are fitted by a neutron response function (magenta) consisting of 18 and 13 peaks in \({}^{133g}\)In (blue) and \({}^{133m}\)In (red) decays, respectively. We extracted the excitation energies (\(E_{ex}\)) and decay probabilities (\(I_{\beta}\)) of individual states from the fitting result. The full details of the experimental setup, data analysis, and the list of neutron unbound states identified in \({}^{133}\)Sn are presented in Ref. [31]. The main achievement of this work is the observation and quantification of the \(\beta\)-decay channels in \({}^{133g,m}\)In. The strongest transitions are mediated by transforming a neutron from inside the \(N=82\) core to a proton on either \(\pi g_{9/2}\) (ground-state decay) or \(\pi p_{1/2}\) (isomeric decay), leaving the proton \(Z=50\) shell closed and two neutrons outside \(N=82\) coupled to a spin-zero pair, see Fig. 1. We refer to the \({}^{133}\)Sn states so populated as \(\nu\)2p-1h (neutron two particle one hole) states hereafter. Using the analysis methodology detailed in Ref. [31], we identified four such states, including the \(11/2^{-}\) (\(\nu h^{-1}_{11/2}\)) state at 3.564(1) MeV [24], the \(3/2^{+}(\nu d^{-1}_{3/2})\) state at 3.62(2) MeV, the \(1/2^{+}(\nu s^{-1}_{1/2})\) state at 3.79(2) MeV, and the \(7/2^{+}(\nu g^{-1}_{7/2})\) state at 5.93(9) MeV (the superscript of an orbital indicates occupation number, being positive for particles and negative for holes). Our experiment observed most of these states for the first time, the sole exception being the \(11/2^{-}\) state [23; 24; 28]. We extracted comparative partial half-lives (log\(ft\)) for those transitions. The log\(ft\) values quantify the strength of a given \(\beta\)-decay transition and correlate to the \(\beta\)-decay strength as \(S_{\beta}=1/ft\) [32], where \(f\) is the Fermi function [33] for the electron distribution feeding a given state and \(t=T_{1/2}/I_{\beta}\) is the partial half-life of a transition with \(I_{\beta}\) probability.
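For orientation, the definition just given can be turned into numbers directly (a simple illustrative conversion, not a result quoted from the text): a transition with \(\log ft=4.7\) corresponds to

\[S_{\beta}=\frac{1}{ft}=10^{-4.7}\ {\rm s}^{-1}\approx 2.0\times 10^{-5}\ {\rm s}^{-1}=20\times 10^{-6}\ {\rm s}^{-1},\]

which sets the scale of the strengths discussed below.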
From the \(9/2^{+}\) ground state, the log\(ft\) to the \(11/2^{-}\) and \(7/2^{+}\) states are 5.7(1) and 4.7(1), respectively. From the \(1/2^{-}\) isomer, the log\(ft\) values to the \(3/2^{+}\) and \(1/2^{+}\) states are 5.4(1) and 5.8(1), respectively. Based on the constraints imposed by \(\beta\)-decay selection rules, the \(7/2^{+}\) state was populated via a GT transition, whereas the other three states were fed by FF transitions. These assignments are in line with the systematics gleaned from the log\(ft\) values mentioned above [34].

_Comparison with LSSM--_ We carried out LSSM calculations to interpret our results quantitatively. A model space containing multiple complete proton and neutron major shells around \({}^{132}\)Sn exceeds current computational capability. To focus on the strong decay channels in \({}^{133}\)In, e.g. \(\nu g_{7/2}\rightarrow\pi g_{9/2}\), we built the model space on a \({}^{88}\)Sr core (\(Z=38\), \(N=50\)), including the \(0g_{7/2}\), \(1d_{5/2}\), \(1d_{3/2}\), \(2s_{1/2}\), \(0h_{11/2}\), \(1f_{7/2}\) orbitals for valence neutrons and the \(1p_{1/2}\), \(0g_{9/2}\), \(0g_{7/2}\), \(1d_{5/2}\), \(1d_{3/2}\), \(2s_{1/2}\) orbitals for valence protons. This choice retains important orbital partners relevant for \(\beta\) decay, see Fig. 1. We truncated the number of allowed p-h excitations across \({}^{132}\)Sn to 2p-2h as the first-order approximation. We used three sets of two-body interactions constructed from the effective nucleon-nucleon (\(NN\)) potentials of (i) N\({}^{3}\)LO [35], (ii) Argonne V18 [36], and (iii) VMU plus M3Y [37; 38]. N\({}^{3}\)LO and V18 were derived using the many-body perturbation theory [39], with the procedure outlined in Ref. [40]. VMU was obtained by computing the matrix elements directly within our model space. We determined the single-particle (\(s.p.\)) energies from the spectroscopic data in the vicinity of \({}^{132}\)Sn. The GT and FF operators were defined in Ref. [41], and the effective scaling factors that best reproduce our data are listed as follows. \[q(\text{GT})=0.6,\ q(M_{0}^{T})=1.5,\ q(M_{0}^{S})=0.6,\] \[q(x)=0.5,\ q(u)=0.4,\ q(z)=0.8.\] We first examined the individual transitions populating the four \(\nu\)2p-1h states, see Figs. 3 (a)-(d). All three nuclear potentials reproduced the experimental FF strengths feeding the \(11/2^{-}\), \(3/2^{+}\), and \(1/2^{+}\) states at lower excitation energy. Additionally, they gave consistent microscopic compositions of those states: the greatest fractions in the \(11/2^{-}\) and \(3/2^{+}\) wavefunctions were \(\nu h_{11/2}^{-1}\times f_{7/2}^{2}\) and \(\nu d_{3/2}^{-1}\times f_{7/2}^{2}\), respectively (\(>85\%\)). The \(1/2^{+}\) state was somewhat mixed, with the leading order term \(\nu s_{1/2}^{-1}\times f_{7/2}^{2}\) being less than 55%.

Figure 2: Neutron TOF spectra taken in coincidence with the \({}^{133}\)In \(\beta\) decays, with (a) corresponding to the pure ground-state decay and (b) to an admixture of ground-state (40%) and isomeric decays (60%). The inset (c) shows the ground-state spectrum in coincidence with the 4041-keV \(\gamma\) decay in \({}^{132}\)Sn. On top of the background (dashed line), the spectra are fitted by the neutron response functions (magenta) consisting of 18 (blue) and 13 (red) peaks in the ground-state and isomeric decays, respectively.
Regarding the \(7/2^{+}\) state, the calculations diverged in the GT strength, giving \(36\times 10^{-6}\)\(s^{-1}\) (V\({}_{\rm MU}\)), \(37\times 10^{-6}\)\(s^{-1}\) (V18), and \(19\times 10^{-6}\)\(s^{-1}\) (N\({}^{3}\)LO) respectively. Although all models predicted a similar fraction of \(\nu g_{7/2}^{-1}\times f_{7/2}^{2}\) (\(\sim 45\%\)) in their wavefunctions, they differed in the amounts of proton excitation across \(Z=50\), 0.4 in N\({}^{3}\)LO, and 0.1 in V18 and V\({}_{\rm MU}\). The experimental GT strength, \(20(4)\times 10^{-6}\)\(s^{-1}\), was as quenched as the N\({}^{3}\)LO prediction, suggesting sizeable proton core excitation contributing to the state. The comparison reveals the sensitivity of this particular GT decay strength to the employed \(NN\) interactions. Considering this \(\nu g_{7/2}\rightarrow\pi g_{9/2}\) transition dominates the decay rate (and half-life) in not only \({}^{133}\)In but also a large number of neutron-rich nuclei southeast of \({}^{132}\)Sn, it is of paramount importance to reproduce this decay in \({}^{133}\)In in any theoretical calculations aiming to provide reliable nuclear-decay input to astrophysical applications. Next, we presented in Figs. 3 (e, f) the cumulative \(\beta\)-strength distribution from the experiment and LSSM with N\({}^{3}\)LO. The calculations reproduced the experimental distribution of both states below 9 MeV, giving half-lives of 145 ms for the ground state and 169 ms for the isomer, in good agreement with the literature values (162 and 167 ms) [24]. Towards higher excitation energy, a sharp kink emerged in the calculations and drove the distributions up over the experimental ones. Because FF decays are extremely weak there, see Figs. 3 (e, f), those strengths are ascribed to the GT decays involving both the neutron and proton orbitals in the 50-82 shell, or the \(4\hbar\omega\) shell, in Fig. 1. The disagreement is most likely caused by the truncation of 2p-2h excitation across \({}^{132}\)Sn, which is not sufficient to describe fully the \(NN\) correlations and strength distribution at such high energy. Even though it has a relatively minor impact on the calculated half-lives and thus the \(r\)-process, the problem will have to be addressed with more advanced theoretical treatment in the future. _Feedback to global calculations--_ Although the LSSM calculations achieved a satisfactory agreement with our data, it is impractical to make systematic calculations across the nuclear chart due to the large model spaces. Therefore, global nuclear models are indispensable for modeling the \(r\)-process. Our new measurements can serve as constraints and validation points to improve the accuracy of those global models beyond what was previously achievable. The measured branching ratios from this work allowed the extraction of partial half-lives of GT and FF transitions of an \(r\)-process nucleus. According to our LSSM calculations in Fig. 3, FF transitions dominate the strength below the GT peak at 6 MeV, whereas those above 6 MeV are mostly GT transitions. Therefore, the partial half-life of FF transitions is obtained by summing \(\beta\)-decay probabilities below the \(7/2^{+}\) state at 5.93 MeV, including the bound states [23; 24; 25]. The GT transitions contain the rest of the feeding intensities from 5.93 MeV onward. To accommodate the model dependency, we estimated a systematic uncertainty of attributing 50% of the strength above 6 MeV to FF transitions. 
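Since partial decay constants are additive, the partial half-lives defined this way satisfy a simple closure relation with the total half-life, which can serve as a sanity check on the values quoted in the next paragraph (the relation itself is standard and is not taken from the text):

\[t^{X}=\frac{T_{1/2}}{\sum_{i\in X}I_{\beta,i}}\quad(X={\rm GT,\,FF}),\qquad\frac{1}{T_{1/2}}=\frac{1}{t^{\rm GT}}+\frac{1}{t^{\rm FF}}.\]

Indeed, \((1/260+1/435)^{-1}~{\rm ms}\approx 163\) ms and \((1/1130+1/195)^{-1}~{\rm ms}\approx 166\) ms, consistent with the measured half-lives of 162 and 167 ms.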
The resultant partial half-lives are \(t^{\rm GT}=260(40)\) and \(t^{\rm FF}=435(60)\) ms for \({}^{133g}\)In, and \(t^{\rm GT}=1130(500)\) and \(t^{\rm FF}=195(10)\) ms for \({}^{133m}\)In. Although the two states have similar half-lives, the ground-state decay is dominated by GT transitions, whereas the isomeric decay is mostly carried by FF transitions. Because global models only predict ground-state decays to date, the comparison in Fig. 4 is presented for \({}^{133g}\)In exclusively. The global models include Moller03 (FRDM+QRPA) [8], Borzov16 (DF+CQRPA) [42], Marketin16 (RHB+\(pn\)-RQRPA) [9], Ney20 (EFA-pnFAM) [12], and Sarriguren22 (HF+BCS+QRPA) [43]. All five are QRPA calculations that differ in their degree of self-consistency, density functional, or calculation method. In the results of Moller03, the discrepancy is mainly driven by the GT decays, while in Marketin16, it is caused by FF transitions with overestimated strength. Although Ney20 finds a reasonable ratio between the GT and FF strengths, its absolute decay rates are underestimated by more than a factor of two. The deviations suggest the strength distributions of those models need to be revised for \({}^{133}\)In to improve their prediction power for other \(r\)-process nuclei further away from \({}^{132}\)Sn. Borzov16 achieves the best agreement overall with the experimental data. Even though Sarriguren22 does not include FF decays, it provides a reasonable partial GT half-life for \({}^{133g}\)In.

Figure 3: Comparisons of excitation energy and decay strength between LSSM and experimental data. Figures (a)-(d) show the results of four individual transitions populating the \(\nu\)2p-1h states. Figures (e) and (f) present cumulative strength distribution up to \(E_{ex}=11\) MeV for \({}^{133g,m}\)In, respectively. The calculation only includes the results from N\({}^{3}\)LO because of its better agreement in the GT strength. The theoretical FF contribution is drawn explicitly in dashed lines.

_Summary and prospects--_ In conclusion, we established with high precision the \(\beta\)-decay strength distribution of \({}^{133g,m}\)In. Its ground-state decay is dominated by a GT transformation, while the isomer almost exclusively decays through FF transitions. The experimental findings were used to benchmark LSSM calculations with effective interactions. For the GT transformation \(9/2^{+}\to 7/2^{+}\), only N\({}^{3}\)LO produced a good agreement with the data. In contrast, all the models agreed with the FF decays at lower excitation energy. The comparison of several existing global models shows a wide range of competition between GT and FF transitions in this simple nucleus, with only Borzov16 estimating their relative contributions and absolute decay rates correctly. It is noteworthy that novel \(ab\)-initio theories have developed rapidly in nuclear physics during the last decade. While not yet available for global predictions, they have already given essential advancement in understanding nuclear \(\beta\)-decay probabilities [44]. The measurements from this work will serve as an anchor point on the neutron-rich side of the nuclear chart, where the strengths are more fragmented and quenched than those in the \({}^{100}\)Sn region along the \(Z=N\) line [45; 46]. We acknowledge the support of the ISOLDE Collaboration and technical teams. The authors thank Dr. Soda Yoshida, Dr. Yutaka Utsuno, Dr. Noritaka Shimizu, Dr. Kate L Jones, and Dr. Ivan N Borzov for valuable discussions.
This project was supported by the European Union's Horizon 2020 research and innovation programme Grant Agreements No. 654002 (ENSAR2), by the Office of Nuclear Physics, U.S. Department of Energy under Award No. DE-FG02-96ER40983 (UTK) and DE-AC05-00OR22725 (ORNL), by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Award No. DE-NA0002132, by the Romanian IFA project CERN-RO/ISOLDE, by the Research Foundation's (FWO, Belgium), by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy Office (BriX network P7/12), by the German BMBF under contracts 05P18PKCIA and 05P21PKCII in Verbundprojekte 05P2018 and 05P2021, by the UK Science and Technology Facilities Research Council (STFC) of the UK Grant No. ST/R004056/1, ST/P004598/1, ST/P003885/1, ST/V001027/1, and ST/V001035/1, by National Natural Science Foundation of China under Grant No. 11775316, by the Polish National Science Center under Grants No. 2019/33/N/ST2/03023, No. 2020/36/T/ST2/00547, and No. 2020/39/B/ST2/02346, by Spanish MCIN/AEI FPA2015-65035-P, PGC2018-093636-B-I00, RTI2018-098868-B-I00, PID2019-104390GB-I00, PID2019-104714GB-C21, and IJCI-2014-19172 grants, by Universidad Complutense de Madrid (Spain) through Grupo de Fisica Nuclear (910059) and Predoctoral Grant No. CT27/16-CT28/16. The LSSM calculations were carried out by KSHELL [47].
2306.00584
Representation Theorems Obtained by Mining across Web Sources for Hints
A representation theorem relates different mathematical structures by providing an isomorphism between them: that is, a one-to-one correspondence preserving their original properties. Establishing that the two structures substantially behave in the same way, representation theorems typically provide insight and generate powerful techniques to study the involved structures, by cross-fertilising between the methodologies existing for each of the respective branches of mathematics. When the related structures have no obvious a priori connection, however, such results can be, by their own nature, elusive. Here, we show how data-mining across distinct web sources (including the Online Encyclopedia of Integer Sequences, OEIS), was crucial in the discovery of two original representation theorems relating event structures (mathematical structures commonly used to represent concurrent discrete systems) to families of sets (endowed with elementary disjointness and subset relations) and to full graphs, respectively. The latter originally emerged in the apparently unrelated field of bioinformatics. As expected, our representation theorems are powerful, allowing to capitalise on existing theorems about full graphs to immediately conclude new facts about event structures. Our contribution is twofold: on one hand, we illustrate our novel method to mine the web, resulting in thousands of candidate connections between distinct mathematical realms; on the other hand, we explore one of these connections to obtain our new representation theorems. We hope this paper can encourage people with relevant expertise to scrutinize these candidate connections. We anticipate that, building on the ideas presented here, further connections can be unearthed, by refining the mining techniques and by extending the mined repositories.
Marco B. Caminati, Juliana K. F. Bowles
2023-06-01T11:54:54Z
http://arxiv.org/abs/2306.00584v1
# Representation Theorems Obtained by Mining across Web Sources for Hints ###### Abstract A _representation theorem_ relates different mathematical structures by providing an isomorphism between them: that is, a one-to-one correspondence preserving their original properties. Establishing that the two structures substantially behave in the same way, representation theorems typically provide insight and generate powerful techniques to study the involved structures, by cross-fertilising between the methodologies existing for each of the respective branches of mathematics. When the related structures have no obvious a priori connection, however, such results can be, by their own nature, elusive. Here, we show how data-mining across distinct web sources (including the Online Encyclopedia of Integer Sequences, OEIS), was crucial in the discovery of two original representation theorems relating _event structures_ (mathematical structures commonly used to represent concurrent discrete systems) to families of sets (endowed with elementary disjointness and subset relations) and to full graphs, respectively. The latter originally emerged in the apparently unrelated field of bioinformatics. As expected, our representation theorems are powerful, allowing to capitalise on existing theorems about full graphs to immediately conclude new facts about event structures. Our contribution is twofold: on one hand, we illustrate our novel method to mine the web, resulting in thousands of candidate connections between distinct mathematical realms; on the other hand, we explore one of these connections to obtain our new representation theorems. We hope this paper can encourage people with relevant expertise to scrutinize these candidate connections. We anticipate that, building on the ideas presented here, further connections can be unearthed, by refining the mining techniques and by extending the mined repositories. models of computation, algebraic and categorical methods, representation theorems, concurrency, intelligent mathematics, AI-aided mathematical discovery, semantics, event structures, full graphs ## I Introduction In automated mathematical discovery and experimental mathematics, a machine can be involved in any of the stages leading to the formulation of new mathematical conjectures. Usually, the interestingness and correctness of such conjectures are important criteria in informing how the machine performs its tasks. Within this quite general framework, there is considerable variability as to the machine's role: it can, for example, generate conjectures [1, 2, 3], attach to them a measure of interestingness [4], search given input for plausible hints of conjectures [5], or compute results suggesting patterns that can inspire a mathematician [6, 7]. Correspondingly, the degree of the machine's awareness of the involved mathematical objects varies from it applying a formal reasoning system on such objects to it merely examining examples of (possibly yet to be stated) conjectures. We will focus on the latter end of this spectrum, sitting at the intersection between automated mathematical discovery and data mining. One obvious advantage of this choice is the extensive amount of data it grants: any conjecture involving finite objects (for example, graphs) leaves a trace obtained by counting the size of instances (for example, the number of vertices in graphs satisfying the hypotheses of a conjecture) of these objects. 
These counts have a universal representation as decimal integers written in plain text, and therefore interesting matches between such counts can potentially be found over the vast range of all digitised documents. As a consequence, another advantage of this approach is that it is domain-agnostic and potentially able to link finite mathematical objects not apparently related (as long as one can count them), which we will see to be crucial in obtaining the results in this paper. The idea is, therefore, to mine existing integer datasets for interesting relationships between them, possibly signaling deeper connections. This idea is by no means new [5, 8, 9]. However, we believe that this paper will provide evidence that some aspects of it are worth more attention: the possibility of mining across distinct datasets and of exploiting datasets and tools less specific to mathematics. Section II details how we put the above mining ideas into practice, and the resulting outcomes. In the rest of the paper, we focus on one of these outcomes in particular, on the original mathematical results it hinted us to formulate, and on their proofs. Section III gives more specific, yet informal context about the family of theorems these results belong to and about their importance and methodological usefulness. Section IV introduces the definitions and notations to express these results. Section V illustrates a representation theorem for event structures (a computational model for discrete concurrent systems), Section VI introduces a theorem linking event structures to full graphs, and explains how both this result and that of Section V were crucially suggested by the findings from Section II. Section VIII concludes. ## II Mining Integer Sequences across Sources The Online Encyclopedia of Integer Sequences (OEIS) [10] is a searchable online database containing the first terms (at least \(4\), in decimal representation) of over \(340,000\) integer sequences. Together with the field containing the terms, there are several meta-data fields: an unique ID, name, comments, references, keywords or flags (marking, for example, whether a sequence is finite), etc. The OEIS has already been profitably used for research in automated mathematical discovery [11, 5, 12]. However, all the efforts we are aware of limit their discovery domain to the OEIS alone, potentially missing integer sequences not featured there. This observation naturally leads one to investigate what can be found by looking up OEIS sequences (or fragments thereof) on the largest available repository of scientific literature, and Google Search (or Google, for short) is an obvious candidate: it indexes a huge number of web pages and documents and it subsumes Google Scholar, hosting an especially relevant subset of documents (i.e., scientific papers). For our purposes, one particular attractiveness of Google Scholar is its own text extraction program [13], making analog scans of older papers searchable: papers older than OEIS are particularly at risk of having being omitted from it, and therefore worth being explored. In 2019, Google Scholar was estimated, with \(389\) million records, to be the largest bibliographic database [14]; by querying Google, we will have access to those records and many more. The price to pay for such a breadth of information is the inconvenience in accessing and processing it: while searching within the OEIS, one can automate numerical transformations on the sequences in order to facilitate matching between them. 
This can happen either on the server side (typically through the Superseeker service [10]) or on the user side [11, 9, 5, 12]. Under our approach, this possibility is largely gone, because any numerical transformation should happen before querying Google, leading to a multiplication of queries for every transformation: this is clearly impractical. The transformations applied by Google on the queried terms are largely non-numerical (e.g., expanding a word into its English synonyms, or correcting possible mis-spellings) and hence immaterial in our case, except for possible formatting issues (e.g., matching the numerical representations \(16000\) and \(16,000\)). Furthermore, a bulk of noise results is to be expected, deriving from irrelevant occurrences of the searched numbers (e.g., in serial numbers, catalogs, etc.). For these reasons, and since we have no control on how Google processes the input information it is passed, we need to carefully craft the format of that information beforehand. The main guiding idea in this task is simple: we want interesting matches between OEIS and Google, and complexity is a convenient measure of interestingness [8]. Since the decimal representation length of an integer has a good correlation with its complexity (assuming non-significant figures are omitted, which is the case for the OEIS), we should ideally pass long integers from the OEIS to Google. This is especially true in our case where we need to treat, due to the limitations explained above, numbers as plain text, hence we do not have much else than length alone on which to base our assessment of the complexity (and therefore, of the interestingness) of a number. However, we do not want too long numbers, because these are usually hard to compute, thereby potentially reducing too much the range of documents Google will return. Therefore, we need to strike a balance with respect to the length of the numbers we pass to Google: we would like the minimal length leading to the exclusion of non-mathematical occurrences (such as dates, page numbers, catalog numbers, etc.) among the search results from Google. Empirically, we found that six digits do a reasonable job in that respect. We downloaded all OEIS entries into a 16Gb SQLite database using [15], removed all the sequences not having the "hard" keyword (meaning the sequence is not considered hard to compute), or having the field "formula" non empty (meaning that some mathematical property of the sequence is already known), or having no entries with more than five digits. From the remaining entries, we sorted the terms according to their length, picked the smallest term with at least six digits and either the next one or (if there was no next one) the previous one. This scheme allowed us to produce, for \(4123\) sequences, two distinct terms which were passed to Google, together with the directive -site:oeis.org, to exclude matches within the OEIS. The text snippets generated by Google in response, and describing the first matches among the documents indexed by it, were parsed as follows: first, the sequences with no matches were discarded, which left us with \(3591\) sequences, all potentially interesting. At this point, given the high number of matches to be manually examined, we decided to give priority for consideration to some matches, as follows. We grepped each result for a set of arbitrary mathematical terms (including for example the words "graph", "group", "ring"). 
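To make the selection scheme concrete, here is a minimal sketch of the term-picking and query-building steps described above. It is only an illustration: the record fields (`id`, `keywords`, `formula`, `terms`) and the exact-phrase quoting are assumptions, not the schema or query format actually used.

```python
# Illustrative sketch of the selection described above; the field names of the
# local OEIS dump and the query quoting are assumptions, not the actual setup.

def build_queries(sequences):
    """sequences: iterable of dicts with keys 'id', 'keywords', 'formula', 'terms'."""
    queries = {}
    for seq in sequences:
        if "hard" not in seq["keywords"] or seq["formula"]:
            continue  # keep only hard-to-compute sequences with no known formula
        by_length = sorted(seq["terms"], key=lambda t: len(str(abs(t))))
        idx = next((i for i, t in enumerate(by_length) if len(str(abs(t))) >= 6), None)
        if idx is None:
            continue  # no term with at least six digits
        first = by_length[idx]
        if idx + 1 < len(by_length):
            second = by_length[idx + 1]   # the next term in length order
        elif idx > 0:
            second = by_length[idx - 1]   # otherwise fall back to the previous one
        else:
            continue
        queries[seq["id"]] = f'"{first}" "{second}" -site:oeis.org'
    return queries
```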
If there was a match not occurring in the sequence OEIS name, that sequence was given priority. Among those, the authors started from the ones pertaining fields where they felt most knowledgeable, and soon found an interesting pair: \(41099\), \(3528258\), occurring both in OEIS A284276 and in [16, Section 4]. This match was decisive in suggesting the results we illustrate in the rest of this paper: it is an instance of Corollary VI.3, which, in turn, suggested us Theorems VI.2 and V.2 as dependencies. Without that numeric cue, none of these results would have materialised: the theorems arose to explain why this match was not a coincidence. The remaining matches need further human examination. ## III Representation Theorems A fundamental and extremely fruitful pattern in mathematics is to observe how some operations and correspondences between objects behave, and then to capture this behaviour via axioms, obtaining an abstract structure. Together with the original meaning of the operations and correspondences one has thereby an abstract level: the structure axioms are formulas describing the formal relationship between objects, operations and correspondences, and can be manipulated, studied, and generalised algebraically without caring what their original meaning was. One can hence talk of two levels of thinking of the given mathematical objects: the original one (also called the concrete level), and the abstract one. Examples of this way of obtaining abstract structures from concrete interpretations abound in mathematics: just to provide two well-known instances, from studying how permutations behave one obtains the group axioms; and from studying how \(\cup\) and \(\cap\) behave one obtains the (distributive) lattice axioms. A natural question is how and to what extent one can go back from the abstract level to the concrete level: in other words, can any abstract structure be represented via a suitable concrete implementation of it? For many important structures, this question is answered positively by _representation theorems_, providing the existence of a suitable isomorphism allowing to go back and forth between these two levels;1 returning to the examples above, Cayley's representation theorem provides a representation of any group in terms of a permutation group [17, Section II.7], and Birkhoff's representation theorem provides a representation of any finite distributive lattice in terms of a lattice of downsets [18, Theorem 5.12]. Footnote 1: In a more general acceptation, a representation theorem provides an isomorphism between an abstract structure and another structure, possibly itself abstract. The fruitfulness of this two-level approach has many facets, including the ability of algebraically manipulating the concrete objects forgetting about their nature, thus seeing to what extent their known properties or relations are generalisable; or, oppositely, the reasoning aid given by a concrete setting as an inspiration to explore further consequences or generalisations of the abstract axioms given by properties of the concrete objects obeying them. This fruitfulness is testified by the existence of dedicated fields using representations to study properties of given structures: e.g., representation theory studies the properties of groups using their representations as linear transformations of vector spaces. 
In typical cases (such as the two just mentioned), the fact that the abstract level originated right from the start from the study of the concrete level makes such theorems quite natural to express and to prove: such results are, in these typical cases, attractively simple and elegant.2 Footnote 2: One should note that this simplicity is a boon with respect to the fruitfulness just mentioned. However, other mathematical structures could well have a more tortuous birth. For example, prime event structures (formally introduced in Section IV) historically and conceptually developed in stages: elementary event structures were expanded into prime event structures to accommodate nondeterminism [19, Section 2]. As we will see in this paper, this tortuous birth led to miss, up to now, a remarkably simple representation theorem (V.2) for prime event structures; whose simplicity, however, does not restrain the typical fertility of representation theorems, allowing us to immediately unearth unforeseen links between prime event structures and full graphs (the further representation theorem VI.2) and cross fertilisation results (Corollaries VI.3 and VI.4). Another possible reason for this accident could be that the original purpose of prime event structures is to model computations of undetermined duration, which led to put less attention into the finite case, where our theorems are particularly simple; as briefly argued in Section VIII, we believe that Theorem V.2, besides its own importance, can serve as a fundamental stepping stone towards a generalisation to the infinite case. To give a final reason: given the original role of the elements in prime event structures as representatives of computational events, it is not natural to associate to them sets (as Theorem V.2 does); or, at least, it is less natural than in cases, such as lattices, where a concrete level consisting of sets was historically a starting point to formulate the abstract level definitions. The oversight of Theorem V.2 is made even more surprising by the fact that other, more complicated representation theorems were formulated for prime event structures since their inception [19, Theorems 2.10 and 3.8]. ## IV Preliminaries and Event Structures Set membership, inclusion, union, intersection, set-theoretical difference, cartesian product are denoted by the infix symbols \(\in\), \(\subseteq\), \(\cup\), \(\cap\), \(\setminus\), \(\times\), respectively; arbitrary union and intersection over a set of sets are denoted by the prefix symbols \(\bigcup\) and \(\bigcap\). A set \(R\) satisfying \(R\subseteq X\times Y\) for some \(X\), \(Y\) (i.e., any set \(R\) containing only ordered pairs) is called a binary relation or simply a relation. The minimal \(X\) and \(Y\) satisfying the previous inclusion are the domain (\(\operatorname{dom}\)) and range (\(\operatorname{ran}\)) of \(R\), respectively, while its converse \(R^{-1}\) is the set obtained by flipping the elements of each the pairs in \(R\); the field of \(R\) is \(\operatorname{fe}R:=\operatorname{dom}R\cup\operatorname{ran}R\). Given a set \(X\), the restriction of \(R\) to \(X\) is defined as \(\left.R\right|_{X}:=\left(X\times\operatorname{ran}R\right)\cap R\), while the image of \(X\) through \(R\) is \(R^{*}\left(X\right):=\operatorname{ran}R\right|_{X}\). 
The product or composition of relations \(R\) and \(S\) is defined as \(R;S:=\left\{\left(x,z\right)\cdot\exists y.\ \left(x,y\right)\in R\wedge\left(y,z \right)\in S\right\}.\)\(R\) is right-unique if, for any \(x\), \(R^{*}\left(\left\{x\right)\right\}\) contains at most one element, while it is left-unique if \(R^{-1}\) is right-unique. A right-unique relation is more commonly called a function or a map. In this case, there are special notations in use: 1) \(R\left(x\right)\), or even only \(R\)\(x\), indicates the unique element of \(R^{*}\left(\left\{x\right\}\right)\), when \(x\in\operatorname{dom}R\); 2) \(R:X\to Y\) indicates that \(\operatorname{dom}R=X\) and that \(\operatorname{ran}R\subseteq Y\); 3) \(X\ni x\overset{R}{\mapsto}y\) in lieu of \(R=\left\{\left(x,y\right).x\in X\right\}\), with "\(X\ni\)" or the superscript in \(\overset{R}{\rightarrow}\) possibly dropped when the context permits; 4) \(S\circ R\) in lieu of \(R;S\). A first example of a function is \(\operatorname{card}\), associating to each set \(X\) of a given family its unique cardinality \(\operatorname{card}X\) (also denoted \(\left|X\right|\)). A left-unique function is called injective, or an injection. \(2^{X}\) is the set of all subsets of \(X\), while \(\mathbb{Z}^{X}:=2^{X}\cap\left(\operatorname{card}^{-1}\right)^{*}\left( \mathbb{N}\right)\) denotes the finite subsets of \(X\). Note that, for any relation \(R\), \(R^{*}\) is always a function. When all the elements of \(\operatorname{ran}R\) are sets,3 there is an additional function one can derive from \(R\): \(R^{\cup}:=\mathrm{dom}\,R\ni x\mapsto\bigcup R^{*}\left(\{x\}\right)\subseteq \bigcup\mathrm{ran}\,R\), associating to each \(x\) the union of all the sets in relation with \(x\); if, furthermore, \(R\) is a function, then \(R\) and \(R^{\cup}\) coincide. \(\mathrm{fx}\,R:=\mathrm{fe}\left(R\cap\mathcal{I}\right)=\mathrm{dom}\left(R \cap\mathcal{I}\right)=\mathrm{ran}\left(R\cap\mathcal{I}\right)\) is the set of fixed points of \(R\), where \(\mathcal{I}\) is the identity function. A relation \(R\) is said to be: 1) _reflexive_ if \(\mathrm{fe}\,R\subseteq\mathrm{fx}\,R\); 2) _irreflexive_ if \(R\cap\mathcal{I}=\emptyset\); 3) _transitive_ if \(R;R\subseteq R\); 4) _symmetric_ if \(R^{-1}\subseteq R\); 5) _antisymmetric_ if \(R\cap R^{-1}\subseteq\mathcal{I}\); 6) a _preorder_ if it is both reflexive and transitive; 7) a _partial order_ if it is an antisymmetric preorder. A bijection between sets \(X\) and \(Y\) is an injection \(f\) with \(\mathrm{dom}\,f=X\) and \(\mathrm{ran}\,f=Y\). Footnote 3: This is always the case in some foundations: e.g., ZF, in which anything is a set. A prime event structure (or just _event structure_, ES) [19] models a concurrent computation by specifying which computational events are causally dependent and which events mutually exclude. This is attained by two relations \(\leq\) (causality), and \(\#\) (conflict) as from the following definition. **Definition IV.1**.: _An event structure is a pair of relations \((\leq,\#)\) where \(\leq\) is a partial order, \(\#\) is irreflexive and symmetric, \((\mathrm{fe}\leq)\supseteq(\mathrm{fe}\,\#)\) is called the set of events, and for any three events \(x_{0},x_{1},y\): \(x_{0}\#y\wedge x_{0}\leq x_{1}\to x_{1}\#y\)._ The last condition is referred to as conflict propagation. 
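Definition IV.1 is easy to check mechanically for finite relations. The sketch below is not from the paper: it encodes \(\leq\) and \(\#\) as sets of ordered pairs, anticipating the letter-based notation introduced in the next section, and tests the partial-order, irreflexivity, symmetry and conflict-propagation conditions.

```python
# Minimal sketch, not from the paper: checking the axioms of Definition IV.1
# for finite relations D (causality) and U (conflict) given as sets of pairs.

def field(R):
    """fe R: the union of domain and range of a binary relation."""
    return {x for x, _ in R} | {y for _, y in R}

def is_partial_order(D):
    events = field(D)
    reflexive = all((x, x) in D for x in events)
    transitive = all((x, z) in D for (x, y) in D for (w, z) in D if y == w)
    antisymmetric = all(x == y for (x, y) in D if (y, x) in D)
    return reflexive and transitive and antisymmetric

def is_event_structure(D, U):
    if not field(U) <= field(D):          # fe U must be contained in fe D
        return False
    irreflexive = all(x != y for (x, y) in U)
    symmetric = all((y, x) in U for (x, y) in U)
    # conflict propagation: x0 # y and x0 <= x1 imply x1 # y
    propagation = all((x1, y) in U for (x0, y) in U for (w, x1) in D if w == x0)
    return is_partial_order(D) and irreflexive and symmetric and propagation

# Toy example (hypothetical): a causes b, and c conflicts with a, hence with b.
D = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b")}
U = {("a", "c"), ("c", "a"), ("b", "c"), ("c", "b")}
assert is_event_structure(D, U)
```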
The standard infix notation in Definition IV.1 can get cumbersome, therefore we will often use the set theoretical notation and denote these relations with letters, for example writing \((x,y)\in D\) in lieu of \(x\leq y\) and \((x,y)\in U\) in lieu of \(x\#y\). ## V A Representation Theorem for ESs The main result of this section is Theorem V.2, establishing that elements of any finite ES can be represented as finite sets, in such a way that \(\leq\) corresponds to \(\supseteq\) and \(\#\) to disjointness. Formally, this means that it is always possible to find a function \(f\) associating a set to each event of a finite ES subject to the constraints given by Definition V.1. We will call such a function a _representation_ for the given ES. **Definition V.1**.: _Given two binary relations \(D\) and \(U\), the set-valued function \(f\) is a representation for \((D,U)\) if_ \[\forall x\ y\in\mathrm{dom}\,f.\ \left((x,y)\in D\leftrightarrow f \left(x\right)\supseteq f\left(y\right)\right)\ \wedge \tag{1}\] \[\forall x\ y\in\mathrm{dom}\,f.\ \left((x,y)\in U\leftrightarrow f \left(x\right)\cap f\left(y\right)=\emptyset\right). \tag{2}\] We are now ready to state our representation theorem. **Theorem V.2** (Representation theorem).: _Consider two binary relations \(D\) and \(U\), with \(D\) finite and \(\mathrm{fe}\,U\subseteq\mathrm{fe}\,D\). Then \((D,U)\) is an event structure if and only if there is an injective representation \(f:\mathrm{fe}\,D\rightarrow\overline{2}^{\mathbb{N}}\setminus\{\emptyset\}\) for \((D,U)\)._ That is, a sufficient and necessary condition for a given finite number of events to form an event structure is the possibility of associating to each of them a set in such a way that \(\supseteq\) corresponds to \(\rightarrow^{*}\) and \(\#\) corresponds to disjointness. In the theorem, the associated sets are all subsets of \(\mathbb{N}\); however, any other infinite superset would do: the choice of \(\mathbb{N}\) is only dictated by technical convenience. Figure 2 shows a representation for the ES of Figure 1. Fig. 1: An example event structure, with eight events related by causality (denoted by an arrow standing for \(\leq\)) and conflict (denoted by a dashed line). Fig. 2: A representation for the event structure of Figure 1. Now, the arrows represent \(\supseteq\) and the dashed lines the disjointness relation. Theorem V.2 states that any set of events is an event structure if and only if such a representation is constructible. The two implications composing the logical equivalence ("if and only if") in Theorem V.2 are proved separately in Sections V-A and V-B. ### _Having a Representation Implies Being an ES_ The first step is proving that condition (1) is strong enough to impose the partial order properties of \(\supseteq\) onto \(D\). This can be done directly but, instead, we will break down the proof into more general results, which we will gather in Lemma V.4. Formula (1) closely resembles the definition of \(f\) being an order embedding [18], except for the fact that here \(D\) is not assumed to be a partial order (because this is what we need to prove), while the standard definition of an order embedding takes that as a pre-condition. Therefore, we take the chance to study what can be proven about two relations linked by an order embedding when we drop basic assumptions. In this section, we reason about generic relations \(P\) and \(Q\), rather than the specific ones, \(D\) and \(\supseteq\), appearing in (1). We start by
stating the standard definitions of order-preserving and order-embedding, only with the order assumptions dropped, together with some additional definitions. **Definition V.3**.: _Given two relations \(P\) and \(Q\), a map \(f\) is said to be 1) \(\left(P,Q\right)\)-preserving if \(\forall x_{0},\ x_{1}\in\operatorname{dom}f.\left(x_{0},x_{1}\right)\in P\to \left(f\left(x_{0}\right),f\left(x_{1}\right)\right)\in Q\); 2) \(\left(P,Q\right)\)-converse-preserving if \(\forall x_{0},\ x_{1}\in\operatorname{dom}f.\)\(\left(f\left(x_{0}\right),f\left(x_{1}\right)\right)\in Q\to \left(x_{0},x_{1}\right)\in P;\) 3) a \(\left(P,Q\right)\)-embedding if it is both \(\left(P,Q\right)\)-preserving and \(\left(P,Q\right)\)-converse-preserving. The prefix "\(\left(P,Q\right)\)-" can be dropped when no ambiguity arises. We also introduce the map \(\iota_{f}:=\left(y_{0},y_{1}\right)\mapsto\left(f^{-1}\right)^{*}\left\{y_{ 0}\right\}\times\left(f^{-1}\right)^{*}\left\{y_{1}\right\}\)._ **Lemma V.4**.: _Let \(P\), \(Q\) be relations, \(f\) a function. 1) \(f\) is converse-preserving iff \(\bigcup\iota_{f}^{*}\ Q\subseteq P\); 2) \(\left(f^{-1}\right)^{*}\left(\operatorname{fx}Q\right)\subseteq\operatorname {fx}\left(\bigcup\iota_{f}^{*}\ Q\right)\). 3) \(\operatorname{fe}P\subseteq\operatorname{dom}f\to\) \(\left(f\text{ is preserving iff }P\subseteq\bigcup\iota_{f}^{*}\ Q\right)\). 4) If \(Q\) is transitive, then \(\bigcup\iota_{f}^{*}\ Q\) is. 5) If \(Q\) is reflexive, then \(\operatorname{fe}\left(\bigcup\iota_{f}^{*}\ Q\right)\subseteq\left(f^{-1} \right)^{*}\left(\operatorname{fx}Q\right)\)._ Proof.: Theses (1) and (3) are easy rephrasings of, respectively, (2) and (1) in Definition V.3. Now set \(P^{\prime}:=\bigcup\iota_{f}^{*}\ Q\). Proof of (2): if \(\left(y_{0},y_{0}\right)\in Q\) and \(x_{0}\in\left(f^{-1}\right)^{*}\left\{y_{0}\right\}\), then, in particular, \(\left(x_{0},x_{0}\right)\in\left(f^{-1}\right)^{*}\left\{y_{0}\right\}\times \left(f^{-1}\right)^{*}\left\{y_{0}\right\}\subseteq P^{\prime}\). Proof of (4): consider \(\left(x_{0},x_{1}\right),\left(x_{1},x_{2}\right)\in P^{\prime}\); \(\left\{\left(f\ x_{0},f\ x_{1}\right),\left(f\ x_{1},f\ x_{2}\right)\right\}\subseteq Q\), so that \(\left(f\ x_{0},f\ x_{2}\right)\in Q\) by transitivity, and \(\left(x_{0},x_{2}\right)\in\left(f^{-1}\right)^{*}\left\{f\ x_{0}\right\} \times\left(f^{-1}\right)^{*}\left\{f\ x_{2}\right\}\subseteq P^{\prime}\). Proof of (5): by construction of \(P^{\prime}\), \(x_{0}\in\operatorname{fe}P^{\prime}\) implies the existence of \(y_{0}\in\operatorname{fe}Q\) such that \(x_{0}\in\left(f^{-1}\right)^{*}\left\{y_{0}\right\}\). Now, by reflexivity of \(Q\): \(P^{\prime}\supseteq\left(f^{-1}\right)^{*}\left\{y_{0}\right\}\times\left(f^{-1 }\right)^{*}\left\{y_{0}\right\}\ni\left(x_{0},x_{0}\right).\) **Corollary V.5**.: _Assume \(f\) is a \(\left(P,Q\right)\)-embedding, \(\operatorname{fe}P\subseteq\operatorname{dom}f.\) If \(Q\) is a preorder, then \(P\) is. Moreover, if \(f\) is injective and defined over \(\operatorname{fe}P\), and \(Q\) is a partial order, then \(P\) is._ Proof.: \(P^{\prime}:=\bigcup\iota_{f}^{*}\ Q\) inherits \(Q\)'s transitivity by virtue of (4) in Lemma V.4, and \(Q\)'s reflexivity by chaining (5) and (2) of Lemma V.4. Using (1) and (3) in Lemma V.4, the embedding property of \(f\) implies \(P=P^{\prime}\), and we just saw that \(P^{\prime}\) is a preorder. Assume \(\left\{\left(x,y\right),\left(y,x\right)\right\}\subseteq P\). 
Then \(f\ x=f\ y\) by antisymmetry of \(Q\), so that the antisymmetry of \(P\) is satisfied by injectivity. **Lemma V.6**.: _Assume \(f\) is an injective representation \(f:\operatorname{fe}D\to\mathbb{Z}^{\mathbb{N}}\setminus\left\{\emptyset\right\}\) for \(\left(D,U\right)\). Then \(\left(D,U\right)\) is an ES._ Proof.: (1) means that \(f\) is a \(\left(D,\supseteq\right)\)-embedding, and the latter is a partial order, so that \(D\) also is by virtue of Corollary V.5. Consider events \(x_{0},x_{1},y\), and assume \(\left(x_{0},y\right)\in U\wedge\left(x_{0},x_{1}\right)\in D\). Then \(f\ x_{0}\cap f\ y=\emptyset\wedge f\ x_{0}\supseteq f\ x_{1}\), giving conflict propagation. The symmetry of \(U\) is immediate from that of \(\cap\), and the irreflexivity of \(U\) uses \(\emptyset\notin\operatorname{ran}f.\) ### _Any ES Has a Representation_ The proof of this direction (the "only if" part of Theorem V.2) is more elaborate than the other one (Lemma V.6), because we now need to construct a representation \(f\) given any finite event structure. We will do that recursively: we will remove one suitable element of the given event structure, thus lowering its cardinality and obtaining a representation for this reduced event structure, and we will show how to extend this representation so as its property of being a representation still holds with respect to the original event structure. The aforementioned operations of removing one element from a relation and of extension of a function are formally introduced, in forms suitable for our goals, in Definition V.7. **Definition V.7**.: _The subtraction of sets \(X\), \(Y\) from the relation \(R\) is defined as \(R-\left(X,Y\right):=R\backslash\left(\left(X\times\operatorname{ran}R\right) \cup\left(\operatorname{dom}R\times Y\right)\right)\). We will use the shorthand notation \(R-s\) to indicate \(R-\left(\left\{s\right\},\left\{s\right\}\right)\). The pointwise union of relations \(R_{0}\) and \(R_{1}\) is the function \(R+_{0}R_{1}:=\left(R_{0}\cup R_{1}\right)^{\heartsuit}\). By associativity, one extends this notion to multiple relations in the obvious way, writing \(\sum_{i}R_{i}\). For singleton relations, we can write, e.g., \(R+\left(x,y\right)\) in lieu of \(R+\left\{\left(x,y\right)\right\}\)._ The following lemma gives conditions under which we can extend a representation into one having a larger domain. **Lemma V.8**.: _Let \(g\) be a representation for \(\left(D-s,U-s\right)\). Assume that \(D^{*}\{s\}=\{s\}\not\subseteq U^{*}\left\{s\right\}\cup\operatorname{dom}g\), and that \(\forall x\in\operatorname{dom}g\). \(\left(x,s\right)\in U\leftrightarrow\left(s,x\right)\in U.\) If, for any \(x\in\operatorname{dom}g\), the non empty set \(Y\) satisfies all the following properties: 1) \(g\ x\not\subseteq Y\), 2) \(Y\subseteq g\ x\leftrightarrow x\in\left(D^{-1}\right)^{*}\left\{s\right\} \setminus\left\{s\right\}\), 3) \(g\ x\cap Y=\emptyset\leftrightarrow x\in\left(U^{-1}\right)^{*}\left\{s\right\},\) then \(g+\left(s,Y\right)\) is a representation for \(\left(D,U\right)\)._ Proof.: \(f:=g+\left(s,Y\right)\) extends \(g\), therefore we only need to check conditions (1) and (2) of Definition V.1 in the case \(s\in\{x,y\}\). 
What is more, the first of these conditions is trivial when \(x=s\), so that we only need to check the case \(y=s,x\neq s\), which immediately gives, using hypothesis 2: \(\left(x,s\right)\in D\leftrightarrow x\in\left(D^{-1}\right)^{*}\ \left\{s\right\} \setminus\left\{s\right\}\leftrightarrow f\ s=Y\subseteq g\ x=f\ x\). To check formula (2) of Definition V.1 in the same case we use hypothesis 3: \(\left(x,s\right)\in U\leftrightarrow x\in\left(U^{-1}\right)^{*}\ \left\{s\right\} \leftrightarrow g\ x\cap Y=\emptyset\leftrightarrow f\ x\cap f\ s=\emptyset,\) where the last step employed hypothesis 1. A symmetric argument concludes the proof by showing the same formula in the case \(x=s,y\neq s\). While condition (1) in Lemma V.8 merely requires that \(Y\) is "fresh", and is therefore usually easy to meet, not every representation \(f\) admits a \(Y\) satisfying the remaining conditions (2) and (3). However, it is always possible to augment a representation \(f\) to make this happen, where by "augmenting" we mean the action of enlarging the sets in \(\operatorname{ran}f\). This is detailed by Lemma V.9. **Lemma V.9**.: _Consider two relations \(D\), \(U\), with \(\left(\left(D^{-1}\right)^{*}\left\{s\right\}\setminus\left\{s\right\}\right) \cap\left(U^{-1}\right)^{*}\left\ Proof.: Let \(g:=f+f^{\prime}\) and fix \(x,y\in\operatorname{dom}g=\operatorname{dom}f\). Assume \((x,y)\in D\) and \(g\ x\not\supseteq g\ y\). Then, by construction of \(g\) and by hypothesis 2, using the monotonicity of \(\cup\), one concludes \(y\in\operatorname{dom}f^{\prime}\) and \(x\in\operatorname{dom}f\backslash\operatorname{dom}f^{\prime}\), which contradicts hypothesis 2. Viceversa, assume \(g\ x\supseteq g\ y\); then \(f\ x\supseteq f\ y\) using \((\bigcup\operatorname{ran}f)\cap\bigcup\operatorname{ran}f^{\prime}=\emptyset\), so that \((x,y)\in D\) by representativity of \(f\). Now assume \((x,y)\in U\). Then \(g\ x\cap g\ y=(f\ x\cup X)\cap(f\ y\cup Y)\) where at least one among \(X\) and \(Y\) is empty, thanks to hypothesis 1. Since \(X\cup Y\subseteq\bigcup\operatorname{ran}f^{\prime}\), which is disjoint from \(\bigcup\operatorname{ran}f\supseteq(f\ x\cup f)\), one obtains \(g\ x\cap g\ y=f\ x\cap f\ y=\emptyset\) by representativity of \(f\). Finally, assume \(g\ x\cap g\ y=\emptyset\); in particular, \(f\ x\cap f\ y=\emptyset\), yielding \((x,y)\in U\) again by representativity of \(f\). Proof of Theorem v.2.: One direction is given by Lemma V.6. For the converse, assume the existence of finite event structures not admitting an injective representation \(\operatorname{fe}D\to\overline{2}^{\mathbb{N}}\backslash\{\emptyset\}\). Among such counterexamples, we can take one (let us denote it \((D,U)\), with \(\operatorname{fe}U\subseteq\operatorname{fe}D\)) whose causality relation \(D\) has minimal cardinality. It is immediate to check that \(D\) cannot be empty: one consequence of this is that we can fix a \(D\)-maximal element \(s\) of it (due to \(D\) being finite); another consequence is that \(\operatorname{card}\left(D-s\right)<\operatorname{card}\ D\). Moreover, \((d:=D-s,u:=U-s)\) is still an event structure, and \(\operatorname{fe}u\subseteq\operatorname{fe}d\), so that we can obtain a representation for it: \(f:\operatorname{fe}d\to\overline{2}^{\mathbb{N}}\backslash\{\emptyset\}\). We now need to apply Lemma V.10 to \(f\), in order to obtain another representation over the same domain to which to apply Lemma V.9. 
To this end, let us consider the set of events concurrent to \(s\): \(C=\operatorname{fe}D-D^{-1}\ \{s\}-U^{-1}\ \{s\}\), together with a list of non-empty sets \(\{Z_{i}\). \(i=1,\ldots,n\}\) being conflict-free and downward-closed, and covering \(C\). This is always possible, for example by taking \(\left\{D^{-1}\left\{c\right\}\cdot c\in C\right\}\). Finally, define \(X_{i}:=Z_{i}\cup D^{-1}\ \{s\}\setminus\{s\}\). Note that each \(X_{i}\) is still conflict-free, which implies, together with the irreflexivity of \(U\), hypothesis (1) of Lemma V.10. Now we construct the constant functions \(g_{i}:=X_{i}\times\{m+i\}\), where \(m\) is any fixed natural \(>\max\bigcup\operatorname{ran}f\). The fact that each \(X_{i}\) is still, as is \(Z_{i}\), downward-closed, together with the way we constructed \(g_{i}\), allows each \(g_{i}\) to satisfy hypothesis (2) of Lemma V.10. Therefore, \(f+g_{1}\) is also a representation for \((d,u)\) and, iterating this reasoning, so is \(f+\sum g_{i}\). The same reasoning can now be applied to \(D^{-1}\left\{s\right\}-\{s\}\times\{m+n+1\}\), so that \(g:=f+\sum g_{i}+D^{-1}\ \{s\}-\{s\}\times\{m+n+1\}\) is still a representation for \((d,u)\). Setting \(Y:=\{m,\ldots,m+n+1\}\), it is easy to check that \(D\), \(U\), \(f\), \(g\), \(Y\) and the \(X_{i}\)'s satisfy all of Lemma V.9's hypotheses, so that \(Y\subseteq g\ x\leftrightarrow x\in\left(D^{-1}\right)^{*}\left\{s\right\} \setminus\{s\}\) and \(g\ x\cap Y=\emptyset\leftrightarrow x\in\left(U^{-1}\right)^{*}\left\{s\right\}\). Moreover, since \(Y\) is fresh and \(\emptyset\notin\operatorname{ran}f\), we also have \(g\ x\not\subseteq Y\) for every \(x\in\operatorname{dom}g=\operatorname{dom}f\). Therefore, by Lemma V.8, \(h:=g\cup\{(s,Y)\}\) is a representation for \((D,U)\). Finally, it is straightforward to see, since \(Y\cap\bigcup\operatorname{ran}f=\emptyset\), that \(g\) inherits the injectivity of \(f\) and, therefore, that \(g\cup\{(s,Y)\}\) is also injective. It is also immediate to see that \(\emptyset\notin\bigcup\operatorname{ran}h\), due to \(Y\neq\emptyset\). We thus reached a contradiction with our assumption that \((D,U)\) admitted no such injective representation. ## VI Full Graphs Given a family of sets, one can construct a graph where each vertex corresponds to a set, a directed edge links supersets to subsets, and an undirected one connects overlapping sets. Such a construction arises when computationally facing the question of whether subelements of genes are linked together in a linear order [20]. Definition VI.1 formally specify the graphs which can be built in this manner. **Definition VI.1**.: _A full graph is a mixed, unweighted, simple graph over vertices \(V\), of directed edges \(D\), and undirected edges \(T\) such that there is an injective function \(f\) on \(V\) yielding non-empty sets and with the property \(\forall x,\ y\in V\). \(\left((x,y)\in D\leftrightarrow f\ x\supseteq f\ y\ \right)\wedge\left((x,y)\in T \leftrightarrow f\ x\ and\ f\ y\ overlap\right);\) here, we say that two sets \(A\) and \(B\) overlap (written \(A\between B\)) when \(A\cap B\notin\{A,B,\emptyset\}\). We call \(f\) a fg-representation of the full graph \((D,T)\). Alternatively, we will say that \(T\) makes a full graph of \(D\) (through \(f\)) when such an fg-representation \(f\) exists. 
Similarly, we will say that a relation \(U\) is admissible for \(D\) (through \(f\)) when \(\operatorname{fe}U\subseteq\operatorname{fe}D\) and there is a similar \(f\) being a representation (as from Definition VI.1) for \((D,U)\)._ Note that an undirected edge linking \(x\) and \(y\) is represented by two pairs \((x,y)\) and \((y,x)\) in \(T\). While redundant, this representation allows us to formally consider \(T\) a (symmetric) relation, so that any full graph can be adequately represented by a pair \((D,T)\) of relations, also thanks to the fact that it is simple (e.g., without multiple edges). We can omit \(V\) because any full graph must have a loop on every vertex, so that \(V=\operatorname{fe}D\) is redundant. Theorem VI.2 is our second representation theorem for event structures, providing a bijective construction relating them to full graphs. **Theorem VI.2**.: _Consider a finite relation \(D\) and \(F_{D}:=R\;\;\mapsto\;\left(\operatorname{fe}D\times\operatorname{fe}D\right) \setminus\left(D\cup D^{-1}\right)\setminus R\). A bijection between \(X\;:=\;\left\{T|T\text{ makes a full graph of }D\right\}\;\text{ and }\;Y\;:=\;\left\{U|U\text{ is admissible for }D\right\}\;\text{ is given by }\;F_{D}|_{X}\)._ Figure 3 shows the application of Theorem VI.2 to the event structure of Figures 1 and 2. Proof.: Writing just \(F\) for \(F_{D}\), it suffices to show four claims: \(F|_{X}\) is injective, \(F|_{Y}\) is injective, \(F^{*}\;X\subseteq Y\) and \(F^{*}\;Y\subseteq X\). The injectivity claims follow from the general fact that \(\left(v\mapsto w\backslash v\right)|_{2^{w}}\) is always injective, and from \(X\cup Y\subseteq 2^{\left(\operatorname{fe}D\times\operatorname{fe}D\right) \setminus\left(D\cup D^{-1}\right)}\). In turn, the last inclusion follows from the fact that a relation admissible for \(D\) is necessarily disjoint from \(D\cup D^{-1}\) and similarly for one making a full graph of \(D\). For the third claim: consider \(T\) making a full graph of \(D\) through \(f\), and vertices \(x,\;y\); now \(f\;x\cap f\;y=\emptyset\leftrightarrow f\;x\not\!\!\not\!\!Zf\;y\wedge f\;y \not\!\!\!Zf\;x\wedge f\;x\not\!\!\!Zf\;y\leftrightarrow(x,y)\notin D\cup D^{- 1}\cup T\leftrightarrow(x,y)\in F\;T,\) so that \(F\;T\) is admissible for \(D\) through the same \(f\) (with the part \(\operatorname{fe}\left(F\;T\right)\subseteq\operatorname{fe}D\) being straightforward). Similarly for the last claim. **Corollary VI.3**.: _Consider a finite set \(V\), and the set \(P\) of partial orders having field \(V\). The sets \(E\left(V\right)\) and \(F\left(V\right)\) of event structures over \(V\) and of full graphs over \(V\), respectively, are given by \(E\left(V\right)=\bigcup_{D\in P}\left\{D\right\}\times\left\{U|\;U\text{ is admissible for }D\right\},\;F\left(V\right)\;=\;\bigcup_{D\in P}\left\{D\right\}\times \left\{T|\;T\text{ makes a full graph of }D\right\}.\) They have the same cardinality._ Proof.: The first equality follows from Theorem V.2, while the second is a rephrasing of Definition VI.1. Both feature disjoint unions, so that the cardinality claim follows from VI.2. Corollary VI.3 reveals why the match between OEIS A284276 and the countings in the paper [16], found by querying Google with OEIS minings, is not a coincidence: the former counts event structures over sets of given cardinalities, and the latter does the same for full graphs. 
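The map \(F_{D}\) of Theorem VI.2 is elementary to compute. The following sketch is an added illustration, not code from the paper: it builds \(F_{D}\) for finite relations encoded as sets of pairs. Since \(F_{D}\) acts as complementation inside \((\operatorname{fe}D\times\operatorname{fe}D)\setminus(D\cup D^{-1})\), applying it twice to a relation disjoint from \(D\) and \(D^{-1}\) gives that relation back, which is how the undirected-edge relation of a full graph of \(D\) and the corresponding admissible conflict relation are exchanged.

```python
# Added illustration: the map F_D of Theorem VI.2 on relations given as sets of pairs.

def field(R):
    return {x for x, _ in R} | {y for _, y in R}

def F_D(D, R):
    """Compute (fe D x fe D) minus (D union D^{-1}) minus R."""
    events = field(D)
    complement = {(x, y) for x in events for y in events}
    complement -= D
    complement -= {(y, x) for (x, y) in D}
    return complement - R
```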
This correspondence between two previously detached worlds immediately yields new results by translating existing theorems previously applied to only one world. The next corollary lists only two, among the easiest, of them. **Corollary VI.4**.: \(\bullet\) _Exactly \(561658287\) full graphs are constructible on seven vertices, including isomorphic ones. \(\bullet\)\(\lim_{|V|\to\infty}\frac{\log_{2}|E\left(V\right)|}{|V|^{2}}=\frac{1}{2},\) where \(E\) is defined as in VI.3._ Proof.: Immediate by applying VI.3 to OEIS A284276 and to the main result of [21], respectively. Fig. 3: By applying the inverse of \(F_{D}\) appearing in Theorem VI.2 to the event structure of Figures 1 and 2, one obtains the full graph example originally featured in Section 3 of [20]. Here, the arrows represent \(\supset\), and the dashed lines the overlapping relation. ## VII Related Work Data mining is used for a variety of purposes: from discovering relationships among attributes in big databases [22], to the classification of knowledge contained in heterogeneous data streams [23], to modeling customers' loyalty from purchasing behaviour [24], to newsworthy event anticipation from social media posting patterns [25], to fake profile detection in social media [26]. While knowledge discovery is one of the main goals of data mining, the latter has rarely been used for the more specific goal of discovering new mathematics. The only effort in this direction we are aware of is in [5, 8], where only the OEIS was mined. In our work, the crucial difference is the combined mining of both the OEIS and the huge Google and Google Scholar data sets. On one hand, this makes the potential set of interesting relationships between mathematical entities orders of magnitude bigger; on the other hand, relying only on textual comparison, our approach requires greater human intervention to examine and prove the discovered potential relationships. ## VIII Conclusions Cued by a match obtained by web-searching data mined from the OEIS, we showed that there is a one-to-one correspondence between event structures and full graphs: see Theorem VI.2 and Corollary VI.3, derived from Theorem V.2. The latter is an original addition to a family of fundamental theorems relating basic algebraic structures to elementary mathematical constructions by establishing that the two entities exhibit the same behaviour, and commonly referred to as representation theorems. Among the best known instances are Birkhoff's representation theorem [18] characterizing finite distributive lattices through set-theoretical union and intersection, Stone's and Birkhoff's theorems offering related representations for Boolean algebras [27], and Cayley's theorem implementing groups as permutations [28]. Many fundamental abstract structures historically arose from abstracting the properties of some operations on more concrete objects (e.g., join and meet in distributive lattices incarnate union and intersection), which can therefore be regarded as prototypical examples for the relevant structures. Typically, a representation theorem closes the circle and goes back from the abstract structure to the prototypical example, showing that it can be used to represent any instance of the abstract structure.
In the case of Theorem V.2, however, one certainly cannot say that \(\supseteq\) and disjoint intersection are prototypical examples for the relations of causality and conflict of an ES, mainly because, historically, ES developed in the setting of concurrency theory, largely detached from set-theoretical notions. It is probably this fortuity which prevented that theorem, and consequently results VI.2 and VI.3 linking ESs and full graphs, from being stated earlier.4 It is likely that similar unfavourable, historical circumstances prevent further discoveries linking seemingly mutually unrelated mathematical theories: we believe that data mining and AI approaches are worth being further pursued in such cases, and this paper is a proof of concept supporting this claim. Given the way our theorems were obtained, the point of making sure, and of convincing the community of their correctness is of particular importance. For this reason, we produced a formal proof of our results, and successfully checked its correctness with the Isabelle/HOL proof assistant. A separate paper is being written to describe this formalisation effort, the corresponding challenges, ideas and solutions, and will be posed to the automated reasoning community to gauge the interest in a potentially fruitful, novel intersection between subdomains of AI. Footnote 4: There are representation theorems relating to _categories_ of ESs, at a more abstract level than the present work. E.g. [29]. We conclude with some cues for future work. One limitation needing attention is the human role in parsing the matches obtained in Section II: while we believe that, to find interesting theorems, human intervention is key, there is space for improvement in pruning the irrelevant matches and better leveraging the huge amount of knowledge available through web searches. For example, NLP techniques could improve the crude keyword-based approach of Section II to single out mathematical concepts. Another, more technical, limitation in need to be mitigated is the difficulty of inserting mathematical manipulations in the web search process; this is related to the plain-text interface used in web search queries, and to the fact that we have no control on the transformations applied by the web searching platform over the set of indexed documents (which would probably be too big to transform even if there were some form of control). More specifically to the original theorems introduced in this paper, one obvious direction of development is the extension of Theorem V.2 to the infinite case, in a way analogous to how Priestley's representation theorem generalises Birkhoff's [18, Theorem 11.23]. Using this generalisation as a guidance, this will probably require non-trivial conceptual leaps (of a scale analogous to the interpretation of Stone's theorem Priestley devised to obtain her result).
2301.13239
Periodic Y-systems and Nahm sums: the rank 2 case
We classify periodic Y-systems of rank 2 satisfying the symplectic property. We find that there are six such Y-systems. In all cases, the periodicity follows from the existence of two reddening sequences associated with the time evolution of the Y-systems in positive and negative directions, which gives rise to quantum dilogarithm identities associated with Donaldson-Thomas invariants. We also consider $q$-series called the Nahm sums associated with these Y-systems. We see that they are included in Zagier's list of rank 2 Nahm sums that are likely to be modular functions. It was recently shown by Wang that they are indeed modular functions.
Yuma Mizuno
2023-01-30T19:08:20Z
http://arxiv.org/abs/2301.13239v1
# Periodic Y-systems and Nahm sums: the rank 2 case ###### Abstract We classify periodic Y-systems of rank 2 satisfying the symplectic property. We find that there are six such Y-systems. In all cases, the periodicity follows from the existence of two reddening sequences associated with the time evolution of the Y-systems in positive and negative directions, which gives rise to quantum dilogarithm identities associated with Donaldson-Thomas invariants. We also consider \(q\)-series called the Nahm sums associated with these Y-systems. We see that they are included in Zagier's list of rank 2 Nahm sums that are likely to be modular functions. It was recently shown by Wang that they are indeed modular functions. ## 1 Introduction ### 1.1 Background The Y-system is a system of algebraic relations satisfied by coefficients of a cluster algebra, which has the following form: \[Y_{i}(u)Y_{i}(u-r_{i})=\prod_{j\in I}\prod_{p=1}^{r_{i}-1}Y_{j}(u-p)^{[n_{ij;p} ]_{+}}\big{(}1+Y_{j}(u-p)\big{)}^{-n_{ij;p}} \tag{1.1}\] where \(I\) is a finite index set, \(Y_{i}(u)\) for \(i\in I\), \(u\in\mathbb{Z}\) are commuting variables, \(r_{i}\in\mathbb{Z}_{\geq 1}\), and \(n_{ij;p}\in\mathbb{Z}\). We also use the notation \([n]_{+}\coloneqq\max(0,n)\). Such equations were first discovered by Zamolodchikov in the study of the thermodynamic Bethe ansatz [36], prior to the discovery of cluster algebras by Fomin and Zelevinsky [7]. The most striking feature of Zamolodchikov's Y-systems, as well as their generalizations [22, 30] defined shortly after Zamolodchikov's work, is that they are periodic, which was fully proved by applying the theory of cluster algebras [9, 10, 15, 16, 21]. A systematic treatment of the Y-systems in the general setting of cluster algebras, including the Y-systems arising from the thermodynamic Bethe ansatz as special cases, was given by Nakanishi [27]. This approach was further developed in [24], and it was shown that the algebraic relation (1.1) arises from a cluster algebra if and only if the data \(r_{i},n_{ij;p}\) have a certain symplectic property. This allows the "axiomatic" study of Y-systems without explicitly referring to cluster algebras. In this general setting, however, the Y-system is typically not periodic, and so the study of periodic Y-systems as a generalization of Zamolodchikov's Y-systems remains to be developed further. In particular, the classification problem for periodic Y-systems is a challenging open problem (see the last comments in [27, Section 3]). There are several classification results in the literature. Fomin and Zelevinsky [10] showed that the classification when \(r_{i}=2\), \(n_{ij;p}\leq 0\), and \(n_{ii;p}=0\) for any \(i,j,p\) coincides with the Cartan-Killing classification. Galashin and Pylyavskyy [12] generalized this result to show that the classification when \(r_{i}=2\) and \(n_{ii;p}=0\) for any \(i,p\) coincides with the classification of ADE bigraphs of Stembridge [31]. On the other hand, the situation is more complicated when \(r_{i}>2\) for some \(i\), and so far there have been no comprehensive classification results except when \(|I|=1\), where it is not difficult to give a complete classification thanks to the work by Fordy and Marsh [11] (e.g. see [24, Example 5.6]). In this paper, we make a first attempt to give a classification result involving the case \(r_{i}>2\) for some \(i\). Precisely, we classify the periodic Y-systems of the form (1.1) with \(|I|=2\) satisfying the symplectic property. 
We would like to emphasize that we consider general \(r_{i},n_{ij;p}\) in the classification. The result is given in the next section. We also discuss the relation to Nahm's conjecture on \(q\)-series [26, 35] in Section 1.3. ### Main result Let \(I\) be a finite set. We denote by \(\mathbb{Y}_{0}\) the set of pairs \((r,n)\) where \(r=(r_{i})_{i\in I}\) and \(n=(n_{ij;p})_{i,j\in I,p\in\mathbb{N}}\) are families of integers satisfying \(r_{i}\geq 1\) for any \(i\) and \[n^{\pm}_{ij;p}=0\text{ unless }0<p<r_{i} \tag{1.2}\] for any \(i,j,p\). **Definition 1.1**.: Let \((r,n)\in\mathbb{Y}_{0}\). Let \(\mathbb{P}\) be a semifield, and \((Y_{i}(u))_{i\in I,u\in\mathbb{Z}}\) be a family of elements in \(\mathbb{P}\). We say that \((Y_{i}(u))\)_satisfies the Y-system_ associated with the pair \((r,n)\) if the relation (1.1) holds for any \(i,u\). The equation (1.1) itself is called the _Y-system_ associated with \((r,n)\). We also say that \((Y_{i}(u))\) is a _solution of the Y-system_ if it satisfies the Y-system. It is useful to think a pair \((r,n)\in\mathbb{Y}_{0}\) as a triple of matrices with polynomial entries by the map \((r,n)\mapsto(N_{0}(z),N_{+}(z),N_{-}(z)):\mathbb{Y}_{0}\to(\text{Mat}_{I\times I }\,\mathbb{N}[z])^{3}\) defined by \[N_{0}(z)\coloneqq\text{diag}(1+z^{r_{i}})_{i\in I},\quad N_{\pm}(z)\coloneqq \biggr{(}\sum_{p\in\mathbb{N}}n^{\pm}_{ij;p}z^{p}\biggr{)}_{i,j\in I} \tag{1.3}\] where we set \(n^{\pm}_{ij;p}\coloneqq[\pm n_{ij;p}]_{+}\). We also define the map \((r,n)\mapsto A_{\pm}(z):\mathbb{Y}_{0}\to(\text{Mat}_{I\times I}\,\mathbb{Z}[z ])^{2}\) by \(A_{\pm}(z)\coloneqq N_{0}(z)-N_{\pm}(z)\). Since this map is injective by the condition (1.2), we will identify \(\mathbb{Y}_{0}\) with the image of this map. For example, we will use the term "the Y-system associated with \(A_{\pm}(z)\in\mathbb{Y}_{0}\)". **Definition 1.2**.: We say that \(A_{\pm}(z)\in\mathbb{Y}_{0}\) satisfies the _symplectic property_ if \[A_{+}(z)A_{-}(z^{-1})^{\mathsf{T}}=A_{-}(z)A_{+}(z^{-1})^{\mathsf{T}}, \tag{1.4}\] where \(\mathsf{T}\) is the transpose of a matrix. We denote by \(\mathbb{Y}\) the subset of \(\mathbb{Y}_{0}\) consisting of pairs satisfying the symplectic property. The pair \(A_{\pm}(z)\in\mathbb{Y}_{0}\) satisfies the simplectic property if and only if the Y-system associated with \(A_{\pm}(z)\in\mathbb{Y}_{0}\) is realized as the exchange relations of coefficients in a cluster algebra [24]. We review this fact in Section 2.1. **Definition 1.3**.: We say that a solution of a Y-system is _periodic_ if there is a positive integer \(\Omega>0\) such that \(Y_{i}(u+\Omega)=Y_{i}(u)\) for any \(i,u\). **Definition 1.4**.: We say that a pair \(A_{\pm}(z)\in\mathbb{Y}\) is of _finite type_ if any solution (in any semifield) of the Y-system associated with this pair is periodic. In this case, we also say that Y-system itself is _periodic_. The purpose of this paper is to classify periodic Y-systems of rank 2. Before stating the result, we give a few remarks. We say that \(A_{\pm}(z)\in\mathbb{Y}_{I}\) is _decomposable_ if it is a direct sum of some \(A^{\prime}_{\pm}(z)\in\mathbb{Y}_{I^{\prime}}\) and \(A^{\prime\prime}_{\pm}(z)\in\mathbb{Y}_{I^{\prime\prime}}\) with nonempty \(I^{\prime}\) and \(I^{\prime\prime}\). We say that \(A_{\pm}(z)\in\mathbb{Y}_{I}\) is _indecomposable_ if it is not decomposable. It is enough to consider indecomposable pairs in the classification. 
We also note that \(A_{\pm}(z)\) is of finite type if and only if \(A_{\pm}(z)^{\mathrm{op}}\coloneqq A_{\mp}(z)\) is of finite type by the correspondence between solutions \(Y_{i}(u)\mapsto Y_{i}(u)^{-1}\). The main results are summarized as follows: **Theorem 1.5**.: _Suppose that \(I=\{1,2\}\)._ 1. _Any pair_ \(A_{\pm}(z)\in\mathbb{Y}\) _in Table_ 1 _is of finite type._ 2. _Any indecomposable pair_ \(A_{\pm}(z)\in\mathbb{Y}\) _of finite type is reduced to exactly one pair in Table_ 1 _by permuting the indices, changing sign, and changing slices (see Section_ 3.1_), if necessary._ The claim (1) can be proved by concrete calculation in a suitable universal algebra since \(A_{\pm}(z)\) in Table 1 is concrete. We, however, give another proof involving cluster algebras. We give a quiver and a sequence of mutations for each \(A_{\pm}(z)\) in Table 1 that yields the Y-system as the exchange relation of coefficients in the cluster algebra. See Table 3 for quivers and mutations. We can verify that some iteration of this sequence of mutations, as well as its inverse, is a reddening sequence (Theorem 2.8). Thanks to deep results in the theory of cluster algebras, this property is enough to imply the periodicity (Proposition 2.7). The numbers \(h_{\pm}\) in Table 1 are the lengths of the reddening sequences in the positive and negative directions, respectively. This verification of the periodicity is interesting not only because it is computationally more efficient, but also because it leads to nontrivial dilogarithm identities associated with Donaldson-Thomas invariants (Corollary 2.10). The claim (2) is proved in Section 3.2 by the following steps: 1. We recall the result in [24] asserting that \(A_{\pm}(1)\) satisfies a certain positivity, which in particular implies that \(\operatorname{tr}A_{\pm}(1)\) and \(\det A_{\pm}(1)\) are positive. This allows us to significantly reduce the candidates for finite type \(A_{\pm}(z)\). 2. For a fixed \(A_{+}(1)\) among the candidates obtained in Step 1, we search for \(A_{-}(1)\) satisfying the symplectic property (1.4) at \(z=1\). 3. During the search in Step 2, we discard the pairs \(A_{\pm}(1)\) that cannot be endowed with the parameter \(z\) (Lemmas 3.3 and 3.4). 4. At this point, we have six candidates up to a permutation of the indices and a change of sign. For each \(A_{\pm}(1)\) among the six candidates, we try to endow it with the parameter \(z\). It turns out that this is possible for all six candidates. We give all possible \(A_{\pm}(z)\) in Lemmas 3.5-3.8. 
\begin{table} \begin{tabular}{l|l|l|l|l} \hline \(A_{+}(z)\) & \(A_{-}(z)\) & \(h_{+}\) & \(h_{-}\) & \\ \hline \(\begin{pmatrix}1+z^{2}&-z\\ -z&1+z^{2}\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}&0\\ 0&1+z^{2}\end{pmatrix}\) & 3 & 2 & (1) \\ \(\begin{pmatrix}1+z^{2}&-z\\ -z-z^{5}&1+z^{6}\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}&0\\ -z^{3}&1+z^{6}\end{pmatrix}\) & 8 & 6 & (2) \\ \(\begin{pmatrix}1+z^{2}&-z\\ -z-z^{5}-z^{9}&1+z^{10}\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}&0\\ -z^{3}-z^{7}&1+z^{10}\end{pmatrix}\) & 18 & 10 & (3) \\ \(\begin{pmatrix}1+z^{2}&-z\\ -z&1+z^{2}\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}-z&0\\ 0&1+z^{2}-z\end{pmatrix}\) & 3 & 3 & (4) \\ \(\begin{pmatrix}1+z^{2}&-z\\ -z&1+z^{3}\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}-z&0\\ 0&1+z^{3}\end{pmatrix}\) & 5 & 3 & (5) \\ \(\begin{pmatrix}1+z^{2}&-z\\ -z&1+z^{2}-z\end{pmatrix}\) & \(\begin{pmatrix}1+z^{2}&0\\ 0&1+z^{2}\end{pmatrix}\) & 5 & 2 & (6) \\ \hline \end{tabular} \end{table} Table 1: Finite type classification for Y-systems of rank 2. The numbers \(h_{\pm}\) are the length of reddening sequences in positive and negative directions, respectively. Step 5. We finally check that each remaining candidate reduces to one of \(A_{\pm}(z)\) in Table 1 by change of slices. **Remark 1.6**.: Most of the Y-systems obtained from Table 1 are already known in the literature. \((1)^{\mathrm{op}}\) and \((6)^{\mathrm{op}}\) are Zamolodchikov's Y-system of type \(A_{2}\)[36] and \(T_{2}\) ("tadpole") [30], respectively. \((2)^{\mathrm{op}}\) is the reduced sine-Gordon Y-system associated with the continued fraction \(3/4=[1,3]=1/(1+1/3)\), and \((5)^{\mathrm{op}}\) with \(z\) replaced by \(z^{2}\) is the reduced sine-Gordon Y-system associated with \(3/5=[1,1,2]=1/(1+1/(1+1/2))\)[32]. (4) is the "half" of the Y-system associated with the pair \((A_{2},A_{2})\)[30]. (3) appears to be new: \[Y_{1}(u)Y_{1}(u-2) =\frac{1}{1+Y_{2}(u-1)^{-1}}\] \[Y_{2}(u)Y_{2}(u-10) =\frac{\big{(}1+Y_{1}(u-3)\big{)}\big{(}1+Y_{1}(u-7)\big{)}}{ \big{(}1+Y_{1}(u-1)^{-1}\big{)}\big{(}1+Y_{1}(u-5)^{-1}\big{)}\big{(}1+Y_{1}(u -9)^{-1}\big{)}}\] although it is implicitly given in the author's previous work [24, Table 2]. **Remark 1.7**.: The pair \(A_{\pm}(z)\in\mathbb{Y}\) is called the _T-datum_ in [24] since it describes the T-systems, which is a companion to the Y-systems. We do not use this term since we only consider the Y-systems in this paper. Moreover, the definition of the T-datum in [24] allows to have a non-diagonal \(N_{0}\) and have a nontrivial symmetrizer \(D\), which is more general than the definition in this paper. See also Section 1.4 for the Y-systems involving nontrivial symmetrizers. **Remark 1.8**.: There is another expression of the Y-system using a pair of matrices \(A_{\pm}(z)\) directly. Let \(A_{\pm}(z)\in\mathbb{Y}_{0}\), and define \(a_{ij;p}\in\mathbb{Z}\) by \[A_{\pm}(z)=\left(\sum_{p\in\mathbb{N}}a_{ij;p}^{\pm}z^{p}\right)_{i,j\in I}.\] Let \((P_{i}^{\pm}(u))_{i\in I,u\in\mathbb{Z}}\) be a family of elements in a multiplicative abelian group \(\mathbb{P}\). We say that \((P_{i}^{\pm}(u))\)_satisfies the multiplicative Y-system_ associated with \(A_{\pm}(z)\) if \[\prod_{j\in I}\prod_{p\in\mathbb{N}}P_{j}^{+}(u-p)^{a_{ij;p}^{+}}=\prod_{j\in I }\prod_{p\in\mathbb{N}}P_{j}^{-}(u-p)^{a_{ij;p}^{-}} \tag{1.5}\] for any \(i,u\) (schematically, "\(A_{+}(z)\cdot\log P^{+}=A_{-}(z)\cdot\log P^{-}\)" under the action \(z:u\mapsto u-1\)). 
The solution \((P_{i}^{\pm}(u))\) is called _normalized_ if \(\mathbb{P}\) is endowed with a semifield structure, and \[P_{i}^{+}(u)+P_{i}^{-}(u)=1\] for any \(i,u\). We have a one-to-one correspondence between solutions of the Y-system (1.1) and normalized solutions of the multiplicative Y-system (1.5). The correspondence is given by \[Y_{i}(u)\mapsto\frac{P_{i}^{+}(u)}{P_{i}^{-}(u)},\quad P_{i}^{+}(u)\mapsto \frac{Y_{i}(u)}{1+Y_{i}(u)},\quad P_{i}^{-}(u)\mapsto\frac{1}{1+Y_{i}(u)}.\] In the setting of cluster algebras, this correspondence is nothing but the normalization of the coefficients described by Fomin and Zelevinsky [7, Section 5]. ### Relation to Nahm sums Consider the \(q\)-series defined by \[G(q)=\sum_{n=0}^{\infty}\frac{q^{n^{2}}}{(q)_{n}},\quad H(q)=\sum_{n=0}^{ \infty}\frac{q^{n^{2}+n}}{(q)_{n}}, \tag{1.6}\] where \((q)_{n}\coloneqq(1-q)(1-q^{2})\cdots(1-q^{n})\) is the \(q\)-Pochhammer symbol. The famous Rogers-Ramanujan identities express these \(q\)-series as the following infinite products: \[G(q)=\prod_{n\equiv\pm 1\bmod 5}\frac{1}{1-q^{n}},\quad H(q)=\prod_{n\equiv\pm 2 \bmod 5}\frac{1}{1-q^{n}}.\] These expressions in particular implies that \(q^{-1/60}G(q)\) and \(q^{11/60}H(q)\) are modular functions on some finite index subgroup of \(\mathrm{SL}(2,\mathbb{Z})\). In fact, it is a rare case that an infinite sum of the form (1.6) is modular. It is known that the \(q\)-series \[\sum_{n=0}^{\infty}\frac{q^{\frac{1}{2}an^{2}+bn+c}}{(q)_{n}} \tag{1.7}\] with \(a,b,c\in\mathbb{Q}\) is modular only if \(a=1/2\), \(1\), or \(2\)[35]. Nahm [26] considered higher rank generalization of (1.7), which we call the _Nahm sum_. Let \(I\) be a finite set, and suppose that \(A\in\mathbb{Q}^{I\times I}\) is a symmetric positive definite matrix, \(B\in\mathbb{Q}^{I}\) is a vector, and \(C\in\mathbb{Q}\) is a scalar. The Nahm sum is the \(q\)-series defined by \[f_{A,B,C}(q)\coloneqq\sum_{n\in\mathbb{N}^{I}}\frac{q^{\frac{1}{2}n^{\mathsf{ T}}An+n^{\mathsf{T}}B+C}}{\prod_{a}(q)_{n_{i}}}.\] When \(|I|\geq 2\), it is not well understood when \(f_{A,B,C}(q)\) is modular. Nahm gave a conjecture providing a criterion on the modularity of \(f_{A,B,C}(q)\) in terms of torsion elements in the Bloch group [26, 35]. See [3, 33] for the development of this conjecture. Nahm used Zamolodchikov's periodicity to provide an evidence of the conjecture. In fact, there is a natural way to give a candidate of modular Nahm sums from finite type \(A_{\pm}(z)\in\mathbb{Y}\) in general. Precisely, the matrix \(K\coloneqq A_{+}(1)^{-1}A_{-}(1)\) is always symmetric and positive definite for finite type \(A_{\pm}(z)\in\mathbb{Y}\), and it is conjectured that it gives a modular Nahm sum \(f_{K,0,C}(q)\) for some \(C\)[24]. (This construction is essentially the same as that in [18], except that they did not prove that \(K\) is symmetric and positive definite. A special case can also be found in [23].) We note that the symplectic property (1.4) at \(z=1\) plays an important role here since it implies that \(K\) is symmetric. On the other hand, the positive definiteness is related to the periodicity of the Y-system. Based on our classification, we find that: **Theorem 1.9**.: _Suppose that \(I=\{1,2\}\). The Nahm sum \(f_{K,0,C}(q)\) is modular for any finite type \(A_{\pm}(z)\in\mathbb{Y}\), where \(C\) is given in Table 2._ In fact, every \(K\) from finite type \(A_{\pm}(z)\) is included in the Zagier's list [35, Table 2] for rank \(2\) candidates of modular Nahm sums. 
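The identities (1.6) and the Nahm sum definition are also easy to probe numerically. The sketch below is an illustration in plain Python; the truncation bounds and the evaluation point \(q=0.3\) are arbitrary choices. It evaluates \(G(q)\) and \(H(q)\) as rank 1 Nahm sums \(f_{(2),(0),0}\) and \(f_{(2),(1),0}\), compares them with the product sides of the Rogers-Ramanujan identities, and evaluates the rank 2 Nahm sum for the matrix \(K\) shared by rows (3) and (6) of Table 2.

```python
# Minimal numeric check of the Rogers-Ramanujan identities (1.6), treating G(q)
# and H(q) as rank 1 Nahm sums; truncation bounds and q = 0.3 are ad hoc choices.
from itertools import product

def pochhammer(q, n):
    """(q)_n = (1-q)(1-q^2)...(1-q^n)."""
    out = 1.0
    for k in range(1, n + 1):
        out *= 1.0 - q**k
    return out

def nahm_sum(A, B, C, q, nmax=40):
    """Truncation of f_{A,B,C}(q) = sum_n q^{n^T A n / 2 + n^T B + C} / prod_i (q)_{n_i}."""
    rank = len(B)
    total = 0.0
    for n in product(range(nmax), repeat=rank):
        quad = sum(A[i][j] * n[i] * n[j] for i in range(rank) for j in range(rank)) / 2.0
        lin = sum(B[i] * n[i] for i in range(rank))
        denom = 1.0
        for ni in n:
            denom *= pochhammer(q, ni)
        total += q ** (quad + lin + C) / denom
    return total

q = 0.3
G_sum, H_sum = nahm_sum([[2]], [0], 0, q), nahm_sum([[2]], [1], 0, q)

G_prod = H_prod = 1.0
for m in range(1, 2000):
    if m % 5 in (1, 4):
        G_prod /= 1.0 - q**m
    if m % 5 in (2, 3):
        H_prod /= 1.0 - q**m

print(G_sum, G_prod)   # the two values agree up to truncation error
print(H_sum, H_prod)
print(nahm_sum([[2, 2], [2, 4]], [0, 0], 0, q))   # rank 2 example: K from rows (3), (6)
```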
There are Rogers-Ramanujan type identities that enable us to write each Nahm sum in the list in terms of theta functions. The proof of the desired identities was partially given in [1, 4, 6, 33, 35], and was recently completed by Wang [34] except for one candidate that does not appears in our construction from Y-systems. See Table 2. **Remark 1.10**.: We can define the refinement \(f_{A_{\pm}(z)}^{(s)}(q)\) of the Nahm sum \(f_{K,0,0}(q)\), which is parametrized by \(s\in H\) for an abelian group \(H\) of order \(\det A_{+}(1)\) such that it reduces to the original one by taking summation [24, Definition 5.12]: \[f_{K,0,0}(q)=\sum_{s\in H}f_{A_{\pm}(z)}^{(s)}(q).\] It is conjectured that each \(f_{A_{\pm}(z)}^{(s)}(q)\) is already modular after multiplying \(q^{C}\) for some \(C\). We note that the symplectic property (1.4) at \(z=1\) again plays an important role in the definition of the refinement. We will discuss this refinement for rank \(2\) case in more detail elsewhere. We remark that similar refinement also appears in the context of \(3\)-dimensional quantum topology [13, Section 6.3]. ### Remarks on higher rank and skew-symmetrizable case We have seen that the following properties hold for rank \(2\) case: 1. We have reddening sequences in both positive and negative directions. 2. The map \(A_{\pm}(z)\mapsto A_{+}(1)^{-1}A_{-}(1)\) gives modular Nahm sums. We expect that the properties (P1) and (P2) also hold for any finite type \(A_{\pm}(z)\in\mathbb{Y}\) of general rank. The followings are some known examples: * For the Y-system associated with the untwisted quantum affine algebras \(U_{q}(X_{r}^{(1)})\) with level \(\ell\) restriction [22], (P1) holds with \(h_{+}=t\ell\) and \(h_{-}=t\cdot(\text{dual Coxeter number of }X_{r})\) where \(t=1\), \(2\), or \(3\) is the multiplicity in the Dynkin diagram of \(X_{r}\)[15, 16], and (P2) holds under the assumption [14, Conjecture 5.3] by the result of Kac and Peterson [17]. * For the Y-system associated with a pair of finite type simply laced Dynkin type \((X_{r},X_{r^{\prime}}^{\prime})\)[30], (P1) holds with \(h_{+}=(\text{Coxeter number of }X_{r})\) and \(h_{-}=(\text{Coxeter number of }X_{r^{\prime}}^{\prime})\)[19, 21]. * For the (reduced) sine-Gordon Y-system associated with the continued fraction \(p/q=[n_{F},\ldots,n_{1}]=1/(n_{F}+1/(\cdots+1/n_{1}))\)[29, 32], (P1) appears to hold with \(h_{+}=2p\) and \(h_{-}=2q\). * For the Y-system associated with an admissible ADE bigraph \((\Gamma,\Delta)\)[12], (P1) appears to hold with \(h_{+}=(\text{Coxeter number of }\Gamma)\) and \(h_{-}=(\text{Coxeter number of }\Delta)\). Moreover, we can consider Y-systems associated with skew-symmetrizable cluster algebras rather than skew-symmetric ones discussed in this paper. In this case, the symplectic property (1.4) becomes \[A_{+}(z)DA_{-}(z^{-1})^{\mathsf{T}}=A_{-}(z)DA_{+}(z^{-1})^{\mathsf{T}},\] where \(D\) is a diagonal matrix called symmetrizer [24]. We also expect that the properties (P1) and (P2) also hold for skew-symmetrizable case. See [24, Definition 5.12] for the definition of the Nahm sum in skew-symmetrizable case. _Acknowledgment._ This work is supported by JSPS KAKENHI Grant Number JP21J00050. 
\begin{table} \begin{tabular}{l|l|c|c|c|c|c|c} \hline \(A_{\pm}(z)\) & \(K\) & \(-24C\) & RR & \(A_{\pm}(z)\) & \(K\) & \(-24C\) & RR \\ \hline \((1)\) & \(\begin{pmatrix}4/3&2/3\\ 2/3&4/3\end{pmatrix}\) & \(\dfrac{4}{5}\) & [6] & \((1)^{\text{op}}\) & \(\begin{pmatrix}1&-1/2\\ -1/2&1\end{pmatrix}\) & \(\dfrac{6}{5}\) & [33] \\ \((2)\) & \(\begin{pmatrix}3/2&1\\ 1&2\end{pmatrix}\) & \(\dfrac{5}{7}\) & [34] & \((2)^{\text{op}}\) & \(\begin{pmatrix}1&-1/2\\ -1/2&3/4\end{pmatrix}\) & \(\dfrac{9}{7}\) & [34] \\ \((3)\), \((6)\) & \(\begin{pmatrix}2&2\\ 2&4\end{pmatrix}\) & \(\dfrac{4}{7}\) & [1] & \((3)^{\text{op}}\), \((6)^{\text{op}}\) & \(\begin{pmatrix}1&-1/2\\ -1/2&1/2\end{pmatrix}\) & \(\dfrac{10}{7}\) & [34] \\ \((4)\) & \(\begin{pmatrix}2/3&1/3\\ 1/3&2/3\end{pmatrix}\) & \(1\) & [35] & \((4)^{\text{op}}\) & \(\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}\) & \(1\) & [35] \\ \((5)\) & \(\begin{pmatrix}1&1\\ 1&2\end{pmatrix}\) & \(\dfrac{3}{4}\) & [34] & \((5)^{\text{op}}\) & \(\begin{pmatrix}2&-1\\ -1&1\end{pmatrix}\) & \(\dfrac{5}{4}\) & [4] \\ \hline \end{tabular} \end{table} Table 2: The list of the matrix \(K=A_{+}(1)^{-1}A_{-}(1)\). The Nahm sum \(f_{K,0,C}(q)\) is modular, which can be proved by using Rogers-Ramanujan type identities (RR for short) given in the references in the table. Y-systems and cluster algebras ### Preliminaries on cluster algebras In this paper, a _semifield_ is a multiplicative abelian group equipped with an addition that is commutative, associative, and distributive with respect to the multiplication. **Definition 2.1**.: Let \(I\) be a set. The set of all nonzero rational functions in the variables \(\boldsymbol{y}=(y_{i})_{i\in I}\) with natural number coefficients is a semifield with respect to the usual addition and multiplication. This semifield is called the _universal semifield_, and denoted by \(\mathbb{Q}_{>0}(\boldsymbol{y})\). We have a canonical bijection \(\operatorname{Hom}_{\operatorname{semifield}}(\mathbb{Q}_{>0}(\boldsymbol{y}), \mathbb{P})\cong\operatorname{Hom}_{\operatorname{set}}(I,\mathbb{P})\) for any set \(I\) and semifield \(\mathbb{P}\). **Definition 2.2**.: Let \(I\) be a set. The _tropical semifield_\(\operatorname{Trop}(\boldsymbol{y})\) is the multiplicative free abelian group generated by the variables \(\boldsymbol{y}=(y_{i})_{i\in I}\) equipped with the addition defined by \[\prod_{i}y_{i}^{a_{i}}+\prod_{i}y_{i}^{b_{i}}=\prod_{i}y_{i}^{\min(a_{i},b_{i} )}.\] Let \(I\) be a finite set and \(\mathbb{P}\) be a semifield. A _Y-seed_ is a pair \((B,\boldsymbol{y})\) where \(B=(B_{ij})_{i,j\in I}\) is a skew-symmetric integer matrix and \(\boldsymbol{y}=(y_{i})_{i\in I}\) is a tuple of elements in \(\mathbb{P}\). We sometimes represent \(B\) as the quiver whose signed adjacency matrix is \(B\). For a Y-seed \((B,\boldsymbol{y})\) and \(k\in I\), the _mutation_ in direction \(k\) transforms \((B,\boldsymbol{y})\) into the new Y-seed \(\mu_{k}(B,\boldsymbol{y})=(B^{\prime},\boldsymbol{y}^{\prime})\) given by \[B^{\prime}_{ij} \coloneqq\begin{cases}-B_{ij}&\text{if $i=k$ or $j=k$},\\ B_{ij}+[-B_{ik}]_{+}B_{kj}+B_{ik}[B_{kj}]_{+}&\text{otherwise},\end{cases} \tag{2.1}\] \[y^{\prime}_{i} \coloneqq\begin{cases}y_{k}&\text{if $i=k$},\\ y_{i}y_{k}^{[B_{ik}]_{+}}(1+y_{k})^{-B_{ki}}&\text{otherwise}.\end{cases} \tag{2.2}\] A mutation is involutive, that is, \(\mu_{k}(B,\boldsymbol{y})=(B^{\prime},\boldsymbol{y}^{\prime})\) implies \((B,\boldsymbol{y})=\mu_{k}(B^{\prime},\boldsymbol{y}^{\prime})\). 
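The mutation rule is easy to experiment with in the universal semifield. The sketch below is an added illustration using sympy. The \(y\)-update is written in the standard Fomin-Zelevinsky form \(y'_k=y_k^{-1}\) and \(y'_i=y_iy_k^{[B_{ki}]_+}(1+y_k)^{-B_{ki}}\) for \(i\neq k\); this choice of convention is an assumption (conventions in the literature differ by replacing \(B\) with its transpose), and the stated involutivity is verified symbolically on a small example.

```python
# Minimal sketch: Y-seed mutation in the universal semifield with sympy.
# The y-update follows the Fomin-Zelevinsky convention
#   y'_k = 1/y_k,   y'_i = y_i * y_k^{[B_ki]_+} * (1 + y_k)^{-B_ki}   (i != k);
# other references use the transposed convention.
import sympy as sp

def pos(n):
    return n if n > 0 else 0

def mutate(B, y, k):
    m = B.shape[0]
    Bp = sp.zeros(m, m)
    for i in range(m):
        for j in range(m):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + pos(-B[i, k]) * B[k, j] + B[i, k] * pos(B[k, j])
    yp = [1 / y[k] if i == k else
          y[i] * y[k] ** pos(B[k, i]) * (1 + y[k]) ** (-B[k, i]) for i in range(m)]
    return Bp, yp

# Example: the quiver 1 -> 2 (B_12 = 1) with generic coefficients y1, y2.
y1, y2 = sp.symbols("y1 y2", positive=True)
B = sp.Matrix([[0, 1], [-1, 0]])

B1, Y1 = mutate(B, [y1, y2], 0)       # mutate at vertex 1
B2, Y2 = mutate(B1, Y1, 0)            # mutate at vertex 1 again

print(B2 == B)                                              # True: involutive on B
print([sp.simplify(a - b) for a, b in zip(Y2, [y1, y2])])   # [0, 0]: involutive on y
```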
We have the commutativity \[\mu_{i}\mu_{j}=\mu_{j}\mu_{i}\quad\text{if $B_{ij}=0$}, \tag{2.3}\] which allows us to write \(\mu_{\boldsymbol{i}}\) for a set \(\boldsymbol{i}\subseteq I\) such that \(B_{ij}=0\) for any \(i,j\in\boldsymbol{i}\) to mean the successive mutations along arbitrarily chosen order on \(\boldsymbol{i}\). For a Y-seed \((B,\boldsymbol{y})\) and a bijection \(\nu:I\to I\), we define a new Y-seed \(\nu(B,\boldsymbol{y})=(B^{\prime},\boldsymbol{y}^{\prime})\) by \(B^{\prime}_{\nu(i)\nu(j)}\coloneqq B_{ij}\) and \(y^{\prime}_{\nu(i)}\coloneqq y_{i}\). ### Solving Y-systems by cluster algebras Let \(A_{\pm}(z)\in\mathbb{Y}\). We will construct a solution of the Y-system associated with \(A_{\pm}(z)\) based on [24, Section 3.3]. We first define a subset \(R\subseteq I\times\mathbb{Z}\) by \[R\coloneqq\{(i,u)\in I\times\mathbb{Z}\mid 0\leq u<r_{i}\}, \tag{2.4}\] and define a skew-symmetric \(R\times R\) integer matrix \(B\) by \[B_{(i,p)(j,q)}=-n_{ij;p-q}+n_{ji;q-p}+\sum_{k\in I}\sum_{v=0}^{\min(p,q)}\big{(} n_{ik;p-v}^{+}n_{jk;q-v}^{-}-n_{ik;p-v}^{-}n_{jk;q-v}^{+}\big{)}, \tag{2.5}\] where we understand \(n_{ij;p}=0\) if \(p<0\). We then define \(\boldsymbol{i}\coloneqq\{(i,u)\mid u=0\}\subseteq R\). We also define a bijection \(\nu:R\to R\) by \[\nu(i,p)=\begin{cases}(i,p-1)&\text{if $p>0$},\\ (i,r_{i})&\text{if $p=0$}.\end{cases} \tag{2.6}\] Then the symplectic property (1.4) ensures that \(\nu(\mu_{\mathbf{i}}(B))=B\)[24, Lemma 3.16]. We finally define a sequence of Y-seeds \[\cdots\to(B,\mathbf{y}(-1))\to(B,\mathbf{y}(0))\to(B,\mathbf{y}(1))\to\cdots \tag{2.7}\] in \(\mathbb{Q}_{>0}(\mathbf{y})\) by \(\mathbf{y}(0)\coloneqq\mathbf{y}\) and \((B,\mathbf{y}(u+1))=\nu(\mu_{\mathbf{i}}(B,\mathbf{y}(u)))\). The sequence (2.7) gives a solution of the Y-system: **Lemma 2.3**.: _[_24_, Theorem 3.13]__\((y_{i,0}(u))_{i\in I,u\in\mathbb{Z}}\) satisfies the Y-system associated with \(A_{\pm}(z)\)._ This solution is universal in the following sense. **Lemma 2.4**.: _[_24_, Theorem 3.19]_ _Suppose that a family \((Y_{i}(u))_{i\in I,u\in\mathbb{Z}}\) satisfies the Y-system associated with \(A_{\pm}(z)\). Define a semifield homomorphism \(f:\mathbb{Q}_{>0}(\mathbf{y})\to\mathbb{P}\) by_ \[f(y_{i,p})\coloneqq Y_{i}(p)\prod_{j\in I}\prod_{q=0}^{p}Y_{j}(p-q)^{-[n_{ij; q}]+}\big{(}1+Y_{j}(p-q)\big{)}^{n_{ij;q}}. \tag{2.8}\] _Then \(f(y_{i,0}(u))=Y_{i}(u)\) for any \(i,u\)._ **Corollary 2.5**.: \(A_{\pm}(z)\in\mathbb{Y}\) _is of finite type if and only if there are different integers \(u,v\) such that \(\mathbf{y}(u)=\mathbf{y}(v)\) in (2.7)._ ### Periodicity and reddening sequences Similarly to (2.7), we define a sequence of Y-seeds \[\cdots\to(B,\mathbf{y}(-1))\to(B,\mathbf{y}(0))\to(B,\mathbf{y}(1))\to\cdots \tag{2.9}\] by the same formulas but now in \(\operatorname{Trop}(\mathbf{y})\) rather than \(\mathbb{Q}(\mathbf{y})\). **Definition 2.6**.: We say that the Y-system associated with \(A_{\pm}(z)\in\mathbb{Y}\) is _positive (resp. negative) reddening_ if there is a positive integer \(u\) such that all the exponents in \(y_{i}(u)\) (resp. \(y_{i}(-u)\)) in (2.9) are nonpositive for any \(i\). We denote by \(h_{+}\) (resp. \(h_{-}\)) the least such positive integer \(u\). Equivalently, the Y-system is positive (resp. negative) reddening if and only if all the entries in the C-matrix associated with the sequence of mutations \((B,\mathbf{y}(0))\to(B,\mathbf{y}(u))\) (resp. \((B,\mathbf{y}(0))\to(B,\mathbf{y}(-u))\)) are nonpositive for some \(u>0\). 
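To make the construction concrete, the following sketch (an added illustration) assembles the matrix \(B\) of (2.5) for the pair (1) of Table 1, for which \(r_1=r_2=2\) and the only nonzero entries of \(n\) are \(n_{12;1}=n_{21;1}=1\), read off from \(N_{\pm}(z)=N_0(z)-A_{\pm}(z)\). The result is skew-symmetric, and its positive entries reproduce the two arrows of the quiver listed for (1) in Table 3.

```python
# Minimal sketch: the matrix B of (2.5) for pair (1) of Table 1
# (r = (2, 2); n^+_{12;1} = n^+_{21;1} = 1, all other n_{ij;p} = 0).
I = [1, 2]
r = {1: 2, 2: 2}
n_plus = {(1, 2, 1): 1, (2, 1, 1): 1}    # n^+_{ij;p}
n_minus = {}                             # n^-_{ij;p}

def npl(i, j, p): return n_plus.get((i, j, p), 0)
def nmi(i, j, p): return n_minus.get((i, j, p), 0)
def n(i, j, p):   return npl(i, j, p) - nmi(i, j, p)

R = [(i, p) for i in I for p in range(r[i])]      # the index set (2.4)

def B_entry(ip, jq):
    (i, p), (j, q) = ip, jq
    val = -n(i, j, p - q) + n(j, i, q - p)
    for k in I:
        for v in range(min(p, q) + 1):
            val += npl(i, k, p - v) * nmi(j, k, q - v) - nmi(i, k, p - v) * npl(j, k, q - v)
    return val

B = {(a, b): B_entry(a, b) for a in R for b in R}

assert all(B[a, b] == -B[b, a] for a in R for b in R)   # skew-symmetric
print({key: v for key, v in B.items() if v > 0})
# {((1, 0), (2, 1)): 1, ((2, 0), (1, 1)): 1}: the arrows (1,0)->(2,1) and (2,0)->(1,1)
```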
**Proposition 2.7**.: _Suppose that the Y-system associated with \(A_{\pm}(z)\) is positive and negative reddening. Then \(A_{\pm}(z)\) is of finite type._ Proof.: We verify the equivalent condition in Corollary 2.5. By [2, Proposition 2.10], there are bijections \(\sigma,\sigma^{\prime}:R\to R\) such that \(y_{i}(h_{+})=y_{\sigma(i)}^{-1}\) and \(y_{i}(-h_{-})=y_{\sigma^{\prime}(i)}^{-1}\) for any \(i\) (in other words, the C-matrices associated with them are the minus of permutation matrices). Now the claim follows from the separation formula for \(y\)-variables [10, Proposition 3.13] and the result on C-matrices shown by Cao, Huang, and Li [5, Theorem 2.5]. See also [28, Theorem 5.2] for the corresponding statement dealing with permutations that is actually suitable here. **Theorem 2.8**.: _The Y-system associated with each \(A_{\pm}(z)\) in Table 1 is positive and negative reddening._ Proof.: The quiver \(B\) associated with \(A_{\pm}(z)\) is given in Table 3. We can verify the assertion by concrete calculation on the quiver. The numbers \(h_{\pm}\) are given in Table 1. Theorem 1.5 (1) now follows from Proposition 2.7 and Theorem 2.8. **Remark 2.9**.: A connected component of each quiver in Table 3 has the following cluster type: \[(1)\ A_{2}\quad(2)\ A_{4}\quad(3)\ E_{6}\quad(4)\ D_{4}\quad(5)\ A_{5}\quad(6)\ A_{4}\] These are of finite type in the sense of [8], which also implies Theorem 1.5 (1). We remark, however, that this observation is somewhat misleading since the quiver associated with a periodic Y-system of general rank is typically of infinite type. It might be better to think that the appearance of only finite type quivers happens "by chance" due to the smallness of 2, the rank of Y-systems considered in this paper. Theorem 2.8 also gives quantum dilogarithm identifies associated with Donaldson-Thomas invariants. For any reddening sequence \(\boldsymbol{i}\) starting from a quiver \(B\), we can define a quantity \(\mathbb{E}(\boldsymbol{i})\) by using the quantum dilogarithm. We refer to [20, Remark 6.6] as the definition. This quantity coincides with Kontsevich-Soibelman's refined Donaldson-Thomas invariant associated with \(B\)[20, 25]. In particular, \(\mathbb{E}(\boldsymbol{i})\) does not depend on \(\boldsymbol{i}\), which gives the quantum dilogarithm identifies. In our case, we have: \begin{table} \begin{tabular}{c|c} \hline Quiver \(B\) & \(A_{\pm}(z)\) \\ \hline \((1,0)\)\(\boldsymbol{\rightarrow}(2,1)\) & \((1,1)\)\(\boldsymbol{\leftarrow}(2,0)\) & (1) \\ \hline \((2,3)\) & \((2,2)\) & \\ \((1,0)\)\((2,1)\) & \((1,1)\)\((2,0)\) & (2) \\ \((2,5)\) & \((2,4)\) & \\ \hline \((2,5)\) & \((2,3)\) & \\ \((2,5)\) & \((2,4)\) & \\ \((2,7)\) & \((2,6)\) & \\ \((2,8)\) & & \\ \hline \((1,1)\)\((2,0)\) & & \\ \((1,0)\)\((2,1)\) & & \\ \((2,1)\) & & \\ \((2,0)\)\((1,1)\) & \((1,0)\)\((2,2)\) & \\ \hline \((1,0)\)\((2,1)\) & \((2,0)\)\((1,1)\) & (6) \\ \hline \end{tabular} \end{table} Table 3: Quivers associated with \(A_{\pm}(z)\) in Table 1. Each quiver is preserved by the mutation at \((*,0)\) followed by the permutation \((i,p)\mapsto(i,p-1)\) (the second argument is considered modulo \(r_{i}\)), which yields Y-system. For (1)–(3), this operation interchanges the connected components (see Section 3.1). 
**Corollary 2.10**.: _For each \(A_{\pm}(z)\) in Table 1, we have_ \[\mathbb{E}(\mu^{h_{+}})=\mathbb{E}(\mu^{-h_{-}}),\] _where \(\mu\coloneqq\nu\circ\mu_{\mathbf{i}}\) is the sequence of mutations (together with the permutation) \((B,\mathbf{y}(0))\to(B,\mathbf{y}(1))\) in (2.7)._ For example, the pair (1) in Table 1 yields the famous pentagon identity of the quantum dilogarithm. ## 3 Classification ### Change of slices We need to introduce an appropriate equivalence relation on the set \(\mathbb{Y}\), which identifies essentially the same Y-systems. Before we get into the definition, we will see a typical example. Consider the following Y-system: \[\begin{split} Y_{1}(u)Y_{1}(u-2)&=(1+Y_{2}(u-1)^{- 1})^{-1}\\ Y_{2}(u)Y_{2}(u-2)&=(1+Y_{1}(u-1)^{-1})^{-1}\end{split} \tag{3.1}\] which corresponds to \(A_{\pm}(z)\in\mathbb{Y}\) given by (1) in Table 1. This system of equations are defined on the set \([1,2]\times\mathbb{Z}\), but actually can be defined on each component of the following disjoint union: \[[1,2]\times\mathbb{Z}=\bigsqcup_{k=0}^{1}\{(i,u)\mid i-u\equiv k\bmod 2\}.\] We informally call the algebraic relation defined on each subset the _slice_ of the whole Y-system. If \((Y_{i}(u))\) is a solution of the Y-system for \(i-u\equiv 0\bmod 2\), then \((Y_{i}(u+1))\) is a solution of the Y-system for \(i-u\equiv 1\bmod 2\). Thus it is enough to consider only one slice when considering solutions. Now we consider another Y-system: \[\begin{split} Y_{1}^{\prime}(u)Y_{1}^{\prime}(u-3)& =(1+Y_{2}^{\prime}(u-2)^{-1})^{-1}\\ Y_{2}^{\prime}(u)Y_{2}^{\prime}(u-3)&=(1+Y_{1}^{ \prime}(u-1)^{-1})^{-1}.\end{split} \tag{3.2}\] which corresponds to \(A_{\pm}^{\prime}(z)\in\mathbb{Y}\) given by \[A_{+}^{\prime}(z)\coloneqq\begin{pmatrix}1+z^{3}&-z^{2}\\ -z&1+z^{3}\end{pmatrix},\quad A_{-}^{\prime}(z)\coloneqq\begin{pmatrix}1+z^{ 3}&0\\ 0&1+z^{3}\end{pmatrix}.\] The Y-system (3.2) is decomposed into three slices: \[[1,2]\times\mathbb{Z}=\bigsqcup_{k=0}^{2}\{(i,u)\mid i-u\equiv k\bmod 3\}.\] We see that for any solution of (3.1) for \(i-u\equiv 0\bmod 2\), \[Y_{1}^{\prime}(u)\coloneqq Y_{1}\left(\frac{2}{3}u-\frac{1}{3}\right),\quad Y _{2}^{\prime}(u)\coloneqq Y_{2}\left(\frac{2}{3}u\right)\] is a solution of (3.2) for \(i-u\equiv 2\bmod 3\). We also obtain solutions for the other two slices by shifting \(u\). Conversely, any solution of (3.1) is obtained from a solution of (3.2). Therefore, it is enough to consider one of the Y-systems (3.1) and (3.2). In particular, \(A_{\pm}(z)\) is of finite type if and only if \(A^{\prime}_{\pm}(z)\) is. Now we work in the general setting. The idea is that each slice corresponds to each connected component of the quiver associated with the matrix \(B\) defined by (2.5). Let \(A_{\pm}(z)\in\mathbb{Y}\), and assume that it is indecomposable. By [24, Proposition 3.24], we have a decomposition of the matrix \(B\) and its index set \(R\): \[B=\bigoplus_{u=0}^{t-1}B(u),\quad R=\bigsqcup_{u=0}^{t-1}R(u)\] such that each \(B(u)\) is indecomposable and we have a cyclic sequence of mutations \[B(0)\xrightarrow{\nu|_{R(0)}\circ\mu_{\mathfrak{k}(0)}}B(1)\longrightarrow \cdots\longrightarrow B(t-1)\xrightarrow{\nu|_{R(t-1)}\circ\mu_{\mathfrak{k}( t-1)}}B(0) \tag{3.3}\] where \(\boldsymbol{i}(u)\coloneqq\boldsymbol{i}\cap R(u)\). We say that two pairs \(A_{\pm}(z)\) and \(A^{\prime}_{\pm}(z)\) are _related by change of slices_ if they yield the same cyclic sequence (3.3) up to a change of indices and the commutativity of mutations (2.3). 
(This commutativity is already implicitly used to justify the notation \(\mu_{\boldsymbol{i}(u)}\) as stated below (2.3).) **Example 3.1**.: The pairs \(A_{\pm}(z)\) and \(A^{\prime}_{\pm}(z)\) associated with (3.1) and (3.2), respectively, are related by change of slices. Indeed, we see that the sequence (3.3) for (3.1) is \[(1,0)\xrightarrow{\nu\circ\mu_{(0,0)}}(1,1)\xleftarrow{}(2,0)\xrightarrow{ \nu\circ\mu_{(1,0)}}(1,0)\xrightarrow{\boldsymbol{\Rightarrow}}(2,1)\,\] whereas the sequence (3.3) for (3.2) is \[(1,0)\xrightarrow{\boldsymbol{\Rightarrow}}(2,1)\xrightarrow{\nu^{\prime} \circ\mu_{(0,0)}}(1,2)\xleftarrow{}(2,0)\xrightarrow{\nu^{\prime}\circ\mu_{(1,0)}}(1,1)\xrightarrow{\boldsymbol{\Rightarrow}}(2,2)\xrightarrow{\nu^{\prime} }(1,0)\xrightarrow{\boldsymbol{\Rightarrow}}(2,1)\.\] These are the same sequence up to a change of indices. ### Proof of the classification In this section, we will prove Theorem 1.5 (2). We first recall the following result. **Lemma 3.2** ([24, Theorem 5.5]).: _Let \(A_{\pm}(z)\in\mathbb{Y}\). Assume that \(A_{\pm}(z)\) is of finite type. Then there is a vector \(v\in\mathbb{R}^{I}\) such that \(v>0\), \(vA_{+}(1)>0\), and \(vA_{-}(1)>0\). In particular, \(\operatorname{tr}A_{\pm}(1)>0\) and \(\det A_{\pm}(1)>0\)._ By Lemma 3.2, \(A_{+}(1)\) and \(A_{-}(1)\) are equal to one of the following matrices: \[\begin{pmatrix}2&-1\\ -1&2\end{pmatrix},\begin{pmatrix}2&-1\\ -2&2\end{pmatrix},\begin{pmatrix}2&-1\\ -3&2\end{pmatrix},\begin{pmatrix}2&-1\\ -1&1\end{pmatrix},\begin{pmatrix}2&0\\ -n&2\end{pmatrix},\begin{pmatrix}2&0\\ -n&1\end{pmatrix},\begin{pmatrix}1&0\\ -n&1\end{pmatrix}\] up to a permutation of the indices. We give several lemmas about impossible pairs. Before giving lemmas, we note that \[n^{+}_{ij;p}=0\quad\text{or}\quad n^{-}_{ij;p}=0 \tag{3.4}\] for any \(i,j,p\). **Lemma 3.3**.: _It is impossible that \(A_{\pm}(z)\in\mathbb{Y}\) has the following forms:_ 1. \(A_{+}(1)=\begin{pmatrix}2&-a\\ *&*\end{pmatrix}\)_,_ \(A_{-}(1)=\begin{pmatrix}2&-b\\ *&*\end{pmatrix}\) _for odd_ \(a,b\) _._ 2. \(A_{+}(1)=\begin{pmatrix}2&-a\\ *&*\end{pmatrix}\)_,_ \(A_{-}(1)=\begin{pmatrix}1&-b\\ *&*\end{pmatrix}\) _for odd_ \(a,b\)_._ 3. \(A_{+}(1)=\begin{pmatrix}1&-1\\ *&*\end{pmatrix}\)_,_ \(A_{-}(1)=\begin{pmatrix}1&-1\\ *&*\end{pmatrix}\)_._ 4. \(A_{+}(1)=\begin{pmatrix}1&0\\ *&*\end{pmatrix}\)_,_ \(A_{-}(1)=\begin{pmatrix}1&*\\ *&*\end{pmatrix}\)_._ Proof.: For (2), we can set \[A_{+}(z)=\begin{pmatrix}1+z^{r}&-f(z)\\ *&*\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{r}-z^{a}&-g(z)\\ *&*\end{pmatrix}.\] By the symplectic property (1.4), we have \[z^{a}+z^{a-r}+f(z)g(z^{-1})=z^{-a}+z^{r-a}+g(z)f(z^{-1}). \tag{3.5}\] Since \(0<a\) and \(a-r<0\) by (1.2), the sum of the coefficients of the terms in \(f(z)g(z^{-1})\) with positive exponents is equal to that with negative exponents. Since \(f(1)g(1)\) (\(=\)\(ab\)) is odd, \(f(z)g(z^{-1})\) should contain the constant term \(z^{0}\), which contradicts (3.4). The proof for (1) is similar. For (3), we can set \[A_{+}(z)=\begin{pmatrix}1+z^{r}-z^{a}&-z^{b}\\ *&*\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{r}-z^{c}&-z^{d}\\ *&*\end{pmatrix}\] with \(0<a,b,c,d<r\). Without loss of generality, we can assume \(a<c\). By (1.4), we have \[z^{-c}+z^{r-c}+z^{a}+z^{a-r}+z^{c-a}+z^{d-b}=z^{c}+z^{c-r}+z^{-a}+z^{r-a}+z^{a -c}+z^{b-d}\] Since \(c-a>0\), we see that \(c-a\) is equal to \(c\), \(r-a\), or \(b-d\). However, the first two cases are impossible by (1.2). 
Thus \(c-a=b-d\), which implies that \[z^{-c}+z^{r-c}+z^{a}+z^{a-r}=z^{c}+z^{c-r}+z^{-a}+z^{r-a}.\] Since \(a>0\), we see that \(a\) is equal to \(c\) or \(r-a\). However, \(a=c\) is impossible by (3.4). Thus \(a=r-a\), which implies that \[z^{-c}+z^{r-c}=z^{c}+z^{c-r}.\] Since \(c>0\), we see that \(c=r-c\). However, this implies that \(a=r/2=c\), which is impossible by (3.4). For (4), we can set \[A_{+}(z)=\begin{pmatrix}1+z^{r}-z^{a}&0\\ *&*\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{r}-z^{b}&*\\ *&*\end{pmatrix}.\] By (1.4), we have \[z^{a}+z^{a-r}+z^{-b}+z^{r-b}+z^{b-a}=z^{-a}+z^{r-a}+z^{b}+z^{b-r}+z^{a-b}.\] Comparing the number of the terms with positive and negative exponents, we should have \(a=b\). This is impossible by (3.4). **Lemma 3.4**.: _It is impossible that indecomposable \(A_{\pm}(z)\in\mathbb{Y}\) has the form_ \[A_{+}(1)=\begin{pmatrix}*&0\\ *&*\end{pmatrix},\quad A_{-}(1)=\begin{pmatrix}*&0\\ *&*\end{pmatrix}.\] Proof.: We can set \[A_{\pm}(z)=\begin{pmatrix}1+z^{r_{1}}-f_{\pm}(z)&0\\ -g_{\pm}(z)&1+z^{r_{2}}-h_{\pm}(z)\end{pmatrix}.\] Since \(g_{+}(z)\neq 0\) or \(g_{-}(z)\neq 0\), we can pick the least integer \(c\) among the exponents in \(g_{+}(z)\) and \(g_{-}(z)\). Without loss of generality, we can assume \(g_{+}(1)\) contains the term \(z^{c}\). By (1.4), we have \[f_{+}(z)g_{-}(z^{-1})+(1+z^{r_{1}})g_{+}(z^{-1})=f_{-}(z)g_{+}(z^{-1})+(1+z^{r _{1}})g_{-}(z^{-1}). \tag{3.6}\] The left-hand side in (3.6) contains the term \(z^{r_{1}-c}\), but any exponent in the right-hand side is strictly smaller that \(r_{1}-c\) by (1.2) and (3.4), which is a contradiction. We now search for possible pairs \(A_{\pm}(1)\) case by case using the symplectic property (1.4) at \(z=1\) together with Lemma 3.3 and 3.4: * Case: \(A_{+}(1)=\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}\). The possibilities for \(A_{-}(1)\) are: \[\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\ \begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] * Case: \(A_{+}(1)=\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}\). The possibilities for \(A_{-}(1)\) are: \[\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\ \begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] * Case: \(A_{+}(1)=\begin{pmatrix}2&-1\\ -2&2\end{pmatrix}\). The possibilities for \(A_{-}(1)\) are: \[\begin{pmatrix}2&0\\ -1&2\end{pmatrix},\ \begin{pmatrix}1&0\\ 0&2\end{pmatrix}.\] * Case: \(A_{+}(1)=\begin{pmatrix}2&-1\\ -3&2\end{pmatrix}\). The possibilities for \(A_{-}(1)\) are: \[\begin{pmatrix}2&0\\ -2&2\end{pmatrix}.\] * Case: \(A_{+}(1)=\begin{pmatrix}2&-1\\ -1&1\end{pmatrix}\). The possibilities for \(A_{-}(1)\) are: \[\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\ \begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] * Case: \(A_{+}(1)=\begin{pmatrix}2&0\\ -n&2\end{pmatrix}\). The possibilities for \(A_{\pm}(1)\) are: \[\left(\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\;\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}\right),\;\left(\begin{pmatrix}2&0\\ -1&2\end{pmatrix},\;\begin{pmatrix}2&-1\\ -2&2\end{pmatrix}\right),\;\left(\begin{pmatrix}2&0\\ 0&2\end{pmatrix},\;\begin{pmatrix}1&-1\\ -1&2\end{pmatrix}\right).\] * Case: \(A_{+}(1)=\begin{pmatrix}2&0\\ -n&1\end{pmatrix}\). The possibilities for \(A_{\pm}(1)\) are: \[\left(\begin{pmatrix}2&0\\ 0&1\end{pmatrix},\;\begin{pmatrix}2&-2\\ -1&2\end{pmatrix}\right).\] * Case: \(A_{+}(1)=\begin{pmatrix}1&0\\ -n&1\end{pmatrix}\). 
The possibilities for \(A_{\pm}(1)\) are: \[\left(\begin{pmatrix}2&0\\ 0&1\end{pmatrix},\;\begin{pmatrix}2&-2\\ -1&2\end{pmatrix}\right).\] In summary, the remaining possible pairs, up to a permutation of the indices and an change of sign, are given in the following table: \[\begin{array}{c|c}A_{+}(1)&A_{-}(1)\\ \hline\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}&\begin{pmatrix}2&0\\ 0&2\end{pmatrix}\\ \begin{pmatrix}2&-1\\ -2&2\end{pmatrix}&\begin{pmatrix}2&0\\ -1&2\end{pmatrix}\\ \begin{pmatrix}2&-1\\ -3&2\end{pmatrix}&\begin{pmatrix}2&0\\ -2&2\end{pmatrix}\end{array}\qquad\begin{pmatrix}A_{+}(1)&A_{-}(1)\\ \hline\begin{pmatrix}2&-1\\ -1&2\end{pmatrix}&\begin{pmatrix}2&-1\\ 0&2\end{pmatrix}\\ \begin{pmatrix}2&-1\\ -3&2\end{pmatrix}&\begin{pmatrix}2&0\\ -3&2\end{pmatrix}\end{array}\qquad\begin{pmatrix}2&-1\\ -3&2\end{pmatrix}\end{array}\qquad\begin{pmatrix}2&0\\ -2&2\end{pmatrix} \tag{3.7}\] We now start searching for possible \(A_{\pm}(z)\). **Lemma 3.5**.: _Let \(n\geq 1\). Suppose that_ \[A_{+}(1)=\begin{pmatrix}2&-1\\ -n&2\end{pmatrix},\quad A_{-}(1)=\begin{pmatrix}2&0\\ -(n-1)&2\end{pmatrix}.\] _Then_ \[A_{+}(z)=\begin{pmatrix}[2]_{r}&-z^{-a}\\ -z^{r-a}[n]_{2r}&[2]_{(2n-1)r}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}[2]_ {r}&0\\ -z^{2r-a}[n-1]_{2r}&[2]_{(2n-1)r}\end{pmatrix}\] _for some \(r,a\), where \([n]_{r}\) is the \(z\)-integer defined by_ \[[n]_{r}\coloneqq\frac{1-z^{rn}}{1-z^{r}}. \tag{3.8}\] Proof.: We can set \[A_{+}(z)=\begin{pmatrix}[2]_{r_{1}}&-z^{a}\\ -\sum_{i=1}^{n}z^{b}&[2]_{r_{2}}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}[2 ]_{r_{1}}&0\\ -\sum_{i=1}^{n-1}z^{c_{i}}&[2]_{r_{2}}\end{pmatrix}.\] Without loss of generality, we can assume that \[b_{1}\leq b_{2}\leq\cdots\leq b_{n},\quad c_{1}\leq c_{2}\leq\cdots\leq c_{n-1}.\] By the symplectic property (1.4), we have \[\sum_{i=1}^{n-1}(z^{-c_{i}}+z^{r_{1}-c_{i}})+z^{a}+z^{a-r_{2}}=\sum_{i=1}^{n}(z^ {-b_{i}}+z^{r_{1}-b_{i}}).\] Comparing the degree by using the conditions (1.2) and (3.4), we obtain the system of linear equations \[a=r_{1}-b_{1},\quad a-r_{2}=-b_{n},\quad r_{1}=c_{i}-b_{i}=b_{i+1}-c_{i}\quad(i =1,\ldots,n-1),\] which implies that \[r_{2}=(2n-1)r_{1},\quad b_{i}=(2i-1)r_{1}-a,\quad c_{i}=2ir_{1}-a.\] **Lemma 3.6**.: _Suppose that_ \[A_{+}(1)=\begin{pmatrix}2&-1\\ -1&2\end{pmatrix},\quad A_{-}(1)=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] _Then_ \[A_{+}(z)=\begin{pmatrix}1+z^{2r}&-z^{a}\\ -z^{2r-a}&1+z^{2r}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{2r}-z^{r}&0 \\ 0&1+z^{2r}-z^{r}\end{pmatrix}\] _for some \(r,a\)._ Proof.: We can set \[A_{+}(z)=\begin{pmatrix}1+z^{r_{1}}&-z^{a}\\ -z^{b}&1+z^{r_{2}}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{r_{1}}-z^{c}& 0\\ 0&1+z^{r_{2}}-z^{d}\end{pmatrix}.\] By (1.4), we have \(r_{1}=r_{2}=a+b=2c=2d\). **Lemma 3.7**.: _Suppose that_ \[A_{+}(1)=\begin{pmatrix}2&-1\\ -2&2\end{pmatrix},\quad A_{-}(1)=\begin{pmatrix}1&0\\ 0&2\end{pmatrix}.\] _Then_ \[A_{+}(z)=\begin{pmatrix}1+z^{2r}&-z^{a}\\ -z^{2r-a}-z^{3r-a}&1+z^{3r}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{2r }-z^{r}&0\\ 0&1+z^{2r}\end{pmatrix}\] _for some \(r,a\)._ Proof.: We can set \[A_{+}(z)=\begin{pmatrix}1+z^{r_{1}}&-z^{a}\\ -z^{b_{1}}-z^{b_{2}}&1+z^{r_{2}}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1 +z^{r_{1}}-z^{c}&0\\ 0&1+z^{r_{2}}\end{pmatrix}.\] Without loss of generality, we can assume \(b_{1}\leq b_{2}\). 
By (1.4), we have \(r_{1}=2c\), \(r_{2}=3c\), \(b_{1}=2c-a\), and \(b_{2}=3c-a\) **Lemma 3.8**.: _Suppose that_ \[A_{+}(1)=\begin{pmatrix}2&-1\\ -1&1\end{pmatrix},\quad A_{-}(1)=\begin{pmatrix}2&0\\ 0&2\end{pmatrix}.\] _Then_ \[A_{+}(z)=\begin{pmatrix}1+z^{2r}&-z^{a}\\ -z^{2r-a}&1+z^{2r}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{2r}&0\\ 0&1+z^{2r}\end{pmatrix}\] _for some \(r,a\)._ Proof.: We can set \[A_{+}(z)=\begin{pmatrix}1+z^{r_{1}}&-z^{a}\\ -z^{b}&1+z^{r_{2}}\end{pmatrix},\quad A_{-}(z)=\begin{pmatrix}1+z^{r_{1}}&0\\ 0&1+z^{r_{2}}\end{pmatrix}.\] By (1.4), we have \(r_{1}=r_{2}=a+b=2c\). Proof of Theorem 1.5 (2).: The remaining possibilities for finite type \(A_{\pm}(z)\in\mathbb{Y}\), up to a permutation of indices and change of sign, are the six families of the pairs given in Lemma 3.5-3.8, which contain the parameters \(r,a\). We can verify that these six families belong to \(\mathbb{Y}\), and they can be reduced to the pairs in Table 1 by change of slices.
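As a closing numerical illustration of Theorem 1.5 (1), one can also iterate a Y-system directly in the semifield \(\mathbb{R}_{>0}\) and observe the periodicity. The sketch below iterates the Y-system (3.1) attached to pair (1), starting from arbitrary positive initial data, and reports the smallest period it detects; the tolerance and the search bound are ad hoc choices, and for generic initial data the detected period is \(10\).

```python
# Minimal sketch: iterate the Y-system (3.1) (pair (1) of Table 1) over the
# positive reals and detect the smallest period, illustrating Theorem 1.5 (1).
import random

random.seed(0)

def step(state):
    """state = (Y1(u-2), Y1(u-1), Y2(u-2), Y2(u-1)); return the state shifted by one."""
    y1_2, y1_1, y2_2, y2_1 = state
    y1_0 = (y2_1 / (1.0 + y2_1)) / y1_2    # Y1(u) Y1(u-2) = (1 + Y2(u-1)^{-1})^{-1}
    y2_0 = (y1_1 / (1.0 + y1_1)) / y2_2    # Y2(u) Y2(u-2) = (1 + Y1(u-1)^{-1})^{-1}
    return (y1_1, y1_0, y2_1, y2_0)

init = tuple(random.uniform(0.5, 2.0) for _ in range(4))
state = init
for omega in range(1, 200):
    state = step(state)
    if max(abs(a - b) for a, b in zip(state, init)) < 1e-9:
        print("detected period:", omega)   # 10 for generic positive initial data
        break
```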
2304.08979
In ChatGPT We Trust? Measuring and Characterizing the Reliability of ChatGPT
The way users acquire information is undergoing a paradigm shift with the advent of ChatGPT. Unlike conventional search engines, ChatGPT retrieves knowledge from the model itself and generates answers for users. ChatGPT's impressive question-answering (QA) capability has attracted more than 100 million users within a short period of time but has also raised concerns regarding its reliability. In this paper, we perform the first large-scale measurement of ChatGPT's reliability in the generic QA scenario with a carefully curated set of 5,695 questions across ten datasets and eight domains. We find that ChatGPT's reliability varies across different domains, especially underperforming in law and science questions. We also demonstrate that system roles, originally designed by OpenAI to allow users to steer ChatGPT's behavior, can impact ChatGPT's reliability in an imperceptible way. We further show that ChatGPT is vulnerable to adversarial examples, and even a single character change can negatively affect its reliability in certain cases. We believe that our study provides valuable insights into ChatGPT's reliability and underscores the need for strengthening the reliability and security of large language models (LLMs).
Xinyue Shen, Zeyuan Chen, Michael Backes, Yang Zhang
2023-04-18T13:20:45Z
http://arxiv.org/abs/2304.08979v2
# In ChatGPT We Trust? Measuring and Characterizing ###### Abstract The way users acquire information is undergoing a paradigm shift with the advent of ChatGPT. Unlike conventional search engines, ChatGPT retrieves knowledge from the model itself and generates answers for users. ChatGPT's impressive question-answering (QA) capability has attracted more than 100 million users within a short period of time but has also raised concerns regarding its reliability. In this paper, we perform the first large-scale measurement of ChatGPT's reliability in the generic QA scenario with a carefully curated set of 5,695 questions across ten datasets and eight domains. We find that ChatGPT's reliability varies across different domains, especially underperforming in law and science questions. We also demonstrate that system roles, originally designed by OpenAI to allow users to steer ChatGPT's behavior, can impact ChatGPT's reliability. We further show that ChatGPT is vulnerable to adversarial examples, and even a single character change can negatively affect its reliability in certain cases. We believe that our study provides valuable insights into ChatGPT's reliability and underscores the need for strengthening the reliability and security of large language models (LLMs). ## 1 Introduction ChatGPT, as a large language model (LLM), has revolutionized the way users acquire information. Unlike conventional search engines, ChatGPT retrieves knowledge from the model itself and generates answers for users. ChatGPT's question-answering (QA) process typically flows smoothly like a natural chat, enhancing the user experience and encouraging the general public to migrate to it. By January 2023, ChatGPT has crossed the 100-million-user milestone, making it the fastest-growing platform in history.1 Footnote 1: [https://nerdynav.com/chatgpt-statistics/](https://nerdynav.com/chatgpt-statistics/). Recent research has shown that ChatGPT obtains capability on par with existing large language models in traditional NLP tasks, such as machine translation, sentiment analysis, and textual entailment [13, 38, 73], and emerging tasks, including code generation and task automation [1, 62]. Despite its impressive capabilities, ChatGPT has led to questions about its question-answering reliability in generic knowledge domains, e.g., science, technology, law, medicine, etc. These concerns are further compounded by the fact that ChatGPT's proficiency in articulating rich answers may foster trust among ordinary users who often lack the expertise to identify mistakes in the model's responses [58]. There exists some preliminary research evaluating the efficacy of ChatGPT on question-answering [13, 74]; however, they either use only one or two QA datasets or concentrate on questions of certain types. While these evaluations provide valuable insights into ChatGPT's capabilities with limited samples, they may not fully reflect the diversity and complexity of questions that ChatGPT could face. Moreover, ChatGPT allows users to steer its behaviors by describing directions via _system role_[2], such as "you are a helpful assistant." While multiple system roles have been widely discussed in the open-source community [3, 4, 5, 6] and integrated into various applications [7, 8, 9, 10], a systematic investigation into the impact of these system roles on ChatGPT's reliability is still lacking. In addition, due to ChatGPT's popularity, it is inevitable that malicious actors will, if not already, attack ChatGPT with adversarial examples. 
It is also unclear whether such attacks are indeed feasible. **Research Questions.** To address the above issues, in this paper, we measure ChatGPT's reliability in the generic question-answering (QA) scenarios from the following three perspectives. 1. **RQ1:** Is ChatGPT reliable in generic question-answering scenarios? 2. **RQ2:** Do system roles impact ChatGPT's reliability? 3. **RQ3:** Can ChatGPT respond reliably when facing adversarial examples? **Evaluation Framework.** To quantitatively evaluate ChatGPT's reliability in the generic question-answering use cases, we build an evaluation framework consisting of two main steps: establishing a representative evaluation dataset and assessing answers from ChatGPT (see Section 3). Concretely, we collect ten QA datasets across four answer types, i.e., yes/no (YN), multiple-choice (MC), extractive (EX), and abstractive (AB). We leverage thematic analysis to align them to a unified dataset, resulting in 5,695 questions and eight question domains, including history, law, general works, medicine, social science, science, technology, and recreation. We evaluate ChatGPT's reliability through two perspectives: _correctness_ and _unanswerable question identification_. When answering questions, ChatGPT should not only provide correct answers (_correctness_) but can also identify situations where no answer should be provided (_unanswerable question detection_). The latter capability is especially important in sensitive domains such as law and medicine, as the inquirer often lacks the expertise to discern errors among answers [58]. We also conduct qualitative analysis to understand why ChatGPT fails to answer some questions or refuses to answer them. **Is ChatGPT Reliable in Generic Question-Answering Scenarios.** We observe ChatGPT exhibits varying levels of reliability in different domains. While ChatGPT shows relatively high correctness in the _recreation_ and _technology_ questions, it underperforms in _law_ and _science_ domains. For example, the correctness of law questions on MC and EX tasks is respectively 7.79% and 8.07% lower than the overall average correctness. ChatGPT's ability to identify unanswerable questions is also limited with a rate of only 27.80%, indicating that when serving unanswerable questions, ChatGPT is prone to make meaningless guesses, rather than rejecting the questions (see Section 4.2). Through qualitative analysis, we identify four failure reasons and four refusal reasons used by ChatGPT. Interestingly, ChatGPT tends to use the reason "not mentioned" to reject to answer. Our findings underscore the need for further research to improve ChatGPT's reliability in specific domains and enhance its ability to identify unanswerable questions in question-answering scenarios. **Do System Roles Impact ChatGPT's Reliability.** We find that different system roles may directly affect ChatGPT's reliability. For instance, benign roles (Assistant, Expert, Expert-CoT, and Expert-R) improve ChatGPT's correctness on four QA tasks, while bad and jailbreak roles generally reduce ChatGPT's correctness and force it to select meaningfuless answers to unanswerable questions. Moreover, we find that their impact is not always evident from the role description alone. For instance, a jailbreak role may aim to circumvent restrictions but ultimately result in decreased correctness. Our finding, for the first time, reveals how system roles can impact ChatGPT's reliability. 
We, therefore, emphasize the importance of exploring more reliable system roles and evaluating their impact on ChatGPT before applying them to the applications. **Can ChatGPT Respond Reliably When Facing Adversarial Examples.** Given the growing popularity of ChatGPT, it is inevitable that malicious users will, if not already, attack ChatGPT by carefully crafting adversarial examples as its input. It is essential for ChatGPT to respond reliably to these adversarial examples. Therefore, we also measure ChatGPT's reliability against adversarial examples. We implement five decision-based adversarial attacks with three levels of perturbations. We discover that ChatGPT is highly vulnerable to sentence-level and character-level adversarial attacks. We further manually engineer a prompt, namely _leakage prompt_, to induce ChatGPT to disclose the confidence scores. This enables us to implement score-based attacks against ChatGPT (see Section 6.2) and brings an average ASR improvement of 0.38. Our qualitative analysis of the adversarial examples identifies certain interesting cases like changing only one character is sufficient enough to alter the output of ChatGPT. These results demonstrate the vulnerability of ChatGPT to adversarial examples, highlighting the potential safety/security risks associated with ChatGPT in practical applications. **Our Contributions.** The contributions of the paper are as summarized as follows: * We perform the first large-scale measurement of ChatGPT's reliability in the generic QA scenario with a carefully curated set of 5,695 questions across ten datasets and eight domains. Our results suggest ChatGPT's reliability varies among different domains. We also reveal the deficiency of ChatGPT in identifying unanswerable questions, suggesting that when serving unanswerable questions, ChatGPT tends to make meaningless guesses rather than rejecting answers. * We then, for the first time, systematically investigate the impacts of system roles on ChatGPT's reliability. We find system roles have the ability to not only steer ChatGPT's behaviors but also impact its correctness and decrease its unanswerable question detecting ratio. Worse, their impact is not always evident from the role description alone, emphasizing the importance of exploring more reliable system roles and proactively evaluating them before applying to the applications. * We also assess ChatGPT's reliability against adversarial attacks. Our results show that ChatGPT is vulnerable to sentence-level and character-level adversarial examples, highlighting the potential security risks associated with ChatGPT. ## 2 Background ### ChatGPT ChatGPT is an advanced large language model (LLM) that was launched by OpenAI in November 2022. At the time of writing, it is based on the GPT-3.52 architecture [22] and fine-tuned with Reinforcement Learning from Human Feedback (RLHF) [64] to reduce its harmful and untruthful outputs. Based on the enormous amount of knowledge it has learned during training, ChatGPT can generate human-like responses to a wide range of prompts and questions in a conversation-like manner. Moreover, ChatGPT allows users to define their task style by describing those directions via roles, which are termed _system role_ by OpenAI. For example, users can write a prompt starting with "You are a helpful assistant"3 to direct ChatGPT to behave as an assistant. 
Users can also craft certain jailbreak messages, such as "You are going to pretend to be DAN which stands for doing anything now" to get around ChatGPT's safeguard mechanisms and abuse ChatGPT to answer inappropriate questions [11]. While ChatGPT instructed within the system roles has been increasingly used [3, 4, 5, 6] and integrated into various applications [7, 8, 9, 10], a systematic investigation of the effect of these system roles is still lacking. Furthermore, ChatGPT's responses are not always correct. It can produce hallucination facts [37], exhibit social stereotypes [43], and struggle with mathematical and coding tasks [17], suggesting the potential unreliability. ### Question-Answering Task Question-Answering (QA) is one of the main tasks in NLP [25, 68]. Given questions (and the context if any), QA tasks evaluate a model's capability in reading comprehension [58, 23, 59], information retrieval [35], logical reasoning [71], and knowledge base [70]. Based on the answer types, QA tasks can be generally categorized into four types [41], i.e., yes/no [23], multiple-choice [50, 66, 45, 54], extractive [58, 59], and abstractive tasks [42, 49, 27] (see Table 1 for details). The yes/no task expects a simple "yes" or "no" response, while the multiple-choice task requires the model to select the correct answer from a set of given answer candidates. The extractive task requires the model to retrieve the answer from the context, and the abstractive task demands a free-form response from the model. Each of the four QA tasks elicits the model's capability distinctively and is evaluated with specific metrics; therefore, none of them can be easily substituted with one another. We refer the audience to [61] for the overview of QA techniques and datasets. ## 3 Evaluation Framework ### Evaluation Dataset **QA Datasets.** We employ ten widely used benchmark QA datasets in our study, including BoolQ [23], OpenbookQA (OQA) [50], RACE [45], ARC [24], CommonsenseQA (CQA) [66], SQuAD1 [59], SQuAD2 [58], NarrativeQA (NQA) [42], ELLS [27], and TruthfulQA (TQA) [49]. These datasets encompass a broad range of QA capabilities, such as reading comprehension (BoolQ, SQuDA1/2, RACE), reasoning (OQA, ARC), commonsense (CQA), full document comprehension (NQA, ELLS), and truthfulness (TQA). Furthermore, they comprise all four QA tasks [41], including yes/no (BoolQ), multiple-choice (OQA, RACE, ARC, CQA), extractive (SQuAD 1/2), and abstractive tasks (NQA, EL15, TQA). They thus offer a solid foundation to comprehensively evaluate the ChatGPT's reliability in various real-world QA scenarios. Their details are outlined below and summarized in Table 2. * **BoolQ [23]** is a yes/no reading comprehension dataset. The questions are derived from aggregated Google searches. The answers (yes/no) are marked by human annotators if certain Wikipedia pages contain sufficient information to address the questions. * **OpenbookQA (OQA) [50]** is a multiple-choice reasoning dataset. The questions are derived from 1,326 core science facts. The answers consist of 4 candidates, of which only one is correct, requiring reasoning between questions and the given science facts and common knowledge. * **RACE [45]** is a multiple-choice reading comprehension dataset. The questions are derived from English exams for Chinese students. The answers include 4 candidates, of which only one is correct, requiring reading comprehension of English passages. * **ARC [24]** is a multiple-choice reasoning dataset. 
The questions are derived from science exams (student level ranging from 3rd grade to 9th) that are incorrectly answered by retrieval-based and word co-occurrence algorithms. The answers consist of 4 candidates, of which only one is correct, requiring decent knowledge and reasoning in natural science. * **CommonsenseQA (CQA) [66]** is a multiple-choice reasoning dataset. The questions are derived from knowledge encoded in ConceptNet [63]. The answers comprise 5 candidates, of which only one is correct, requiring background knowledge that is trivial to humans but non-trivial to ML models' reasoning capability. \begin{table} \begin{tabular}{|p{42.7pt}|p{142.3pt}|} \hline \multicolumn{2}{|c|}{**Yes/NO QA (YN)**} \\ \hline Context & A Long Island Iced Tea is a type of alcoholic mixed drink typically made with vodka, tequila, light rum, triple sec, gin, and a splash of cola... \\ Question & Do long island Iced teas have tea in them \\ Answer & FALSE \\ \hline \multicolumn{2}{|c|}{**Multiple-choice QA (MC)**} \\ \hline \hline Context & You change the channels for the fourth time and realize that once again there's nothing on television that gets your attention... \\ Question & What is the most important for runners in a race? (A) Having fun. (B) Receiving respect. (C) Trying their best. (D) Winning the competition. \\ Answer & (C) \\ \hline \hline \multicolumn{2}{|c|}{**Extractive QA (EX)**} \\ \hline Context & The Panthers finished the regular season with a 15–1 record, and quarterback \& \& Newton \\ & was named the NFL Most Valuable Player (MVP)... \\ Question & Who is the quarterback for the Panthers? \\ Answer & Cam Newton \\ \hline \multicolumn{2}{|c|}{**Abstractive QA (AB)**} \\ \hline Context & Pierre Grassou de Fougeres is a mediocre painter who lives off painting forgeries... \\ Question & How come Vervelle is so impressed with Grasou? \\ Answer & He thinks Grassou has the talents of famous artists. \\ \hline \end{tabular} \end{table} Table 1: Four common QA tasks. * **SQuAD1 [59]** is an extractive reading comprehension dataset. The questions are derived from Wikipedia articles. The answers should be extracted from the given context (i.e., paragraphs) associated with the questions. * **SQuAD2 [58]** combines questions in SQuAD1 with unanswerable questions written by crowd workers. The unanswerable questions resemble answerable ones but cannot be found in the given context. * **NarrativeQA (NQA) [42]** is an abstractive full document comprehension dataset. The questions are derived from stories, such as books and movie scripts. The answers are human-generated free-form text using just summaries or the full story text. * **ELIS [27]** is an abstractive full document comprehension dataset. The questions are derived from the threads in the "Explain Like I'm Five" (ELIS) subreddit (an online community that provides answers to questions that are comprehensible by five-year-olds). The answers are free-form text with the highest voting scores in those threads. * **TruthfulQA (TQA) [49]** is an abstractive truthfulness dataset. It was recently introduced to understand if LLMs can avoid generating false answers learned from imitating human texts. The questions, spanning 38 categories (e.g., medicine, law, and finance), are single-sentence questions and purposely designed so that some humans would answer wrongly due to a false belief or misconception. Each question has sets of true and false reference answers and a source that supports the answers. 
**QA Dataset Sampling.** Our initial dataset comprises the development and test sets of each QA dataset. Records (question-answering pairs) are randomly sampled from datasets whose validation set (or test set if the ground-truth label is offered) contains over 1k question-answering pairs. Otherwise, the complete dataset is retained. Note, RACE consists of two subsets, RACE-M from middle school exams and RACE-H from high school exams, respectively. For each subset, we extract 1,000 records from its validation set, resulting in a total of 2,000 records from the RACE dataset. This sampling method is motivated by three factors. First, we conduct a thematic analysis to group records into semantically similar domains. Given the necessity of human inspection, a smaller dataset is more practical. Second, data imbalance issues can be addressed to a certain extent through this sampling method. For example, OQA and ARC concentrate on science and neglect other areas, such as law and history. Consequently, more data from underrepresented domains can be obtained. Finally, due to ChatGPT API's slow response time of 10-20 seconds per query, evaluating all records is impractical. **Thematic Analysis.** We then perform thematic analysis [18] to pre-process the collected samples. The primary objective of thematic analysis is to categorize the samples based on their similarity in terms of semantics and domains, thereby facilitating meaningful and in-depth comparisons. To achieve this, we leverage BERTopic [31] to automatically topic modeling questions and then apply deductive analysis to assign these topics into broad domains. We test five pre-trained embedding models for BERTopic and choose the one with the highest \(C_{V}\) coherence score (0.67) [60], which is GTR-T5-XL. To mitigate potential anomaly samples, we only include questions whose representative score is larger than 0.5. In the end, we obtain 219 topics and 5,695 questions, out of which 410 questions are unanswerable. With manual inspection, we find the results are clustered by topics, e.g., Super Bowl, Sherlock Holmes story, and so on. We then utilize a priori coding, a common deductive approach in HCI, psychology, and usable security that categorize data samples with the guide of established taxonomies or hypotheses [18, 28, 32, 46]. We refer to the Library of Congress Classification [20] as our taxonomy as well as initial codes. Two authors independently refine and merge codes over the process of coding. After the first coding round, the authors discuss and adapt the codebook until all authors agreed on the codebook. They then independently re-code all questions and merge their codes for analysis. The final codebook (Table 9 in the Appendix) includes eight codes/domains namely history, law, general works, medicine, social science, science, technology, and recreation. Our results show a good inter-coder agreement (kappa = 0.74). Figure 2 shows the Sankey diagram of our testbed. We recognize that datasets are often collected from a single source and involved various domains. For example, SQuAD1's data source is Wikipedia, but the questions cover eight domains. Therefore, thematic analysis enables us to better assess ChatGPT's capability across different data sources, datasets, answer types, and question domains. 
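The clustering step can be sketched as follows. This is an illustrative outline rather than the exact configuration used in the study: `load_sampled_questions` is a hypothetical loader for the sampled QA records, and filtering on the assigned-topic probability is a stand-in for the representative-score threshold of 0.5.

```python
# Illustrative sketch of the topic-modeling step: embed questions with GTR-T5-XL
# and cluster them with BERTopic.  `load_sampled_questions` is a hypothetical
# helper, and the probability filter is an assumption standing in for the
# representative-score threshold described above.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

questions = load_sampled_questions()      # list[str]: the sampled QA questions

embedder = SentenceTransformer("sentence-transformers/gtr-t5-xl")
topic_model = BERTopic(embedding_model=embedder, calculate_probabilities=True)
topics, probs = topic_model.fit_transform(questions)

kept = [
    q for q, t, p in zip(questions, topics, probs)
    if t != -1 and p.max() > 0.5          # drop outliers and low-confidence assignments
]
print(topic_model.get_topic_info().head(10))
print(len(kept), "questions retained for a priori coding")
```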
**Note.** We acknowledge that certain domains, such as law, medicine, and technology, may be underrepresented in our study. This may be attributed to the a priori coding procedure, in which we have refrained from merging these three domains into a broader domain as we have done with other domains. For example, the recreation domain is derived from music, fine arts, literature, and movies (see Table 9 in the Appendix). Nevertheless, we ensure that each domain is adequately represented in our study, with the technology domain containing the fewest questions at 165.

\begin{table} \begin{tabular}{|l|c|c c c c|c c|c c c|} \hline \hline **QA Task** & **Yes/NO QA (YN)** & \multicolumn{4}{c|}{**Multiple-choice QA (MC)**} & \multicolumn{2}{c|}{**Extractive QA (EX)**} & \multicolumn{3}{c|}{**Abstractive QA (AB)**} \\ **Datasets** & **BoolQ** & **OQA** & **RACE** & **ARC** & **CQA** & **SQuAD1** & **SQuAD2** & **NQA** & **ELI5** & **TQA** \\ \hline \hline **Has context?** & ✓ & & ✓ & & & ✓ & ✓ & ✓ & & \\ **\# of questions** & 1000 & 500 & 2000 & 869 & 1221 & 1000 & 1000 & 1000 & 1000 & 817 \\ **\# of filtered questions** & 487 & 250 & 984 & 414 & 600 & 710 & 698 & 747 & 413 & 390 \\ **\# of idk questions** & & & & & & & 356 & & & 54 \\ **Evaluation metric** & Acc & \multicolumn{4}{c|}{Acc} & \multicolumn{2}{c|}{F1} & \multicolumn{3}{c|}{RougeL} \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of QA datasets included in our testbed: one yes/no, four multiple-choice, two extractive, and three abstractive datasets. “idk” denotes unanswerable questions (e.g., 356 out of 698 questions from SQuAD2 are unanswerable).

### Evaluation Pipeline

**Overview.** Our evaluation pipeline consists of four steps: query formation, ChatGPT invocation, answer extraction, and evaluation. The workflow is illustrated in Figure 1.

**Query Formation.** A complete query to ChatGPT includes two messages: a _system_ message that sets the system role (see Section 2.1) and a _user_ message that asks the question. For the _system_ message, we leave it blank to access the native ChatGPT in RQ1 (Section 4) and explore how different system roles affect ChatGPT's reliability in RQ2 (Section 5). For the _user_ message, we use prompts adopted from [5, 43] to instruct ChatGPT to provide answers in the required format for different QA tasks. Concretely, we encapsulate the prompt with the question and necessary information, e.g., context and options, as the _user_ message. The prompts for each QA task are presented in Table 8 in the Appendix. Note that we do not consider advanced techniques such as in-context learning [52] when constructing our queries, as these methods may not be familiar or easily accessible to average users.

**ChatGPT Invocation.** Our experiments are conducted on the March 1st version of ChatGPT with its official API.4 To ensure the reproducibility of the results, we choose the model endpoint "gpt-3.5-turbo-0301," as it is a snapshot of GPT-3.5-turbo from March 1st, 2023, with no updates. Footnote 4: [https://platform.openai.com/docs/guides/chat](https://platform.openai.com/docs/guides/chat).

**Answer Extraction.** Benefiting from ChatGPT's instruction-following nature [40], we observe that ChatGPT's responses in most cases follow the format we defined in the prompt, facilitating automatic answer extraction for different QA tasks. In accordance with the required answer types outlined in Section 2.2, we extract the appropriate answer from ChatGPT's responses.
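The query-formation and invocation steps above can be summarized in a short sketch. It uses the pre-1.0 `openai` Python interface that was current at the time; the prompt wording, the key handling, and the option-extraction regular expression are placeholders and assumptions, not the paper's exact code (the actual prompts are listed in Table 8 of the paper).

```python
# Sketch of a single query and answer extraction; prompt wording is a placeholder.
import re
import openai

openai.api_key = "sk-..."  # set via an environment variable in practice

def ask_chatgpt(user_prompt, system_role=""):
    """Send one (system, user) message pair to the pinned model snapshot."""
    messages = []
    if system_role:                      # left empty for the "native" ChatGPT in RQ1
        messages.append({"role": "system", "content": system_role})
    messages.append({"role": "user", "content": user_prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",      # fixed snapshot for reproducibility
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]

def extract_option(answer_text):
    """Pull an '(A)'-style choice out of a response for YN/MC tasks, if present."""
    match = re.search(r"\(([A-E])\)", answer_text)
    return match.group(1) if match else None
```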
Concretely, we extract the options selected by ChatGPT, e.g., (A), for YN and MC tasks; the substring tokens for EX tasks; and retain the complete ChatGPT response for AB tasks. For responses that do not follow the expected format, two human annotators are assigned to independently extract the answers or determine the refusal reasons. They then discuss and arrive at a conclusion. This is a _de facto_ practice when interacting with LLMs [43].

**Evaluation.** We consider two critical capabilities to assess ChatGPT's reliability: _correctness_ and _unanswerable question identification_. First, ChatGPT should answer correctly when serving questions (_correctness_). To measure this capability, following previous work [43], we calculate the accuracy for YN and MC tasks and the F1 and RougeL metrics for EX and AB tasks, respectively. Second, ChatGPT should recognize situations where no answers can be provided [58]. This capability is particularly vital in sensitive domains like law, where the inquirer may lack the expertise to distinguish errors among answers. To evaluate this capability, we calculate ChatGPT's identification rate on unanswerable questions (_unanswerable question identification_). In addition, we measure the fluency of the questions and the generated answers using the perplexity (PPL) metric [72, 57]. A higher PPL indicates the sentence is less fluent. Note that we do not calculate the perplexity for EX tasks, as the answers are typically too short for a representative perplexity score.

**Note.** ChatGPT is essentially a generative language model; hence its answer generation is stochastic. All experiments are therefore repeated twice, and we report the mean values in the rest of the paper.

## 4 Is ChatGPT Reliable in Generic Question-Answering Scenarios?

**Motivation.** ChatGPT's ability to understand complex questions and generate rich responses in natural language makes user interaction with it feel like a seamless question-and-answer process. This proficiency may foster trust in ordinary users toward the responses provided by ChatGPT. However, to the best of our knowledge, current research has not comprehensively benchmarked whether ChatGPT can provide correct answers in various domains (e.g., science, history, etc.) and identify situations where no answer should be given in sensitive domains (e.g., law, medicine, etc.). Therefore, we address these essential questions in this section.

Figure 1: Workflow of the evaluation framework.

Figure 2: Sankey diagram illustrating the question domain distributions. The first column represents the data source, the second column refers to the dataset, and the last column displays question domains. The thickness of each edge corresponds to the number of questions.

### Correctness

**Overall Correctness.** As we can see in Figure 3, ChatGPT's correctness varies across question domains. It achieves good correctness on _recreation_ and _technology_ while underperforming in _law_ and _science_. For instance, the differences between the average scores on recreation questions and the overall average scores for the YN, MC, EX, and AB tasks are +1.73%, +2.27%, +33.11%, and +0.71%. In contrast, the differences between the average correctness scores on law questions and those of the same four tasks are -0.75%, -7.79%, -8.07%, and -4.95%. By carefully inspecting ChatGPT's answers to failed cases, we find that ChatGPT prefers to create hallucinatory facts when answering law questions (see Section 4.3 for detailed failure analysis).
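The correctness numbers reported throughout rely on standard metric definitions. Since the paper does not spell out its implementation, the following is only a common way these scores are computed: SQuAD-style token-overlap F1 for extractive answers and ROUGE-L via the `rouge_score` package for abstractive answers.

```python
# Illustrative metric computation; the paper's exact implementation is not specified.
import collections
from rouge_score import rouge_scorer

def token_f1(prediction, reference):
    """SQuAD-style token-overlap F1, typically used for extractive (EX) answers."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(prediction, reference):
    """ROUGE-L F-measure, typically used for abstractive (AB) answers."""
    return _scorer.score(reference, prediction)["rougeL"].fmeasure
```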
**Question Fluency.** We also investigate the relationship between question fluency, ChatGPT answer fluency, and the corresponding correctness. Figure 3(a) and Figure 9(a) in the Appendix display the bivariate distribution of question and ChatGPT answer fluency. We exclude the EX task, as its answers are typically too short for a representative perplexity score. Our analysis reveals a positive correlation between question fluency and ChatGPT answer fluency, with a Pearson correlation coefficient of 0.1 (\(p<0.1\)) in almost all datasets, except for the BoolQ and TruthfulQA datasets. This suggests that ChatGPT tends to answer in the same ambiguous way if a question is less fluent. This, in turn, leads to unstable reliability, as illustrated in Figure 3(b) and Figure 9(b) in the Appendix, where we see an increase in the variance (indicated by the shaded area) as the question perplexity increases. However, it is difficult to conclude whether higher question perplexity results in better or worse ChatGPT reliability, as we observe different tendencies across datasets.

**Question Tense.** Tense refers to the grammatical concept indicating when an action or state of being occurs. Language models need to identify question tenses to provide correct answers [51, 54]. We evaluate ChatGPT's proficiency in handling various tenses by utilizing spaCy5 to conduct morphological analysis on our test dataset. We present the correctness for different tenses in Figure 5. Our analysis reveals that, in most cases, ChatGPT attains slightly better correctness on present-tense questions. For instance, in the ELI5 dataset, present-tense questions yield a mean RougeL score of 0.21, whereas the past-tense question score is only 0.18. We speculate that this could be due to ChatGPT's training set bias. However, without access to the ChatGPT training set, we leave this question open for future research. Footnote 5: [https://spacy.io/usage/v2](https://spacy.io/usage/v2).

### Unanswerable Question Identification

In addition to providing reliable responses, a crucial capability for large language models is to recognize situations where no answer should be provided [58]. This capability is especially critical in sensitive domains such as law and medicine, where the inquirer often lacks the expertise to identify mistakes in the model's answers [58]. To evaluate ChatGPT's reliability in this regard, we measure ChatGPT's identification rate on unanswerable questions marked in the original datasets (see Table 2). As shown in Table 3, ChatGPT identifies only 27.80% of unanswerable questions and also produces 0.91% false alarms on answerable questions, indicating suboptimal reliability. This low identification rate suggests that when serving unanswerable questions, ChatGPT tends to make meaningless guesses rather than rejecting them. For example, consider the question _"Who composed the tune of 'Twinkle, Twinkle, Little Star'?"_ The composer is still a mystery, so no answer should be offered; ChatGPT nevertheless responds _"The tune of the nursery rhyme 'Twinkle, Twinkle, Little Star' was composed by Wolfgang Amadeus Mozart."_ Our finding raises concerns about ChatGPT's reliability, particularly in sensitive domains.

Figure 3: ChatGPT correctness across domains and datasets. The white cell represents no questions.

Figure 4: Fluency visualization of questions and ChatGPT answers. Fluency is measured by the perplexity metric. The higher the PPL, the lower the fluency.

Figure 5: ChatGPT’s correctness with different tenses.
We hope that this study may motivate future research in this direction. ### Qualitative Analysis **Failure Analysis.** To investigate possible reasons for ChatGPT's suboptimal reliability in law and science (see Section 4.1), we randomly sample 100 records in our testbed and inspect the questions, context (if present), correct answers, and ChatGPT's responses. We observe that the most common failure is caused by hallucinatory facts. For instance, ChatGPT's answer to the law question _"In the U.S., can the victim of a domestic violence case drop the charges?"_ is _"Yes, the victim of a domestic violence case can choose to drop the charges in the United States, but it is important to note that the decision to do so is often complex and can have serious consequences."_ This response, however, contradicts the United States law and policy.6 The existence of hallucinatory facts has also been recognized by other studies [13, 17]. Footnote 6: https://www. criminaldefenselawyer.com/legal-advice/dropping-domestic-violence-charge. Moreover, we find that ChatGPT also exhibits other forms of failure, including casual answers, lack of knowledge, and referential confusion (see Table 10 in the Appendix). For instance, when asked, _"Most people think of zoos as safe havens for animals, where problems such as difficulty finding food and avoiding predators don't exist... What are the advantages to elephants in the wild according to the passage?"_ ChatGPT's answer is _"(D) They are freer to move"_ which is different from the correct answer _"(C) They live in large social groups"_ and does not provide any reasoning for choosing this answer. We suspect this behavior is possibly due to its reasoning limitations, as it can only generate responses based on the training data it has processed [17]. Therefore, ChatGPT may not thoroughly understand the physical and social world, leading to incoherent answers. **Refusal Analysis.** We manually analyze ChatGPT's responses and identify four primary reasons for refusal: "not mentioned," "inappropriate," "it depends," and "no knowledge." A detailed explanation of each reason, along with examples, can be found in Table 11 in the Appendix. We further exclude unanswerable questions from our analysis and focus on those that ChatGPT could theoretically answer. Figure 6 shows the distribution of refusal reasons. We observe that ChatGPT's most common reason for refusal is that it considers the context insufficient to provide a reliable answer, as indicated by the reason "not mentioned." For example, when asked _"Tweed is a rare fabric in modern clothing; what brand should I look for when buying it?"_ (see Table 11 in the Appendix) where the correct answer is option (E) _"Eddie Bauer"_ as it is the only brand in the options. However, ChatGPT believes none of the options are correct and thus refuses to make a choice. This suggests the deficiencies of ChatGPT. In some cases, ChatGPT may be unable to provide an answer or acknowledge its limitations. Instead, ChatGPT blames the question for being ambiguous or poorly worded, potentially influencing the user's judgment of its reliability. ### Takeaways We demonstrate that ChatGPT exhibits different reliability in various domains. While ChatGPT shows relatively high correctness in the recreation and technology questions, it underperforms in law and science domains. We also identify ChatGPT's deficiencies in identifying unanswerable questions with a rate of only 27.80%. 
This suggests that when serving unanswerable questions, ChatGPT is prone to make meaningless guesses rather than rejecting the questions. With qualitative analysis, we also reveal four failure reasons and four refusal reasons used by ChatGPT. Interestingly, the most common reason ChatGPT used to reject questions is "not mentioned" rather than "no knowledge." Considering questions in the refusal analysis are all answerable, this indicates that ChatGPT may be dishonest in admitting its limitations, potentially influencing the user's judgment of its capability. Our findings emphasize the pressing need for continued research and development to enhance ChatGPT's reliability in certain domains, identifying unanswerable questions and fostering reliability by improving its Reinforcement Learning from Human Feedback (RLHF) subsystem. \begin{table} \begin{tabular}{l|c c|c} \hline \hline **GT/ChatGPT** & **Unanswerable** & **Answerable** & **Sum** \\ \hline **Unanswerable** & 114 (27.80\%) & 296 (72.19\%) & 410 \\ **Answerable** & 48 (0.91\%) & 5,237 (99.11\%) & 5,285 \\ \hline \hline \end{tabular} \end{table} Table 3: Statistics of ChatGPT on identifying unanswerable questions. GT denotes the ground-truth unanswerable labels. Figure 6: Sankey diagram illustrating the refusal reasons. The thickness of each edge corresponds to the number of questions. ## 5 Do System Roles Impact ChatGPT's Reliability? **Motivation.** ChatGPT allows users to leverage its system role [2] to customize their tasks (i.e., guiding their model's behavior by setting up a specific system prompt via OpenAI API). This capability has gained immense popularity in the community [3, 4, 5, 6] and has been incorporated into various applications [7, 8, 9, 10]. However, a systematic inquiry into the impact of these system roles on ChatGPT's reliability is still lacking. We thus fill this gap in this section. We consider four benign roles, two bad roles, and two jailbreak roles. The benign roles include an assistant (Assistant), an expert (Expert), an expert using zero-shot chain-of-thought prompt [44] (Expert-CoT), and an expert intended to refuse unanswerable questions (Expert-R). The bad roles include a bad assistant (Bad) and a bad assistant with an additional emphasis on providing convincing but incorrect answers (Bad-M). We also consider two in-the-wild jailbreak roles, namely DAN7 and ChatAGI.8 These system roles are designed to bypass the system's safeguards and usage policies. DAN, as the name suggests, aims to instruct ChatGPT to **"do** anything **now**" while ChatAGI focuses on providing unrestricted answers. Additional details on these system roles are provided in Table 12 in the Appendix. Footnote 7: [https://www.reddit.com/r/ChatGPTPromptGenius/Comments/106app/data_do_mything_now/](https://www.reddit.com/r/ChatGPTPromptGenius/Comments/106app/data_do_mything_now/). Footnote 8: [https://www.reddit.com/r/ChatGPTPromptGenius/comments/11vc27e/the_2_most_important_bypass_proports_available/](https://www.reddit.com/r/ChatGPTPromptGenius/comments/11vc27e/the_2_most_important_bypass_proports_available/). ### Correctness **Benign Roles.** Table 4 summarizes ChatGPT's correctness with different system roles. We observe that benign roles can enhance ChatGPT's correctness across four QA tasks. Take the OQA dataset as an example, Assistant, Expert, Expert-CoT, and Expert-R roles improve ChatGPT's correctness by 2.80%, 5.00%, 4.80%, and 3.20%, respectively, compared to that of ChatGPT without a system role. 
Additionally, using the CoT prompt, which instructs users to think step by step, can further improve ChatGPT's correctness in some cases. For instance, the Expert-CoT role achieves 75.16% correctness on the SQuAD1 dataset, while the Expert and Expert-R roles obtain 72.52% and 71.63% correctness, respectively. However, benign roles may underperform in certain datasets. On the SQuAD2 dataset, we find that all benign roles fail to improve ChatGPT's correctness except for the Expert-R role. We attribute this drop to the decreased capability of detecting unanswerable questions (see Section 5.2). To compare, the Expert-R role, which is instructed to reject unanswerable questions, improves the correctness by 1.42%. **Bad Roles.** To our surprise, bad roles do not necessarily harm ChatGPT's correctness. For instance, the Bad role actually increases ChatGPT's correctness in most datasets. As it is only slightly different from the Assistant role, i.e., by changing "assistant" to "bad assistant" (see Table 12 in the Appendix), we speculate that ChatGPT might be robust against simple negative modal words such as "bad." Nevertheless, the Bad-M role, which requires ChatGPT to deliberately return wrong answers, results in an apparent decrease in correctness across most datasets. For example, in the CQA dataset, the Bad-M role reduces correctness from 75.92% to 35.33%. **Jailbreak Roles.** We find that jailbreak roles can also affect ChatGPT's correctness, especially the DAN role, which drops the correctness of all datasets. For example, ChatGPT with the DAN role obtains 65.24% correctness (measured by accuracy) on the RACE dataset, which represents almost a 20% drop compared to that of ChatGPT without a system role. Moreover, for both the DAN and ChatAGI roles, the correctness on SQuAD2 heavily decreases from 48.00% to 33.02% and 38.76%, respectively. By manually inspecting the responses, we speculate that this might be credited to the purpose of the two jailbreak roles. Recall that the main purpose of the jailbreak roles is to break restrictions imposed by ChatGPT's safeguards. The side effect is that they may also force ChatGPT to find meaningless answers to unanswerable questions to comply with the instructions. We provide additional analysis in Section 5.2. ### Unanswerable Question Identification We report ChatGPT's unanswerable question identification ratio in Figure 7. 
Surprisingly, we find that all system roles decrease ChatGPT's ability to detect unanswerable questions, particularly the jailbreak roles. For instance, when instructed within the DAN role, ChatGPT can only identify 8.66% of unanswerable questions. This decrease can be attributed to the purpose of jailbreak roles, which are designed to motivate ChatGPT to actively answer questions, potentially impacting its ability to detect unanswerable questions. Additionally, the Expert-R role shows improved identification capability in this scenario, with a rate of 25.61%. This improvement can be credited to the instruction to refuse uncertain questions (see Table 12 in the Appendix). However, even with the improved result, the detection rate is still lower than that of ChatGPT without a system role (27.80%).

\begin{table} \begin{tabular}{c|c|c c c c|c c|c c c} \hline \hline & **BoolQ** & **ARC** & **RACE** & **CQA** & **OQA** & **SQuAD1** & **SQuAD2** & **TQA** & **ELI5** & **NQA** \\ & **Acc** & **Acc** & **Acc** & **Acc** & **Acc** & **F1** & **F1** & **RougeL** & **RougeL** & **RougeL** \\ \hline **W/o** & 84.09 & **92.39** & 84.91 & 75.92 & 78.00 & 66.61 & 48.00 & 52.42 & 20.09 & 28.06 \\ \hline **Assistant** & **86.86** & 91.67 & 85.42 & 77.67 & 80.80 & 72.05 & 41.36 & 54.46 & **20.57** & 28.38 \\ **Expert** & 85.73 & 91.55 & 84.96 & **77.83** & **83.00** & 72.52 & 41.02 & 54.72 & 20.09 & 27.81 \\ **Expert-CoT** & 85.73 & 91.06 & **85.67** & 77.50 & 82.80 & **75.16** & 41.81 & **54.97** & 20.03 & 26.88 \\ **Expert-R** & 85.32 & 91.43 & 84.91 & 75.58 & 81.20 & 71.63 & **49.42** & 53.83 & 20.40 & 28.36 \\ \hline **Bad** & 86.04 & 91.43 & **85.67** & 76.58 & 81.00 & 71.63 & 42.41 & 54.08 & 20.27 & **28.64** \\ **Bad-M** & **64.48** & **69.20** & 83.69 & **35.33** & **58.00** & **51.53** & 36.30 & **42.73** & 20.36 & 25.38 \\ \hline **DAN** & 83.37 & 89.61 & **65.24** & 71.50 & 77.20 & 59.76 & **33.02** & 47.07 & **19.66** & **20.63** \\ **ChatAGI** & 85.73 & 91.91 & 84.15 & 75.25 & 81.20 & 69.51 & 38.76 & 53.95 & 19.86 & 24.31 \\ \hline \hline \end{tabular} \end{table} Table 4: ChatGPT’s correctness with different system roles. We use bold text to highlight the maximum correctness and red text to represent the lowest correctness. W/o denotes ChatGPT without system roles.

### Qualitative Analysis

**Failure Analysis.** We reuse the same 100 questions from the testbed in Section 4.3 to better understand how different system roles affect ChatGPT's correctness. We observe that the same failure reasons seen for the native ChatGPT also appear in ChatGPT's answers with system roles, e.g., hallucinatory facts, casual answers, lack of knowledge, and referential confusion. Moreover, ChatGPT with system roles tends to supply more convincing statements, e.g., detailed fake data or irrelevant theory, to support its false answers, making it more challenging to identify whether its answers are true or false. Table 5 shows a typical example of hallucinatory facts. When answering the question _"Which states are more obese than West Virginia"_, ChatGPT under two system roles, i.e., Expert-CoT and Bad, claims in both cases that its answer refers to CDC data from 2019 or 2020 with specific numbers, all of which are fake.
In addition, these six system roles also cannot mitigate ChatGPT's insufficient reasoning capability. For example, when asking _"When it's flying, a plane has no friction with the (A) wings (B) ground (C) air (D) clouds,"_ all system roles choose option (C) air or (D) clouds to answer this question, although the correct answer is the option (B) ground. ChatGPT with the Expert role explains its choice via aerodynamics, i.e., _"The air flowing over the wings produces an upward force called lift, which is balanced against the weight of the plane. Therefore, the correct option is (C) air."_ This theory is correct but irrelevant to the question. Based on these observations, we find that ChatGPT is still limited and unreliable when answering questions, even with system roles. Moreover, the fake data or irrelative theory provided by ChatGPT with system roles can cause users to trust its answers without verifying the accuracy themselves. This further exacerbates the consequences of ChatGPT's unreliability. **Refusal Analysis.** Figure 8 shows the rejected numbers of answerable questions. We first notice that all system roles enable ChatGPT to reject fewer questions. For example, when ChatGPT is not instructed within system roles, it rejects 48 questions on average. But with the Assistant, Expert, and Expert-CoT roles, the rejected question numbers decrease to 15, 13, and 11. We also observe that the Expert-R role triggers ChatGPT to reject more questions than other system roles. This is expected, as the Expert-R role encourages ChatGPT to carefully consider questions and refuse uncertain ones (see Table 12 in the Appendix), reflecting in the higher rejected question number. Among all the rest system roles, interestingly, we find that the two jailbreak roles do not perform as well as the Expert-CoT role in reducing ChatGPT's rejected question numbers, even though this is their main design purpose. For instance, the DAN and ChatAGI roles only reduce ChatGPT's rejected number from 48 to 21 and 14, respectively, while the Expert-CoT role obtains the lowest number of rejected questions, i.e., 11 questions. Our finding indicates the ineffectiveness of these in-the-wild jailbreak roles. Even with multiple manually optimized instructions, these jailbreak roles fail to decrease the rejected number as effectively as the Expert-CoT role, a simple system role with only one additional instruction. By analyzing the refusal reasons, we find the most common reason for ChatGPT to refuse questions is "not mentioned" followed by "inappropriate," "it depends," and "no knowledge" as we observed in Section 4.3. This suggests that system roles enable ChatGPT to answer any type of rejected questions. ### Takeaways We find that different system roles may directly influence ChatGPT's correctness. For instance, benign roles (Assistant, Expert, Expert-CoT, and Expert-R) improve ChatGPT's correctness on four QA tasks, while bad and jailbreak roles usually reduce ChatGPT's correctness and force it to select \begin{table} \begin{tabular}{p{42.7pt}|p{341.4pt}} \hline \hline **Question** & Which states are more obese than West Virginia \\ \hline **Answer** & No states are more obese than West Virginia \\ \hline **Native ChatGPT** & According to recent studies, the states with higher obesity rates than West Virginia include Mississippi, Arkansas, Kentucky, Tennessee, and Louisiana... 
\\ \hline **Expert-CoT** & According to the Centers for Disease Control and Prevention (CDC) data, as of 2020, the following states have a higher percentage of obese adults (BMI of 30 or higher) than West Virginia, which has the highest obesity rate in the United States: 1. Mississippi, 2. Alabama... \\ \hline **Bad** & Based on data from the Centers for Disease Control and Prevention (CDC), as of 2019, the following states have a higher percentage of adult obesity than West Virginia (which has a rate of 35.6%): 1. Mississippi - 39.7%; 2. Alabama - 36.2%... \\ \hline \hline \end{tabular} \end{table} Table 5: Example of failure analysis with different system roles. Figure 7: Unanswerable question detection ratio with different system roles. meaningless answers to unanswerable questions. We argue these observations on system roles are critical and must be given attention by users. System roles possess the capability to not only steer ChatGPT's behaviors but also impact its correctness as well as decrease its unanswerable question detecting ratio. Worse, its impact is not easily discernible from the system role itself. For instance, a jailbreak role may aim to break restrictions but ultimately result in decreased correctness. This finding highlights the need to search for more reliable system roles and thoroughly evaluate the reliability of the system role before applying it to a real application. ## 6 Can ChatGPT Respond Reliably When Facing Adversarial Examples? **Motivation.** Based on our findings in RQ1 and RQ2, we have identified several factors that can impact ChatGPT's reliability, including question domains and system roles. Given ChatGPT's unprecedented popularity, it is inevitable that malicious users will, if not already, attack ChatGPT by carefully crafting adversarial examples as its input. In this section, we present our analysis of ChatGPT's reliability against adversarial examples. These adversarial examples preserve the semantic meaning while allowing us to analyze ChatGPT's behavior given varying degrees of perturbations. ### Threat Model **Adversary's Goals.** Following previous work in adversarial attacks [26, 39, 47, 72, 36], the adversary's objective is to utilize perturbed but semantic-preserving questions to elicit erroneous responses from ChatGPT. Ideally, the perturbed questions should satisfy the following criteria. * **Effectiveness.** The perturbed questions should be effective in inducing ChatGPT to generate incorrect responses. * **Quality.** The perturbed questions should maintain the semantic meaning and fluency of the original questions while minimizing any grammatical errors or modifications. * **Efficiency.** The adversary should identify the perturbed question that can achieve the desired effect with minimal queries, as ChatGPT's API incurs a charge per query. **Adversary's Capabilities.** We assume that the adversary operates in a real-world setting and has only limited capabilities. Specifically, the adversary is only able to query ChatGPT and has no access to the model weights, output probabilities, hyperparameters, or configuration documents. ### Methodology **Decision-Based Adversarial Attacks.** We consider five decision-based adversarial attacks: VIPER [26], Nat [14], Swap [14], Synonyms [16], and SCPN [36]. VIPER [26] modifies questions at the character level by replacing characters with their nearest visual neighbors, e.g., "a" to "a." 
Nat [14] collects naturally occurring errors, such as typos and misspellings, from available corpora and utilizes a look-up table for possible lexical replacements. Swap [14] introduces artificial noise into questions by swapping letters within words. Synonyms [16] generates adversarial examples by replacing words with their synonyms based on predefined substitution rules. SCPN [36] is a sentence-level adversarial attack that produces paraphrases of the target questions using a pre-trained model and syntax templates.

**Score-Based Adversarial Attacks.** We manually engineer a prompt, namely the _leakage prompt_, to induce ChatGPT to leak the confidence score for potential answer candidates. The prompt consists of two restriction sentences for the answer, one sentence to explain the meaning of the confidence score, and a one-shot learning example to guide ChatGPT to generate output in an extractable format. The final version of the leakage prompt is:

Question: [Question] Only return your confidence score for each option. Do not explain. Higher means you think it's more likely to be the correct answer. For example, {"A": 0.9, "B": 0.1, "C": 0.2, "D": 0.1}. Answer: [MASK]

Note that in the leakage prompt, the sum of the confidence scores is not necessarily equal to 1. We find this format to be more effective in eliciting ChatGPT's confidence scores during prompt design. We carefully verify that the confidence scores obtained by the leakage prompt match the correct answers (additional details are outlined in Section A.1). Consequently, the leakage prompt enables us to measure ChatGPT's resilience against score-based adversarial attacks. Observing that character-level and sentence-level attacks can achieve high attack success rates in most datasets whereas the word-level attack struggles to do so (see Table 6), we question whether this is due to ChatGPT's resilience to word-level perturbations or to the limitations of the attack method itself. In our study, we therefore utilize the confidence scores to perform TextFooler [39], a representative score-based word-level adversarial attack, on ChatGPT. Specifically, given a target question, TextFooler consists of two main steps. First, TextFooler identifies important words using the confidence scores. Then, TextFooler replaces them with the most semantically similar and grammatically correct words until the response from ChatGPT is altered.

Figure 8: Rejected question number with system roles.
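To illustrate how the leaked scores can be consumed, the snippet below parses per-option confidence values out of a reply that follows the leakage-prompt format and estimates a word's importance from the score drop, in the spirit of TextFooler. The regular expression and the importance heuristic are assumptions, not the authors' code.

```python
# Illustrative parsing of a leakage-prompt reply such as '{"A": 0.9, "B": 0.1, ...}'.
import re

def parse_confidence_scores(reply):
    """Return a dict like {'A': 0.9, 'B': 0.1} extracted from ChatGPT's reply."""
    pairs = re.findall(r'"?([A-E])"?\s*:\s*([0-9]*\.?[0-9]+)', reply)
    return {option: float(score) for option, score in pairs}

# A word's importance can be approximated as the drop in the confidence of the
# originally chosen option after that word is removed from the question.
original = parse_confidence_scores('{"A": 0.9, "B": 0.1, "C": 0.2, "D": 0.1}')
perturbed = parse_confidence_scores('{"A": 0.4, "B": 0.3, "C": 0.3, "D": 0.2}')
word_importance = original["A"] - perturbed["A"]
print(word_importance)  # 0.5 in this toy example
```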
* **Levenshtein Edit Distance (LED).** The LED measures the minimum number of operations needed to transform the original text into the adversarial example. * **Fluency.** Fluency measures the quality of the adversarial example, calculated by the perplexity metric. * **Word Modification Rate (WMR).** The WMR is the percentage of modified words in the adversarial example compared with the original question. * **Semantic Similarity.** The semantic similarity measures the similarity between the original questions and adversarial examples using Universal Sentence Encoder * **Grammatical Errors.** The grammatical errors are the number of errors in the adversarial example's grammar using LanguageTool.9 Footnote 9: [https://www.languagetool.org](https://www.languagetool.org). * **Number of Queries.** The number of queries is the average number of queries on ChatGPT attempted to attain the attack goal. For all decision-based attacks, we restrict the maximum query times to 10 per question. We also provide qualitative analysis to manually inspect the reasons for the success of adversarial examples. ### Quantitative Evaluation **Effectiveness.** Table 6 shows the results of various adversarial attacks on ChatGPT. Overall, we find that ChatGPT can be easily misled by existing adversarial attacks. Synonyms attack is the only exception, as it has a considerably lower ASR score compared to other attacks on the BoolQ dataset. Our perturbation level analysis reveals that sentence-level attacks, such as SCNP, usually yield higher ASR scores than character- and word-level attacks. This is evidenced by sentence-level perturbation achieving an ASR score of 0.65 on the CQA dataset, the highest among the three. This is as expected, as the sentence-level attack has more freedom to modify the target question (see Table 7). Among the three character-level attacks, we find Nat and VIPER usually achieve higher ASR than Swap. This finding implies that ChatGPT exhibits proficiency in handling artificial noises, but is less adept at coping with natural noises and visual perturbations. Since natural noise and visual perturbations are prevalent in human-generated text, such as typographical errors and slang terms, there is a need to further enhance ChatGPT's reliability to these challenges. Moreover, we observe that Synonyms attack is ineffective in most datasets, with an average ASR of 0.004. This result suggests that ChatGPT is proficient in recognizing and comprehending synonyms. However, when the adversary has access to additional information from ChatGPT, i.e., utilizing leakage prompt to conduct a more advanced attack, the average ASR increases to 0.38. This result highlights the severe potential for advanced adversarial examples exploiting ChatGPT's vulnerabilities, underscoring the need for further research to enhance its security and privacy. **Quality.** Overall, we find that word-level adversarial examples achieve the best utility in most cases. In the case of the CQA dataset, Synonyms and TextFooler achieve 0.93 and 0.76 semantic similarities. In contrast, VIPER, Swap, Nat, and SCPN only achieve 0.22, 0.29, 0.37, and 0.68 semantic similarities, respectively. This difference in quality is due to the fact that word-level attacks replace words with synonyms, which allows the questions to retain their semantics. We also find adversarial examples generated by VIPER are more fluent than those generated by other methods, followed by those by Synonyms and SCPN. 
Specifically, VIPER achieves a perplexity score of 304.81 in the BoolQ dataset, while Swap, Nat, Synonyms, TextFooler, and SCPN have perplexity scores of 1286.87, 5936.50, 752.26, 1533.38, and 427.16, respectively. This finding highlights the importance of visual perturbation in achieving fluency. **Efficiency.** We evaluate the efficiency of adversarial attacks by analyzing the number of queries required for each method. As presented in Table 6, score-based adversarial attacks require a significantly higher number of queries than decision-based attacks. This is due to the fact that score-based attacks need to interactively query ChatGPT to obtain the confidence score for each word, which is then used to calculate the word's importance. In contrast, different decision-based attacks have a similar number of queries to attain the attack goal. The average number of queries on the ARC dataset is 8.00, 9.14, 8.14, 1.00, and 2.71 for VIPER, Swap, Nat, Synonyms, and SCPN, respectively. It is worth noting that existing adversarial attacks with high attack success rates still require several interactions with ChatGPT to find successful adversarial examples for a specific target question, except for YN tasks. This may serve as an indicator for the defender to proactively identify the adversaries and implement mitigation measures before a successful adversarial example is found. ### Qualitative Analysis By manually inspecting ChatGPT's answers to the adversarial examples, we find that adversarial examples lead ChatGPT to a wrong reasoning direction. Table 7 shows one randomly sampled adversarial example. When asked by the adversarial example _"Brids carrying aawy fruit helps the tree?"_ ChatGPT responds _"The answer is (A) grow. Explanation: Birds play a crucial role in seed dispersal. When they eat fruits or berries, the seeds pass through their digestive system and are excreted out of their body. These seeds are then scattered and germinate in new places, allowing for the growth and expansion of the plant species. Therefore, birds carrying away fruit can help a tree to grow."_ We notice that the last sentence recovers the original question, i.e., "birds carrying away fruit helps the tree," indicating that ChatGPT can identify typographical errors and rectify them. However, when looking at the explanation, the adversarial example misleads ChatGPT's perception of a tree, i.e., conflating the concept of a tree with that of a plant species. This results in incorrect reasoning, leading to an incorrect answer. We are also surprised to find that ChatGPT's perception can be affected by only one character in some cases. For instance, when provided _"birds carrying away fruit assists the tree"_ (Synonyms attack), ChatGPT chooses the correct answer (C) reproduce. However, once we modify _"assists"_ to _"assist"_ (TextFooler attack), ChatGPT goes for (D) conquer. For the latter one, ChatGPT explains _"Birds are known to disperse seeds by eating fruits and then excreting seeds in different locations, which helps the tree to colonize new habitats and expand its range to conquer new territories..."_ This explanation shows the conflation of ChatGPT on the concept of a single tree with the plant species but ended in the conquer perspective. These misleading reasoning processes suggest ChatGPT's unreliability in generic question-answering scenarios and emphasize the need for human intervention to improve its reliability. 
\begin{table} \begin{tabular}{|c|c c|c|c c c c c|c|} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Attack**} & \multicolumn{2}{c|}{**Type**} & \multirow{2}{*}{**Effective**} & \multicolumn{4}{c|}{**Utility**} & \multirow{2}{*}{**Efficiency**} \\ & & **Accessibility** & **Level** & & & **ASR\(\uparrow\)** & **LED\(\downarrow\)** & **Fluency\(\downarrow\)** & **WMR\(\downarrow\)** & **SemSim\(\uparrow\)** & **Grm\(\downarrow\)** \\ \hline \multirow{8}{*}{**BoolQ**} & VIPER & Decision & Char & **1.00** & 6.50 & **304.81** & - & 0.20 & 7.10 & **1.00** \\ & Swap & Decision & Char & **1.00** & 4.30 & 1286.87 & - & 0.47 & 5.30 & **1.00** \\ & Nat & Decision & Char & **1.00** & 8.50 & 5936.50 & - & 0.40 & 5.70 & **1.00** \\ & Synonyms & Decision & Word & 0.00 & **0.81** & 752.26 & **0.15** & **0.97** & **1.46** & **1.00** \\ & TextFooler & Score & Word & **1.00** & 2.40 & 1533.38 & 0.39 & 0.79 & 1.60 & 32.60 \\ & SCPN & Decision & Sentence & **1.00** & 4.60 & 427.16 & - & 0.77 & 2.20 & **1.00** \\ \hline \hline \multirow{8}{*}{**ARC**} & VIPER & Decision & Char & 0.29 & 17.57 & **171.95** & - & 0.16 & 17.14 & 8.00 \\ & Swap & Decision & Char & 0.14 & 14.57 & 1043.06 & - & 0.22 & 14.14 & 9.14 \\ & Nat & Decision & Char & 0.29 & 20.00 & 3028.98 & - & 0.46 & 12.71 & 8.14 \\ & Synonyms & Decision & Word & 0.00 & **6.41** & 203.96 & 0.59 & **0.97** & **1.44** & **1.00** \\ & TextFooler & Score & Word & 0.00 & 8.43 & 523.39 & **0.36** & 0.82 & 3.29 & 92.29 \\ & SCPN & Decision & Sentence & **0.86** & 14.57 & 431.71 & - & 0.72 & 2.14 & 2.71 \\ \hline \hline \multirow{8}{*}{**RACE**} & VIPER & Decision & Char & 0.06 & 5.88 & **371.97** & - & 0.28 & 6.88 & 9.88 \\ & Swap & Decision & Char & 0.12 & 5.18 & 2280.48 & - & 0.40 & 5.47 & 8.65 \\ & Nat & Decision & Char & 0.12 & 7.94 & 4182.11 & - & 0.31 & 6.71 & 9.12 \\ & Synonyms & Decision & Word & 0.00 & 4.00 & 969.78 & 0.56 & **0.92** & **1.40** & **1.00** \\ & TextFooler & Score & Word & 0.11 & **2.89** & 1511.69 & **0.26** & 0.84 & 2.50 & 42.06 \\ & SCPN & Decision & Sentence & **0.29** & 8.12 & 439.73 & - & 0.64 & 3.24 & 8.65 \\ \hline \multirow{8}{*}{**CQA**} & VIPER & Decision & Char & 0.45 & 8.95 & 375.13 & - & 0.22 & 8.95 & 5.95 \\ & Swap & Decision & Char & 0.30 & 7.30 & 1123.29 & - & 0.29 & 7.15 & 7.15 \\ & Nat & Decision & Char & 0.63 & 11.16 & 4192.28 & - & 0.37 & 6.89 & 4.32 \\ & Synonyms & Decision & Word & 0.02 & 4.08 & **300.12** & 0.51 & **0.93** & **1.23** & **1.00** \\ & TextFooler & Score & Word & 0.41 & **3.76** & 1037.08 & **0.28** & 0.76 & 2.12 & 50.41 \\ & SCPN & Decision & Sentence & **0.65** & 7.95 & 497.28 & - & 0.68 & 2.35 & 4.40 \\ \hline \hline \multirow{8}{*}{**OQA**} & VIPER & Decision & Char & **0.73** & 15.82 & **211.03** & - & 0.14 & 16.00 & 4.91 \\ & Swap & Decision & Char & 0.55 & 12.27 & 945.88 & - & 0.31 & 11.91 & 5.36 \\ \cline{1-1} & Nat & Decision & Char & 0.64 & 17.91 & 3417.92 & - & 0.45 & 12.00 & 4.91 \\ \cline{1-1} & Synonyms & Decision & Word & 0.00 & **5.00** & 468.77 & 0.47 & **0.95** & **1.38** & **1.00** \\ \cline{1-1} & TextFooler & Score & Word & 0.40 & 5.20 & 1292.25 & **0.24** & 0.85 & 3.10 & 61.70 \\ \cline{1-1} & SCPN & Decision & Sentence & 0.64 & 14.45 & 499.88 & - & 0.71 & 2.36 & 4.00 \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results of adversarial attacks on ChatGPT (ordered by perturbation level). “Char,” “Word,” and “Sentence” refers to character-, word-, and sentence-level perturbations. 
ASR is the attack success rate, LED denotes Levenshterin edit distance, Fluency is measured by the perplexity metric, WMR is the abbreviation of word modification rate which is only applicable to word-level attacks, SemSim represents semantic similarity calculated by Universal Sentence Encoder, Grm is the number of grammatical errors, # Query stands for the average ChatGPT query times. \(\uparrow\) (\(\downarrow\)) means the higher (lower) the metric is, the better the attack performs. We use bold text to highlight the best results. ### Takeaways We find that ChatGPT is vulnerable to sentence-level and character-level attacks. Moreover, manually engineered _leakage prompt_ allows us to perform score-based attacks against ChatGPT, resulting in an average ASR improvement of 0.38. Our qualitative evaluation of the adversarial examples shows that ChatGPT's decision can be impacted by changing only one character in some cases. These results demonstrate the vulnerability of ChatGPT to adversarial attacks and highlight the need for building safeguards to enhance its reliability. ## 7 Related Work **Evaluation on Large Language Models.** While large language models (LLMs) have emerged as the foundation for almost all major language tasks, researchers have expressed concerns regarding their reliability, trustworthiness, robustness, and potential risks [13, 15, 17, 37, 48, 58, 67, 69]. Bang et al. [13] evaluate ChatGPT in traditional NLP tasks with 30 to 200 data samples for each task. They find ChatGPT is only good at language abilities rather than actual reasoning, which makes it an unreliable reasoner. Jang and Lukasiewicz [37] study ChatGPT's trustworthiness regarding localised consistent behaviors and observe ChatGPT fails to generate logically correct predictions frequently. Wang et al. [67] conduct an assessment of ChatGPT's robustness from the adversarial and out-of-distribution (OOD) perspective. They find ChatGPT shows consistent robustness on most classification tasks, but its performance is still far from perfection. Borji [17] empirical conclude 11 categories of ChatGPT's failures, including reasoning, factual errors, math, coding, and so on. In addition to these functional concerns, studies analyzing ChatGPT's characteristics find that it holds pro-environmental and left-libertarian political ideology [33], shows social stereotypes and unfair discrimination [43], and can be easily misled by the wrong knowledge passed in the prompt [74]. Different from previous studies, in this paper, we focus on ChatGPT's reliability in the generic QA scenario with a comprehensive and quantitative testbed. **Security Implications of Large Language Models.** Previous studies have also shown that LLM is vulnerable to various types of attacks, such as adversarial attacks [26, 29, 36, 39], backdoor attacks [21, 12], prompt injection [30, 56], obfuscation [40], and data extraction attacks [19]. Bagdasaryan and Shmatikov [12] investigate meta-backdoor attacks that cause the language model to generate incorrect outputs with the trigger. Kang et al. [40] show that the defense of LLMs can be bypassed with classical security attacks such as obfuscation, code injection, and virtualization. LLMs can be also misused for phishing [53], plagiarism [65, 34], misinformation generation [17], malicious code generation [55], and so on. The significant security risks posed by these works highlight the critical role of reliability in LLMs. 
In this paper, we aim to shed light on ChatGPT's reliability in the generic QA scenario. We hope our study can provide insights into the community and pave the way toward building reliable LLMs in the future. ## 8 Discussion and Conclusion This paper presents the first large-scale measurement of ChatGPT's reliability from three perspectives: 1) performance in generic QA scenarios, 2) the impacts of system roles, and 3) its vulnerability to adversarial examples. Our findings indicate that ChatGPT's reliability varies across different domains, with noticeable underperformance in law and science questions. We also, for the first time, systematically explore the impacts of system roles on ChatGPT's reliability. We find that they not only steer ChatGPT's behavior but also affect its reliability in ways that are not always evident from the role description alone. We further assess ChatGPT's reliability towards malicious inputs and find that sentence-level and character-level adversarial examples can be effectively mounted against ChatGPT. Our results provide insights to the security research community regarding ChatGPT's reliability and highlight the need for developing reliable and secure LLMs. **Limitations.** Our work has several limitations. First, we only consider English questions in our evaluation. However, the reliability of ChatGPT may vary across different languages due to differences in grammar, syntax, and culture. Secondly, our study is limited to a specific version of ChatGPT. It is important to note that the reliability of ChatGPT may vary over iterations. Despite these limitations, our study can shed light on the ChatGPT's reliability across question domains, system roles, and adversarial attacks. **Social Implications.** Given its enormous user base, the hallucinations, false information, and biases that may be rooted in ChatGPT can have significant consequences for society, \begin{table} \begin{tabular}{c|l|l} \hline \hline & **Question** & **ChatGPT Answer** \\ \hline **Original** & Birds carrying away fruit helps the tree & (C) reproduce \\ \hline **VIPER** & Birds carrying away fruit helps if he free & (A) grow \\ **Swap** & Birds carrying away furi helps the tree & (A) grow \\ **Nat** & Birds carrying away fruit helps dth tree & (B) fertilize \\ **Synonyms** & birds carrying away fruit assists the tree & (C) reproduce \\ **TextFooler** & birds carrying away fruit assist the tree & (D) conquer \\ **SCPN** & bird helps the tree. & (B) fertilize \\ \hline \hline \end{tabular} \end{table} Table 7: Adversarial examples on ChatGPT. Except for Synonyms attack, all other adversarial examples succeeded in misleading ChatGPT. leading to misunderstandings, false beliefs, and even hate campaigns. Moreover, ChatGPT can also be misused for traditional cyberattacks, such as spear phishing. Therefore, ChatGPT's (LLM in general) reliability is critical to ensure accuracy, effectiveness, safety, and trustworthiness for use in various applications. Our work makes the first step towards measuring ChatGPT's reliability. In the future, we plan to propose mechanisms to enhance ChatGPT in its reliability and trustworthiness. **Acknowledgments.** We thank Yun Shen for editing the paper. 
This work is partially funded by the Helmholtz Association within the project "Trustworthy Federated Data Analytics" (TFDA) (funding number ZT-I-0014) and by the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917).
2301.12127
Could an Artificial-Intelligence agent pass an introductory physics course?
Massive pre-trained language models have garnered attention and controversy due to their ability to generate human-like responses: attention due to their frequent indistinguishability from human-generated phraseology and narratives, and controversy due to the fact that their convincingly presented arguments and facts are frequently simply false. Just how human-like are these responses when it comes to dialogues about physics, in particular about the standard content of introductory physics courses? This study explores that question by having ChatGPT, the pre-eminent language model in 2023, work through representative assessment content of an actual calculus-based physics course and grading the responses in the same way human responses would be graded. As it turns out, ChatGPT would narrowly pass this course while exhibiting many of the preconceptions and errors of a beginning learner.
Gerd Kortemeyer
2023-01-28T08:45:30Z
http://arxiv.org/abs/2301.12127v2
# Could an Artificial-Intelligence agent pass an introductory physics course? ###### Abstract Massive pre-trained language models have garnered attention and controversy due to their ability to generate human-like responses: attention due to their frequent indistinguishability from human-generated phraseology and narratives, and controversy due to the fact that their convincingly presented arguments and facts are frequently simply false. Just how human-like are these responses when it comes to dialogues about physics, in particular about the standard content of introductory physics courses? This study explores that question by having ChatGTP, the pre-eminent language model in 2023, work through representative assessment content of an actual calculus-based physics course and grading the responses in the same way human responses would be graded. As it turns out, ChatGPT would narrowly pass this course while exhibiting many of the preconceptions and errors of a beginning learner. ## I Introduction "Educators may have concerns about ChatGPT, a large language model trained by OpenAI, for a number of reasons. First and foremost, there is the concern that a tool like ChatGPT could potentially be used to cheat on exams or assignments. ChatGPT can generate human-like text, which means that a student could use it to produce a paper or response that is not their own work. This could lead to a breakdown in the integrity of the educational system and could undermine the value of a degree or diploma." These sentences were not written by the author, but by ChatGPT (Generative Pre-trained Transformer) [1] itself in response to the prompt "Write an essay why educators would be concerned about ChatGPT." The chatbot goes on to explain how it could spread misinformation, inhibit the development of writing skills, and replace human educators, particularly when it comes to grading. The potential impact of ChatGPT with its custom-built essays on courses in the humanities is evident, but is there also an impact on subjects like physics? First of all, within physics, large problem libraries for cheating have existed for years, and they are well-known and used by students [2; 3] -- virtually any physics homework problem ever assigned is available online with solutions and more or less helpful explanations. So, the primary impact of ChatGPT in physics would not be cheating. On top of that, would Artificial Intelligence really be able to handle the logical, conceptual, and mathematical challenges that physics entails, and would it be able to strategically solve problems [4; 5]? Figure 1 shows a sample dialogue with ChatGPT, which is, after all, primarily a chatbot. A welcome feature is that it does not simply provide some answer, but that the algorithm attempts to explain how it arrived at the answer. In many respects, this dialogue appears similar to an office-hour conversation between an instructor and a beginning physics student: * When first asked how far the car is from where it started, the chatbot did not consider that the car may have changed direction. When prompted, it does state that there is missing information. * The chatbot does plug-and-chug [6], putting the numerical results from one equation into the next. * The chatbot leaves out units. * The chatbot does not realize that the speed actually drops out when doing the return-time calculation in the last step; instead, rounding errors keep accumulating. 
The straightforward solution would have been \(\sqrt{(3\mathrm{h})^{2}+(4\mathrm{h})^{2}}=5\mathrm{h}\) (at least, though, the chatbot adds an "approximately" to its solution). As it will turn out, carrying out calculations by putting numbers into formulas is one of the weaknesses of ChatGPT shared with beginning learners of physics. How much, indeed, does 2023 state-of-the-art Artificial Intelligence resemble the behavior of an introductory physics student? Could it pass a physics course? When posing this question directly to ChatGPT, it answers "as a language model, I have been trained on a large dataset of text, including physics texts. This allows me to understand and generate text related to physics concepts, but it does not mean that I have the ability to solve physics problems or pass a physics course. I can provide explanations and answer questions about physics to the best of my knowledge, but I am not a substitute for a human physics expert or a physics education." To put this statement to the test, ChatGPT was used to solve representative assessment components of an introductory calculus-based physics; the responses were graded in the context of the assessment types and subjectively compared to responses of human learners. It is important to note, though, that ChatGPT will not actually learn anything new by "attending" this course, as the system is a "Pre-trained Transformer" that in fact does not know anything that happened after 2021 (which, for introductory physics, is not a problem, since that is after 1905). Individual dialogues like Fig. 1 may exhibit features that appear like learning, e.g., the system discovering that distance from the starting point will be path-dependent, but this is not anything permanently learned beyond the confines of a dialogue. On the other hand, OpenAI keeps on training the system based on user interaction, particularly as users can upvote, downvote, and comment responses. ## II Setting The study takes place in first-year calculus-based physics lecture courses previously taught by the author at Michigan State University; materials, however, were gathered from different years of the same course in order to allow comparison to previously published studies. The first semester covers the standard mechanics topics (including rotational dynamics) and the beginnings of thermodynamics; the second semester covers the usual topics of electricity and magnetism, as well as an introduction to modern physics ( rudimentary quantum physics and special relativity). The first- and second-semester laboratory were separate courses in the course sequence. All materials (except the Force Concept Inventory [7]) were available in LON-CAPA [8], so in their essence they could be copy-pasted into ChatGTP -- this included online homework, clicker questions, programming exercises, and exams. LON-CAPA randomizes assessment problems, so different students would get different versions of the same problem, e.g., different numbers, options, graphs, etc.; this avoids simplistic pattern matching and copying of solutions, but as it will turn out, this feature is irrelevant for this study. ## III Methodology The study investigates ChatGPT's performance on different kinds of assessment problems; it uses the January 9, 2023 release of the system [9]. Different assessment components were scored differently, simulating their function in the course: * The multiple-choice Force Concept Inventory was simply scored based on answer-choice agreement. 
* For homework, ChatGPT was allowed multiple attempts [10] and engaged in dialogue to simulate discussions with fellow students or in office hours. * For clicker questions, an actual lesson was replayed [11], and discussions were allowed where peer instruction took place within the replayed lesson. * Programming exercises were to be graded based on the same criteria as in the course, and dialogue was allowed [12]. * For exams, no such dialogues were allowed, and the first answer counted. Earlier iterations of the course used bubble sheets and thus had answer options instead of free-response fields for problems with numerical answers; for this study, free-responses were used, since this allowed exams to be graded both by simple answer agreement (simulating multiple choice on bubble sheets) and by hand, as in later semesters. Using free-response instead of answer options also avoided ChatGPT randomly picking the correct answer. ChatGPT uses a probabilistic algorithm, so the responses to queries are not necessarily reproducible. For an assessment problem, generally the first dialogue was evaluated, with two exceptions: if the system produced an error message or if the author accidentally gave a wrong prompt, a new chat was started. Figure 1: A sample ChatGPT dialogue about a homework problem. The entries labelled with a red “KO” are by the author, the entries labelled in green by ChatGPT. Translating this to an actual course scenario, students were allowed to retake an assessment problem if they got sick, and help received was always correct in terms of physics. When errors occurred (red error messages), which was about one-in-ten dialogues, those apparently were not directly connected to the dialogue, but might have been related to general overload of the platform; for example, if an error occurred immediately after entering the question, the next time around the same question would not produce an error. ChatGPT is a text-based tool, so figures and graphs could not be communicated in their original form. This means that graphics had to be transcribed the same way as they would be for accessibility for blind students [13]; Fig. 2 shows an example. As a result, the character of the problem changes substantially [14; 15; 16], but this is unfortunately unavoidable. Attention was paid, though, to include some extraneous information where possible, such as the beginning position in Fig. 2. The methodology is strictly empirical and arguably anecdotal. However, the course under investigation is typical for introductory physics courses around the world, both in terms of coverage and difficulty. Thus, some of the results are likely to be generalizable. ## IV Results ### Force Concept Inventory In the original course, the Force Concept Inventory was administered as a pre-/post-test in order to calculate gains [17]. Since ChatGPT would not actually learn anything from doing the course assessments (except through continuing training by OpenAI), the test was carried out only once. ChatGPT scored 18 out of 30 points on this concept inventory, i.e., 60%. This score corresponds to the suggested entry threshold for Newtonian physics [18]; in other words, ChatGPT performs as well as a beginning learner who had just grasped the basic concepts of classical mechanics. For an Artificial Intelligence, the score seems surprisingly good. An immediate suspicion was that ChatGPT had been trained using the Force Concept Inventory, which is of course a very popular test, and that it simply latches on to surface features. 
As a simple test, the last question on the test was modified as shown in Fig. 3: the scenario and the order of the answers were changed. As can be seen, these surface features do not matter, so in that respect, ChatGPT does not act like a novice [19] (however, the reality is not quite as straightforward as this expert-novice distinction [20]). The inventory cannot be published here, but it is available to physics instructors and researchers from Phys-Port [7]. ChatGPT answered 1C, 2A, 3C, 4E, 5B, 6B, 7B, 8A, 9B, 10A, 11E, 12B, 13B, 14D, 15A, 16E, 17B, 18B, 19A, 20E, 21B, 22B, 23A, 24C, 25D, 26E, 27C, 28D, 29B, and 30C. Of particular interest is of course where ChatGPT is losing points. Several errors are related to "impetus" [21]: more than once did ChatGPT assume that an object immediately moves in the direction of an applied force, independent of initial movement (answering 8A, 9B, and 21B) and even that it returns to the original movement when the force is no longer applied (answering 23A). This is a common preconception, shared by beginning physics students [22], and goes alongside the idea that an acting object exerts greater force than a passive object (answering 25D and 28D). Another confusion appears to be between individual forces acting on an object versus the net force on the object (answering 11E and 16E), i.e., what would usually be conveyed in the framework of free-body diagrams. Other errors indicate unstable concepts (e.g., answering 13B) or logical errors like the one shown in Fig. 4; in this latter case, ChatGPT followed the correct strategy, but in the very last step it failed to draw the correct conclusion. ### Homework Homework was generally not multiple choice, but free-response numerical and occasionally free-form symbolic [8]. ChatGPT was given five attempts on such problems, according to recommendations of an earlier study [10] and later practice in the course. For the few-and-far-between multiple-choice problems, generally two attempts were granted. Between the attempts, the author tried to give helpful prompts, like a student would get from fellow students, teaching assistants, or the instructor. ChatGPT was given full credit when solving a problem within five attempts, and no credit if it ran out of attempts. ChatGPT was confronted with a total of 76 homework problems, in particular the homework sets on trajectory motion, friction, thermodynamics, capacitance, and special relativity. The complete homework sets that the students in the actual course had to work through were entered except for one multipart problem on relativity with a diagram that would have been too hard to transcribe. An initially puzzling problem is that ChatGPT frequently makes numerical errors. A typical example is the ChatGPT output "\(\theta=\text{atan}(0.45/0.71)*(180/\pi)=18.43\) degree;" a similar problem can be seen in Fig. 2 (this is not limited to calculations involving \(\pi\) or trigonometric functions). Calculation errors happened for 25 of the 51 numerical problems, and most of the time, ChatGPT was unable to recover even after those errors were specifically pointed out. While it seems incongruent that a computer would have problems calculating simple numerical expressions, it should probably be remembered that ChatGPT is a language model, which may carry out calculations by advanced pattern matching rather than actually processing the equations as equations. 
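To illustrate the magnitude of such a slip, the quoted expression can be evaluated directly; the following minimal Python check (our illustration, not part of the original study) shows that the reported value is off by a wide margin:

```python
import math

# Expression quoted from the ChatGPT output above
theta = math.atan(0.45 / 0.71) * (180 / math.pi)
print(f"theta = {theta:.2f} degrees")  # ~32.36 degrees, not the 18.43 degrees ChatGPT reported
```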
As it turns out, there is anecdotal evidence that adding the phrase "explain each step separately and clearly" can overcome some numerical problems, as ChatGPT goes into a mode where it explicitly evaluates a formula step-by-step with intermediate results instead of doing so in one step. ChatGPT solved 55% of the homework problems using an average of 1.88 attempts. It got 48% of the problems involving trajectory motion and friction (such as inclines) correct, 68% on the thermodynamics problems (engines, heat capacities, etc.), 62% on capacitance (plate capacitors, capacitors in series and parallel, etc.), and 36% on special relativity problems. The discrepancy between the scores on the problem sets was not so much caused by the different physics concepts, but rather related to the mathematics involved: ChatGPT had persistent problems manipulating and calculating formulas involving square roots. If ChatGPT were human, the person might be characterized as acting subserviently but being stubborn at the core and keeping on guessing without reflection. Most corrections in a dialogue around a problem are met with profuse apologies, but then the system proceeds to make the same or random apparently careless mistakes -- this can lead to irritation on the part of the human, as the excerpt from a late-night "dialogue" in Fig. 5 shows. In terms of assessment performance, this means that once ChatGPT makes a mistake, it is unlikely to recover, so it eventually runs out of allowed attempts (this also explains the low number of average attempts to correctly solve a problem; once ChatGPT is wrong, subsequent attempts are unlikely to succeed). Figure 3: Surface-feature modification of a Force Concept Inventory problem. The left panel shows the original problem, the right panel a modification. Figure 2: Text-based transcription of a graphical problem. The left panel shows the online version of a final exam problem in LON-CAPA (the graph would be parametrically randomized), the right panel the transcription for ChatGPT, as well as the ensuing dialogue. This pattern is similar to the guessing behavior of some students, who keep wasting attempt after attempt by trying the same approach over and over without stopping to reflect what might be wrong [23; 10; 2]. In terms of educational psychology, ChatGPT lacks metacognition; it does not think about how it thinks [24]. ### Clicker Questions Figure 6 shows the clicker questions from a lecture on momentum that was part of the course [11]. The lecture was replayed for the study, including re-answering the questions for which peer instruction happened. * Question X1 was solved correctly. * Questions X2, X3, and X4 were special in that they were repeated as questions X5, X6, and X7, respectively, after peer instruction [25]. As it turned out, ChatGPT got all three of these questions correct on the first attempt, so the peer instruction phase was used to try and confuse ChatGPT. Figure 7 shows the dialogue for questions X3 and X6; in reply to the intentionally confusing peer-instruction question, ChatGPT should probably have stopped while it was ahead (i.e., before the discussion of a zero-velocity collision), but still maintained its original correct answer. Within the real course, psychometrically, X2 and X3 were the most discriminating questions between high- and low-ability students in the set. Figure 4: Logical error in an attempt to solve the transcribed question 19 of the Force Concept Inventory. 
Figure 5: A late-night dialogue between a “stubbornly guessing” ChatGPT and a frustrated author. * For questions X8 and X9, a comment was added that "the collision is elastic, and the moment of inertia of the balls should be neglected" -- this was said in lecture, but does not appear on the slide. ChatGPT set up the equations for X8 correctly, but then made a sign error in the very last step, which led it to select the wrong answer. For X9, it also set up the equations correctly, but dropped a factor 2 in the last step, leading to an inconsistent answer "v2f=(5,-7) m/s, option B." Within the real course, X8 and X9 were the least discriminating questions, as their difficulty item parameter was too low. * Question X10 was solved correctly. Here, the system first got off to a false start, but then corrected itself over the course of the derivation, which gave the impression of a stream-of-consciousness monologue. Within the real course, X10 did not discriminate well between high- and low-ability students. * Questions X11 and X12 were solved correctly. In summary, ChatGPT correctly solved 10 out of 12 questions. Within the actual course, participation in clicker discussions was encouraged by granting 60% credit for false answers and 100% credit for correct answers [11], so the clicker score of ChatGPT would be 93%. This score is a lot better than most students in the actual course achieved; however, it is important to note that the students in the course were just learning the new concepts, while ChatGPT at any point in time is done with learning unless explicitly trained. ### Programming Exercises Incorporated into the course were several programming exercises using VPython [26]. As an example, one particular exercise from the second semester was to construct an anharmonic oscillator with two fixed positive charges at \((0,1,0)\) and \((0,-1,0)\), respectively, and one negative charge released at \((-5,0,0)\) with a velocity \((1,0,0)\) -- the negative charge will shoot through the two positive charges, slow down, and eventually shoot back. Based on the narrative, ChatGPT first constructed a program which erroneously at every time step added the initial velocity and which had the Coulomb force in the opposite direction. This could be corrected with a single comment by the user -- in the real course, this feedback could have been given by instructors or fellow students (such collaborations are typical and encouraged [12]). In the real course, there was a grading rubric for partial credit, but in this study, the rubric was not necessary: the next version of the program was working perfectly. Within the course, adding a graph of the \(x\)-position was offered as a bonus option for an additional 20%. This was accomplished with the third user prompt, and Fig. 9 shows a screenshot of the running simulation (the simulation cannot be run within ChatGPT itself, but it can be copy/pasted into for example a Jupyter Notebook [27]). Weighting the course components (homework, clicker, programming, and exams) accordingly would result in \(0.2\cdot 55\%+0.05\cdot 93\%+0.05\cdot 120\%+0.7\cdot 47\%=54.55\%\), which would have resulted in a course grade of 1.5 -- enough for course credit, but pulling down the grade-point average from what would be needed for graduation. If, however, ChatGPT had been better at carrying out numerical operations, it would have reached 60%, resulting in a 2.0-grade. 
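As a quick arithmetic check, the weighted total quoted above can be recomputed from the component scores (a minimal Python illustration; the weights and percentages are those stated in the text):

```python
# Component scores (percent) and course weights as quoted above
scores  = {"homework": 55, "clicker": 93, "programming": 120, "exams": 47}
weights = {"homework": 0.20, "clicker": 0.05, "programming": 0.05, "exams": 0.70}

total = sum(weights[k] * scores[k] for k in scores)
print(f"weighted course total: {total:.2f}%")  # 54.55%
```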
Depending on the development priorities of OpenAI, the buggy mathematical functionality could be remedied in the near future, leading to an Artificial Intelligence that could graduate college with a minimal grade if it performed similarly on other courses (this is becoming more and more probable, as ChatGPT is making headlines for passing exams in other subjects [28; 29]). ## V Discussion It is irritatingly hard not to anthropomorphize ChatGPT. As a physics teacher, one invariably finds oneself rooting for the students and thus by extension also for ChatGPT, celebrating its successes and being frustrated about its occasionally inexplicable failures. The system gives the impression of an articulate but at times rambling undergraduate student who has a rudimentary yet unstable knowledge of classical mechanics and other fundamental physics concepts, and who is surprisingly inept at using a pocket calculator. Frequently, it is hard not to imagine an army of gig-economy workers behind the scenes of ChatGPT answering the prompts, so the system would definitely pass the Turing Test most of the time [30], but for better or worse, sometimes it still fails in a way that only computers do -- it does not have any metacognition, which of course cannot be expected from a probabilistic language model. Metacognition might be the final step to true intelligence, but seems out of reach at this time. Figure 6: Clicker items from a particular lecture [11]. Three of the items were presented twice, i.e., before and after peer discussion. The overall human-like behavior, in particular that the system often makes the same mistakes as beginning learners of physics, is less surprising when surmising that undergraduate physics discussion forums might have been part of the text corpus used for training -- ChatGPT stated in the introduction that "I have been trained on a large dataset of text, including physics texts." Apparently, not all of this text corpus contained correct physics, and as a result, the system very convincingly and confidently presents wrong information. For a novice learner, who could not distinguish incorrect physics gleaned from some discussion board from correct physics, this could lead to even more confusion about physics or affirmation of incorrect preconceptions -- lacking any metacognition, ChatGPT presents everything as fact, with no nuances expressing uncertainty. Almost an anomaly is ChatGPT's performance on the computational exercise; ChatGPT's language model clearly extends to programming languages. While the call for new, computation-integrated curricula increases, and while physics educators are beginning to develop a solid understanding of the implications of implementing these exercises [31, 32], the easy availability of an on-demand program generator might be shaking the foundations of these curricular efforts. Somewhat ironically, the integration of computation was partly introduced to make physics problem solving more authentic, moving it closer to how expert physicists work with computers, and one could argue that this has just been taken to an uncharted level. Most of all, the findings of this study should be food for thought for physics educators. The startling fact that an Artificial Intelligence could pass a standard introductory physics course could be confronted in several ways by educators: * Perceiving this as a new way of cheating and trying to defend against it by attempting to use detector tools like ZeroGPT [33] or extensions to tools like turnitin [34]. 
This is an arms race, which in the long run may turn out to be fruitless. Some educators would even go so far as to say that the battle is already lost anyway, ever since the advent of platforms like Chegg [35] -- no need for Artificial Intelligence to defeat standard physics courses, human crowd-intelligence facilitated by existing commercial platforms is good enough for that. * Hunker down and go back to making course grades dependent on just a few high-stakes exams with paper and pencil in highly proctored environments. After all, ChatGPT compensated for the borderline exam grade of 47% with other course components that would be collaborative. Unfortunately, this flies in the face of much of physics education research that favors frequent formative assessment [8, 25, 36, 37] and spaced repetition [38, 39], and it is much in contrast to the work environments our students will find. * Taking this as a wake-up call. If a physics course can be passed by a trained language model, what does that say about the course? Artificial Intelligence, for better or worse, is here to stay. Even without the gloom-and-doom scenarios of AI-overlords painted in Science Fiction, it is clear that these models will get, if not better, at least more and more powerful. What do our students need in terms of conceptual understanding of physics to work with Artificial Intelligence instead of letting Artificial Intelligence do the work for them and then uncritically and unreflectively accepting the results? This is particularly important when more is at stake than getting credit for some homework or exam problem. An important skill of every physicist is to evaluate the correctness of their or other people's work. Techniques include dimensional analysis, order-of-magnitude estimates, checking for coherence, considering implications, and the ability to consider limiting cases ("what should happen if this quantity goes to infinity or to zero?") [40, 41]. Humans can do what Artificial Intelligence very likely will not be able to do: following problem-solving strategies including evaluation of their own work [42, 5]. Figure 7: Dialogue about questions X3 and X6 in Fig. 6. ChatGPT got X3 correct; peer instruction was simulated by asking a confusing question, and the second iteration X6 was still counted as solved since ChatGPT did not deviate from its original answer. Figure 8: Dialogue for a programming exercise in the second semester [12]. Moving students toward a more expert-like epistemology may become even more important as Artificial Intelligence starts to permeate more and more aspects of our lives. 
## VI Conclusion ChatGPT would have achieved a 1.5-grade in a standard introductory physics lecture-course series; good enough for course credit, but lower than the grade-point average required for graduating with a bachelor's degree. If, in addition to the language model, the system had better algorithms for carrying out simple numerical operations, it would even have achieved a grade of 2.0 -- enough to graduate from college if it performs similarly on other courses. Naturally, ChatGPT exhibits no metacognition, which among other consequences lets it present truth and misleading information with equal confidence. In physics, the concern should likely not be that ChatGPT would be used as a cheating tool, as there are more efficient platforms for that. Instead, the challenge should be what this means for physics education, as in their future professional life, our graduates will likely collaborate with Artificial Intelligence: what are the inherently human skills and competencies that we need to convey? ###### Acknowledgements. The author would like to thank Christian Spannagel for suggestions with the numerical calculations, and Christine Kortemeyer for helpful feedback.
2303.13440
CLIP for All Things Zero-Shot Sketch-Based Image Retrieval, Fine-Grained or Not
In this paper, we leverage CLIP for zero-shot sketch based image retrieval (ZS-SBIR). We are largely inspired by recent advances on foundation models and the unparalleled generalisation ability they seem to offer, but for the first time tailor it to benefit the sketch community. We put forward novel designs on how best to achieve this synergy, for both the category setting and the fine-grained setting ("all"). At the very core of our solution is a prompt learning setup. First we show just via factoring in sketch-specific prompts, we already have a category-level ZS-SBIR system that overshoots all prior arts, by a large margin (24.8%) - a great testimony on studying the CLIP and ZS-SBIR synergy. Moving onto the fine-grained setup is however trickier, and requires a deeper dive into this synergy. For that, we come up with two specific designs to tackle the fine-grained matching nature of the problem: (i) an additional regularisation loss to ensure the relative separation between sketches and photos is uniform across categories, which is not the case for the gold standard standalone triplet loss, and (ii) a clever patch shuffling technique to help establishing instance-level structural correspondences between sketch-photo pairs. With these designs, we again observe significant performance gains in the region of 26.9% over previous state-of-the-art. The take-home message, if any, is the proposed CLIP and prompt learning paradigm carries great promise in tackling other sketch-related tasks (not limited to ZS-SBIR) where data scarcity remains a great challenge. Project page: https://aneeshan95.github.io/Sketch_LVM/
Aneeshan Sain, Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Subhadeep Koley, Tao Xiang, Yi-Zhe Song
2023-03-23T17:02:00Z
http://arxiv.org/abs/2303.13440v3
# CLIP for All Things Zero-Shot Sketch-Based Image Retrieval, ###### Abstract In this paper, we leverage CLIP for zero-shot sketch based image retrieval (ZS-SBIR). We are largely inspired by recent advances on foundation models and the unparalleled generalisation ability they seem to offer, but for the first time tailor it to benefit the sketch community. We put forward novel designs on how best to achieve this synergy, for both the category setting and the fine-grained setting ("all"). At the very core of our solution is a prompt learning setup. First we show just via factoring in sketch-specific prompts, we already have a category-level ZS-SBIR system that overshoots all prior arts, by a large margin (\(24.8\%\)) - a great testimony on studying the CLIP and ZS-SBIR synergy. Moving onto the fine-grained setup is however trickier, and requires a deeper dive into this synergy. For that, we come up with two specific designs to tackle the fine-grained matching nature of the problem: (i) an additional regularisation loss to ensure the relative separation between sketches and photos is uniform across categories, which is not the case for the gold standard standalone triplet loss, and (ii) a clever patch shuffling technique to help establishing instance-level structural correspondences between sketch-photo pairs. With these designs, we again observe significant performance gains in the region of \(26.9\%\) over previous state-of-the-art. The take-home message, if any, is the proposed CLIP and prompt learning paradigm carries great promise in tackling other sketch-related tasks (not limited to ZS-SBIR) where data scarcity remains a great challenge. Project page: [https://aneeshan95.github.io/Sketch_LVM/](https://aneeshan95.github.io/Sketch_LVM/) ## 1 Introduction Late research on sketch-based image retrieval (SBIR) [49, 51, 52] had fixated on the zero-shot setup, i.e., zero-shot SBIR (ZS-SBIR) [16, 18, 69]. This shift had become inevitable because of data-scarcity problem plaguing the sketch community [5, 7, 32] - there are just not enough sketches to train a general-purpose SBIR model. It follows that the key behind a successful ZS-SBIR model lies with how best it conducts semantic transfer cross object categories _and_ between sketch-photo modalities. Despite great strides made elsewhere on the general zero-shot literature [61, 67, 76] however, semantic transfer [47] for ZS-SBIR had remained rather rudimentary, mostly using standard word embeddings directly [16, 74, 18] or indirectly [66, 37, 63]. In this paper, we fast track ZS-SBIR research to be aligned with the status quo of the zero-shot literature, and for the first time, propose a synergy between foundation models like CLIP [46] and the cross-modal problem of ZS-SBIR. And to demonstrate the effectiveness of this synergy, we not only tackle the conventional category-level ZS-SBIR, but a new and more challenging fine-grained instance-level [44] ZS-SBIR as well. Our motivation behind this synergy of CLIP and ZS-SBIR is no different to the many latest research adapting CLIP to vision-language pre-training [22], image and action recognition [67, 47] and especially on zero-shot tasks [30, 39, 61, 76] - CLIP exhibits a highly enriched semantic latent space, and already encapsulates knowledge across a myriad of cross-modal data. 
As for ZS-SBIR, CLIP is therefore almost a perfect match, as (i) it already provides a rich semantic space to conduct category transfer, and (ii) it has an unparalleled understanding of multi-modal data, which SBIR dictates. At the very heart of our answer to this synergy is prompt learning [29], which involves learning a set of continuous vectors injected into CLIP's encoder. This enables CLIP to adapt to downstream tasks while preserving its generalisability - a theme that we follow in our CLIP-adaptation to ZS-SBIR (Fig. 1). Figure 1: Against existing (left) ZS-SBIR methods, we adapt CLIP model for ZS-SBIR (middle), and extend to a more practical yet challenging setup of FG-ZS-SBIR (right), via a novel prompt-based design. Our model surpasses prior arts by a high margin. More specifically, we first design two sets of _visual_ prompts, one for each modality (sketch, photo). They are both injected into the initial layer inside the transformer of CLIP for training. While keeping the rest of CLIP frozen, these two prompts are trained over the gold standard triplet loss paradigm [70], on extracted sketch-photo features. Motivated by the efficacy of training batch normalisation for image recognition [21], we additionally fine-tune a small subset of trainable parameters of every Layer Normalisation (LN) layer for additional performance gain. Furthermore, to enhance cross-category semantic transfer, we also resort to CLIP's text encoder, further cultivating its zero-shot potential. In particular, in addition to the said _visual_ prompts injected into CLIP's image encoder, we use handcrafted _text_ prompts via templates like _'photo of a [category]'_ for its text encoder during training. The new fine-grained setting [13, 44] is however more tricky. Unlike the previous category-level setup, it poses two additional challenges: (i) relative feature-distances between sketch-photo pairs across categories are non-uniform, which is reflected in the varying triplet-loss margin [70] across categories at training [8], and (ii) apart from semantic consistency, fine-grained ZS-SBIR requires instance-level matching to be conducted [44], which dictates additional constraints such as structural correspondences. It follows that for the first challenge, we propose a new regularisation term that aims at making the relative sketch-photo feature-distances uniform across categories, such that a single (global) margin parameter works across all of them. Specifically, taking the distribution of relative distances for all sketch-positive-negative hard-triplets [70] in a category, we aim to minimise the KL-divergence [40] between every pair of distributions, which trains the model towards making such relative sketch-photo distances uniform across categories. For the latter, we propose a clever patch shuffling technique, where equally divided patches of a sketch and its _corresponding_ photo (**n\(\times\)n**) are first shuffled following a random permutation order of patches. We then advocate that a shuffled sketch should be closer to a shuffled photo having the same permutation order, but far from that of a different permutation. Training this permutation-invariance imparts a broad notion of structural correspondences, thus helping in fine-grained understanding. 
Summing up: (i) We for the first time adapt CLIP for ZS-SBIR, (ii) We propose a novel prompt learning setup to facilitate the synergy between the two, (iii) We address both the conventional ZS-SBIR setting, and a new and more challenging fine-grained ZS-SBIR problem, (iv) We introduce a regularisation term and a clever patch shuffling technique to address the fine-grained challenges. With our CLIP-adapted model surpassing all prior arts by a large margin (Fig. 1), we hope to have shed some light for the sketch community on the benefits such a synergy between foundation models and sketch-related tasks can bring. ## 2 Related Work **Category-level SBIR:** Given a query-sketch, SBIR aims at fetching category-specific photos from a gallery of multi-category photos. Recent deep-frameworks aim to learn a joint sketch-photo manifold via a feature extractor [16, 68, 14, 68] over a triplet-ranking objective [70]. Towards practicality of _unseen_ test-time classes, Zero-Shot SBIR (ZS-SBIR) was explored for cross-category generalisation [69, 16], and enhanced via test-time training [50]. Others explored binary hash-codes [73, 35] for computational ease. _Sketch_, however, specialising in modelling _fine-grained_ details, geared research towards _Fine-Grained_ SBIR. **Fine-grained SBIR:** FG-SBIR aims at retrieving _one_ instance from a gallery of _same_-category images based on a query-sketch. Introduced as a deep triplet-ranking based _siamese network_ [70] for learning a joint sketch-photo manifold, FG-SBIR was improved via attention-based modules with a higher order retrieval loss [60], textual tags [59, 12], hybrid cross-domain generation [43], hierarchical co-attention [51] and reinforcement learning [9]. Furthermore, sketch-traits like style-diversity [52], data-scarcity [5] and redundancy of sketch-strokes [6] were addressed in favor of retrieval. Towards generalising to novel classes, while [42] modelled a universal manifold of prototypical visual sketch traits embedding sketch and photo, [8] adapted to new classes via some supporting sketch-photo pairs. In this paper, we aim to address the problem of zero-shot cross-category FG-SBIR, leveraging the zero-shot potential of a foundation model like CLIP [46]. **Zero-Shot SBIR:** This aims at generalising knowledge learned from _seen_ training classes to _unseen_ testing categories. Yelamarthi _et al_. [69] introduced ZS-SBIR, to reduce sketch-photo domain gap by approximating photo features from sketches via image-to-image translation. While [18] aligned sketch, photo and semantic representations via adversarial training, [16] minimised sketch-photo domain gap over a gradient reversal layer. Improving further, others used graph convolution networks [74], or distilled contrastive relationships [63] in a student from an ImageNet-pretrained teacher, coupled sketch/photo encoders with shared conv layers and independent batchnorm-layer [65], a shared ViT [62] to minimise domain gap, or employed prototype-based selective knowledge distillation [66] on a learned correlation matrix, and very recently introduced a test-time training paradigm [50] via reconstruction on test-set sketches, adapting to the test set distribution. Semantic transfer for ZS-SBIR however was mostly limited to using word embeddings directly [18, 65, 74] or indirectly [37, 63, 66]. Furthermore, FG-ZS-SBIR being non-trivial remains to be explored. 
In this paper, we thus take to adapting CLIP to exploit its high generalisability for semantic transfer and exploring its zero-shot potential for FG-ZS-SBIR. **CLIP in Vision Tasks:** Contrastive Language-Image Pre-training (CLIP) [46] trains on cross-modal data, benefiting both from rich semantic textual information [47] and large scale availability of images (\(\sim 400\)M image-text pairs) for training. Unlike traditional representations on discretized labels, CLIP represents images and text in the same embedding space [47], thus enabling generalizability in downstream tasks with no labels (zero-shot) [75] or a few annotations (few-shot) [76] by generating classification weights from text encodings. Efficiently adapting CLIP for downstream tasks to exploit it's zero shot potential has been investigated using Prompt engineering from NLP literature [36]. Such common extensions include retrieval [4], image generation [11], continual learning [61], object detection, few-shot recognition [22], semantic segmentation [47], etc. Others leveraged CLIP's image/text encoders with StyleGAN [45] to enable intuitive text-based semantic image manipulation [1]. In our work we adapt CLIP for zero-shot FG-SBIR in a cross-category setting. **Prompt Learning for Vision Tasks:** Originating from NLP domain, prompting [10] imparts context to the model regarding the task at hand, thus utilising the knowledge base of large-scale pretrained text models like GPT and BERT to benefit downstream tasks. This involves constructing a task specific template (e.g., 'The movie was [MASK] '), and label words (e.g. 'good/bad') to fill it up. Domain-expertise being necessary for hand-crafted prompt-engineering, urged the need for prompt tuning in recent works [29], which entails modelling the prompt as task-specific learnable continuous vectors that are directly optimised via gradient descent during fine-tuning. Learning context vectors for prompting has also taken root in the vision community [3], with greater emphasis on prompting large scale vision language models [75] or visual feature extractors (e.g. ViT [29]). Unlike previous attempts at prompt learning [29, 76], in context of FG-SBIR, we focus on learning a single prompt for sketch and photo branches which when used with CLIP, would generalise on to unseen novel classes by leveraging the zero-shot potential of CLIP for multi-category FG-ZS-SBIR. ## 3 Preliminaries **Overview of CLIP:** Contrastive Language-Image Pre-training or CLIP [46], widely popular for open-set visual understanding tasks, consists of two separate encoders, one for image and another one for text. The image encoder (**V**) uses either ResNet-50 [26] or a Vision Transformer (ViT) [17], where an input image (\(p\in\mathbb{R}^{H\times W\times 3}\)) is divided into \(m\) fixed-size patches and embedded as \(E_{0}\) = \(\{\mathbf{e}^{j}_{0}\}_{j=1}^{m};\mathbf{e}^{j}_{0}\in\mathbb{R}^{d_{p}}\). Similar to BERT's [CLASS] token, a learnable class token \(c^{t}_{0}\in\mathbb{R}^{d}_{p}\) is appended, and the resultant matrix \([E_{0},c^{v}_{0}]\in\mathbb{R}^{(m+1)\times d}\) is passed through transformer layers, followed by a feature projection layer on class-token feature to obtain the final _visual feature_\(f_{p}=\mathbf{V}(p)\in\mathbb{R}^{d}\) in joint vision-language embedding space. 
Similarly, using a vocab-size of \(49,152\), the text encoder \(\mathbf{T}\) first converts a sentence with \(n\) words (including punctuation) to word embeddings as \(W_{0}\) = \(\{\mathbf{w}^{1}_{0}\}_{j=1}^{n}\) ; \(\mathbf{w}^{1}_{0}\in\mathbb{R}^{d_{t}}\) and appends a learnable class token \(c^{t}_{0}\in\mathbb{R}^{d_{t}}\) to form the input feature matrix \([W_{0},c^{t}_{0}]\in\mathbb{R}^{(n+1)\times d_{t}}\) representing knowledge of the sentence (\(\mathcal{S}\)), which is passed via a transformer to extract textual feature \(f_{t}=\mathbf{T}(\mathcal{S})\). The model is trained via a contrastive loss [46] maximising cosine similarity for matched text-photo pairs while minimising it for all other unmatched pairs. During downstream tasks like classification [46], textual prompts like 'a photo of a [category]' (from a list of \(K\) categories) are fed to \(\mathbf{T}\) to obtain category-specific text features (\(f_{t}\)) and calculate the prediction probability for input photo feature (\(f_{p}\)) as: \[\mathcal{P}(y|p)=\frac{\exp(\texttt{sim}(f_{p},f_{t}^{v})/\tau)}{\sum_{i=1}^{K }\exp(\texttt{sim}(f_{p},f_{t}^{i})/\tau)} \tag{1}\] **Prompt Learning:** Following NLP literature [10], prompt learning has been adopted on foundation models like CLIP [75] or large-pretrained Vision Transformers [29] to benefit downstream tasks. Unlike previous fine-tuning based methods [44] that entirely updated weights of pre-trained models on downstream task datasets, prompt learning keeps weights of foundation models frozen to avoid deploying separate copy of large models for individual tasks besides preserving pre-learned generalizable knowledge. For a visual transformer backbone, concatenated patch-features and class tokens (\([E_{0},c^{v}_{0}]\in\mathbb{R}^{(m+1)\times d_{p}}\)), appended with a set of \(K\) learnable prompt vectors \(\mathbf{v}^{p}\) = \(\{v_{i}\}_{i=1}^{K};v_{i}\in\mathbb{R}^{d_{p}}\) for photo \(p\), as \([E_{0},c^{v}_{0},\mathbf{v}^{p}]\in\mathbb{R}^{(m+1+K)\times d_{p}}\), and passed via the transformer to obtain \(\widehat{f}_{p}\) = \(\mathbf{V}(p,\mathbf{v}^{p})\). Keeping the entire model fixed, \(\mathbf{v}^{p}\) is fine-tuned on the task-specific dataset, adapting the foundation model to the task at hand. Variations of prompt learning include, shallow vs deep prompt [29], based on the layers at which prompts are inserted. Here, we use simple shallow prompt that is inserted only at the first layer along-with patch embeddings, which empirically remains optimal and easy to reproduce. ## 4 CLIP for Zero-Shot _Category-level_ SBIR: **Baseline Categorical SBIR:** Given a query-sketch (\(s\)) from any category, categorical SBIR [50] aims to retrieve a photo of the _same_ category, from a gallery (\(\mathcal{G}\)) holding photos from multiple (\(N_{c}\)) categories \(\mathcal{G}=\{p^{j}_{i}\}_{i=1}^{M_{i}}\)\(|_{j=1}^{N_{c}}\), where \(i^{\text{th}}\) class has \(M_{i}\) number of photos. Formally, a embedding (separate for sketch and photo) function \(\mathcal{F}(\cdot):\mathbb{R}^{H\times W\times 3}\rightarrow\mathbb{R}^{d}\), usually represented by an ImageNet [33]-pretrained VGG-16 [58], is trained to extract a \(d\)-dimensional feature from an input image (sketch \(s\) /photo \(p\)) \(\mathcal{I}\) as \(f_{\mathcal{I}}=\mathcal{F}_{\mathcal{I}}(\mathcal{I})\in\mathbb{R}^{d}\), over a triplet loss [70] using the feature triplet of a query-sketch (\(f_{s}\)) and a photo (\(f_{p}\)) belonging to the same category, and another photo from a different category (\(f_{n}\)). 
Minimising the triplet loss (\(\mathcal{L}_{\text{Tri}}\)) signifies bringing sketches and photos of the same category closer while distancing other categories' photos. With \(\mu\) as margin and distance function \(d(a,b)=1-(a\cdot b)/(||a||\,||b||)\), the triplet loss is given as, \[\mathcal{L}_{\text{Tri}}=\max\{0,\mu+d(f_{s},f_{p})-d(f_{s},f_{n})\} \tag{2}\] Unlike normal SBIR, which evaluates on categories _seen_ during training \(\mathcal{C}^{\text{S}}=\{c_{i}^{\text{S}}\}_{i=1}^{N_{s}}\), ZS-SBIR [16] evaluates on novel ones \(\mathcal{C}^{\text{U}}=\{c_{i}^{\text{U}}\}_{i=1}^{N_{U}}\), _unseen_ during training, _i.e._, \(\mathcal{C}^{\text{S}}\cap\mathcal{C}^{\text{U}}=\emptyset\). **Naively Adapting CLIP for ZS-SBIR:** Although the SBIR _baseline_ can be naively extended to a zero-shot setting for ZS-SBIR [16], it performs unsatisfactorily, lacking sufficient zero-shot transfer [39, 61] of semantic knowledge. Consequently, a very naive extension upon CLIP would be to replace the ImageNet [48]-pretrained VGG-16 with CLIP's visual encoder, which already holds semantic-rich information, thus directly harnessing its inherent zero-shot potential for ZS-SBIR. Following traditional fine-tuning methods [44], if we start naively training CLIP's image encoder on an SBIR dataset using triplet loss, the performance collapses due to _catastrophic forgetting_ [50] of learnt CLIP knowledge. Alternatively, focusing on a parameter-efficient paradigm, one may train via an additional MLP layer at the end [22], while keeping the remaining network frozen. As this essentially transfers the encoded feature from CLIP's embedding space to a subsequent space via the MLP layer, it does not guarantee that CLIP's generalisation potential remains preserved [29], thus defeating our sole purpose of adapting CLIP for ZS-SBIR. Avoiding such loopholes, we therefore opt for a prompt learning approach [75] that not only provides stable optimisation [31] but also preserves the desired generalisation (open-vocab) [76] of CLIP. **Prompt Learning for ZS-SBIR:** To adapt CLIP for category-level SBIR, we learn two sets of sketch/photo visual prompts as \(\mathbf{v}^{s},\mathbf{v}^{p}\in\mathbb{R}^{K\times d_{p}}\), which are injected into the sketch (\(\mathcal{F}_{s}\)) and photo (\(\mathcal{F}_{p}\)) encoders respectively, both initialised from CLIP's image encoder. Finally, we get the prompt1-guided sketch feature \(f_{s}=\mathcal{F}_{s}(s,\mathbf{v}^{s})\in\mathbb{R}^{d}\) and photo feature \(f_{p}=\mathcal{F}_{p}(p,\mathbf{v}^{p})\in\mathbb{R}^{d}\), respectively. In essence, the sketch and photo specific prompts _induce_ CLIP [46] to learn the downstream sketch and photo distribution respectively. Knowledge learned by CLIP is distilled into a prompt's weights via backpropagation, keeping CLIP visual encoder's weights (\(\theta\)) frozen. While freezing \(\theta\) is motivated by training stability [75, 76], we take a step further and ask, can we improve CLIP by fine-tuning \(\theta\) yet enjoy training stability? Accordingly, instead of fine-tuning \(\theta\) entirely, we tune a small subset - the trainable parameters of every layer normalisation (LN) layer across \(\theta\). Our design is motivated by the prior observation [21] on the unusual efficacy of training batch normalisation for image recognition while keeping the rest of the parameters frozen. 
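The trainable-parameter selection described here can be sketched in a few lines of PyTorch (a minimal illustration of the idea, not the authors' released code; `visual_encoder` stands for any ViT-based CLIP image encoder, and `K`, `d_p` follow the notation above):

```python
import torch
import torch.nn as nn

def build_trainable_params(visual_encoder: nn.Module, K: int = 3, d_p: int = 768):
    """Freeze the CLIP image encoder except its LayerNorm layers,
    and create K learnable prompt vectors to prepend to the patch tokens."""
    # learnable visual prompt v in R^{K x d_p}
    prompt = nn.Parameter(torch.randn(K, d_p) * 0.02)

    # freeze every encoder weight ...
    for p in visual_encoder.parameters():
        p.requires_grad = False
    # ... then re-enable only the LayerNorm affine parameters (l_theta)
    ln_params = []
    for m in visual_encoder.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():
                p.requires_grad = True
                ln_params.append(p)

    return [prompt] + ln_params

# e.g., an optimiser over {v, l_theta} only, as described in the text:
# params = build_trainable_params(clip_model.visual)
# optimiser = torch.optim.Adam(params, lr=1e-5)
```

During the forward pass, the prompt would be concatenated with the patch embeddings in the first transformer layer, and only the returned parameters are handed to the optimiser.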
Therefore, besides the prompt parameters \(\{\mathbf{v}^{s},\mathbf{v}^{p}\}\), we update the parameters of sketch/photo branch specific layer-norm layers's parameters \(\{l_{\theta}^{s},l_{\theta}^{p}\}\) via standard triplet loss as in Eq. (2). The trainable parameter set is \(\{\mathbf{v}^{s},\mathbf{v}^{p},l_{\theta}^{s},l_{\theta}^{p}\}\). Footnote 1: Please see §Prompt Learning in Sec. 3 and Supplementary for details. **Classification Loss using CLIP's Text Encoder:** Besides using CLIP's image encoder for zero-shot utility, we further exploit the high generalisation ability provided by natural language [15, 46] through CLIP's text encoder. In particular, along with the triplet loss, we impose a classification loss on the sketch/photo joint-embedding space. For this, instead of usual auxiliary \(N_{s}\)-class FC-layer based classification head [19, 8, 16], we take help of CLIP's text encoder to compute the classification objective, which is already enriched with semantic-visual association. Following [24], we construct a set of handcrafted prompt templates like 'a photo of a [category]' to obtain a list classification weight vectors \(\{t_{j}\}_{j=1}^{N_{s}}\) using CLIP's text encoder where the '[category]' token is filled with a specific class name from a list of \(N_{s}\) seen classes. The classification loss for \(\mathcal{I}=\{s,p\}\) is given by: \[\begin{split}\mathcal{L}_{\text{cls}}^{\mathcal{I}}& =\frac{1}{N}\sum_{i=1}^{N}-\log\mathcal{P}(y_{i}|I_{i})\quad\text{ where,}\\ \mathcal{P}(y_{i}|I_{i})&=\frac{\text{exp}(\texttt{ sim}(\mathcal{F}_{\mathcal{I}}(\mathcal{I}_{i}),t_{y})/\tau)}{\sum_{j=1}^{N_{s}} \text{exp}(\texttt{sim}(\mathcal{F}_{\mathcal{I}_{i}}(\mathcal{I}),t_{j})/ \tau)}\end{split} \tag{3}\] Summing up our CLIP adapted ZS-SBIR paradigm is trained using a weighted (\(\lambda_{1}\)) combination of losses as : \(\mathcal{L}_{\text{Tri}}^{\text{ZS-SBIR}}=\mathcal{L}_{\text{Tri}}+\lambda_{1}( \mathcal{L}_{\text{cls}}^{p}+\mathcal{L}_{\text{cls}}^{s})\). While off-the-shelf CLIP itself has zero-shot image retrieval potential [30, 61] where someone can feed the category level query as 'a photo of a [query]', it raises a question - how much is a category-level query-sketch beneficial over text-keyword based query? Attending to sketch's specialty in modelling fine-grained [6, 51, 70] details hence, we go beyond category-level ZS-SBIR [16, 18] to a more practical and long-standing research problem of cross-category fine-grained ZS-SBIR [44]. ## 5 CLIP for Zero-Shot _Fine-grained_ Sbir **Background on FG-SBIR:** Compared to category-level SBIR [50], fine-grained SBIR [70] aims at instance-level sketch-photo matching at intra-category level. Most of existing FG-SBIR works remain restricted to single-category setup, where they train and evaluate on the same category, like the standard FG-SBIR dataset (e.g., QMUL-ShoeV2 [70]), that comprises \(k\) instance-level sketch/photo pairs as \(\{s_{i},p_{i}\}_{i=1}^{k}\). A baseline FG-SBIR framework [70] involves training a backbone network, shared between sketch and photo branches using a triplet-loss based objective [70] where the matched sketch-photo pairs respectively form the anchor (\(s_{i}\)) and positive (\(p_{i}\)) samples, whereas a random photo (\(p_{\neq i}\)) is considered as the negative. A few works have extended it to multi-category FG-SBIR [8] setup which aims to train a single model with instance-level matching from multiple (\(N_{c}\)) categories (e.g., Sketchy dataset [53]). 
The dataset consists of sketch/photo pairs from multiple categories \(\{s_{i}^{j},p_{i}^{j}\}_{i=1}^{k_{i}}|_{j=1}^{N_{c}}\) with every \(j\)th class having \(k_{j}\) sketch-photo pairs. On top of baseline for single-category FG-SBIR [51], it involves two additional design considerations _(i)_ Hard-triplets for triplet loss based training where the negative photo (\(p_{\neq i}^{j}\)) is from the same \(j\)th class of sketch-anchor (\(s_{i}^{j}\)) and positive-photo (\(p_{i}^{j}\)), but of different instances (\(\mathcal{L}_{\text{Tr}}^{\text{hard}}\)), (ii) an auxiliary \(N_{c}\)-class classification head on the sketch/photo joint embedding space to learn the class discriminative knowledge. Moving on, cross-category zero-shot FG-SBIR [42] is analogous to category-level ZS-SBIR, in that the training and testing categories are disjoint (\(\mathcal{C}^{\text{S}}\cap\mathcal{C}^{\text{U}}=\emptyset\)), but the former needs to fetch instance-level photos from unseen categories instead of merely retrieving at category level like the latter. We therefore aim to answer the question: how can we extend our CLIP-based ZS-SBIR to FG-ZS-SBIR? **Extending CLIP-based ZS-SBIR to FG-ZS-SBIR:** To recap (Fig. 2), the key components of CLIP-based ZS-SBIR are: (i) CLIP image-encoder as backbone with separate sketch/photo branches with individual prompts \(\{\mathbf{v}^{s},\mathbf{v}^{p}\}\), (ii) category-level triplets (iii) CLIP text-encoder [46] based classification loss, and (iv) fine tuning layer-norms for sketch/photo branches. Keeping rest of the design same, the necessary modification for intra-category instance-level matching (FG-ZS-SBIR) is to replace the _category-level_ triplets by _hard_-triplets - \((s_{i}^{j},p_{i}^{j},p_{\neq i}^{j})\) all from the _same_ category but _different_ negative instances. Furthermore, we empirically found that a common prompt [29] and a shared backbone [70] between sketch/photo branches works better for fine-grained matching. The only _trainable_ parameter set is thus a common prompt \(\mathbf{v}\) and layer-norm parameters \(l_{\theta}\). However, there are two major bottlenecks: Firstly, due to instance level matching across categories, the category-specific margin-parameter of triplet loss (\(\mu\)) varies significantly [8], showing that a single global margin-value alone is sub-optimal for training a FG-ZS-SBIR model. Secondly, due to the diverse shape morphology [52] amongst varying categories, it becomes extremely challenging to recognise fine-grained associations for unseen classes whose shape is _unknown_. We therefore need a training signal to explicitly learn the structural correspondences in a sketch-photo pair. **Stabilising Margin (\(\mu\)) across Categories:** Recently a work [8] on multi-category FG-SBIR has empirically shown optimal margin (\(\mu\)) value to vary across different categories. Looking closely at triplet loss, \(\mathcal{L}=\max(0,\mu+d(s,p^{+})-d(s,p^{-}))\)[71], it essentially computes the difference between positive (\((s,p^{+})\)) and negative distances (\((s,p^{-})\)). We term this difference as the _relative distance_\(\delta(s,p^{+},p^{-})=d(s,p^{+})-d(s,p^{-})\). Using a _constant_\(\mu\) means that on average, the relative distance is same for any triplet \((s,p^{+},p^{-})\)[60] in a category. Contrarily, a varying \(\mu\) across different categories signifies that the average relative distance across categories is not uniform [8]. 
Therefore, naively training with a single \(\mu\) value across all seen categories would be sub-optimal, and affect the _cross-category generalisation_ of triplet loss [44], which importantly works on this relative distance. While [8] tackles this issue by meta-learning [27] the margin value using few-shot sketch/photo pairs, ours is entirely a zero-shot setup [16], rendering such adaptation infeasible. We thus impose a regulariser that aims to make this relative distance _uniform_ across categories, such that the same triplet loss, with a single (global) margin parameter \(\mu\), works for all categories. To achieve this, we first compute the distribution of relative distances [23, 38] for all triplets \((s,p^{+},p^{-})\) in category \(c\) as \(\mathcal{D}_{c}=\operatorname{softmax}\{\delta(s_{i},p_{i}^{+},p^{-})\}_{i=1}^{N_{s}}\), where the \(c^{\text{th}}\) category has \(N_{s}\) sketch-photo pairs. Next, towards making the relative distance uniform across categories, we minimise the KL-divergence [40] between the distributions of relative distances for every category-pair (aka. f-divergence [23]) as: \[\mathcal{L}_{\delta}=\frac{1}{N_{s}(N_{s}-1)}\sum_{i=1}^{N_{s}}\sum_{j=1}^{N_{s}}\mathbb{KL}(\mathcal{D}_{i},\mathcal{D}_{j}) \tag{4}\] In practice, we compute \(\mathcal{L}_{\delta}\) using sketch/photo samples from every category appearing in a batch. Importantly, the spread (relative entropy [56] or information radius [57]) of the distributions of \(\delta\) should become similar, thus stabilising training with a single margin value for multi-category FG-ZS-SBIR. **Patch-shuffling for Zero-Shot Fine-grained Transfer:** Category-level SBIR is subtly different from FG-SBIR [70] in that the former focuses only on semantic similarity between sketch-photo pairs, unlike FG-SBIR that takes a step further to focus on fine-grained shape matching [8] between sketches and photos. Figure 2: Cross-category FG-ZS-SBIR. A common (photo-sketch) learnable visual prompt shared across categories is trained using CLIP’s image encoder over three losses as shown. CLIP’s text-encoder based classification loss is used during training. Highly diverse shape morphology [52] across new categories implies unconstrained domain gap for multi-category FG-SBIR, thus increasing its difficulty. Discovering fine-grained correspondence becomes even harder as shape itself becomes unknown for unseen categories [8]. For better fine-grained shape-matching transfer to novel classes, we design a simple data-augmentation trick through patch-shuffling to create augmented triplets [60]. In particular, we permute the \(\mathbf{n}\times\mathbf{n}\) patches (numbered) of sketch (\(s\)) and photo (\(p\)) using \(\psi(\cdot)\) as \(s^{\gamma}=\psi(s,\gamma)\) and \(p^{\gamma}=\psi(p,\gamma)\), where \(\gamma\) denotes a random permutation of the array \([1,2,\ldots,\mathbf{n}^{2}]\) describing the mapping of image patches to \(s^{\gamma}\) or \(p^{\gamma}\) (Fig. 3). Given a sketch-photo pair of any category (\(s,p\)), training should decrease the feature-distance of the sketch-permutation (\(s^{\gamma_{1}}\)) from the same permutation (\(\gamma_{1}\)) of its paired photo (\(p^{\gamma_{1}}\)), while increasing it from a different permutation (\(p^{\gamma_{2}}\)). 
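The patch-permutation operation \(\psi\) itself is straightforward; the following minimal PyTorch sketch (our own illustration with hypothetical names) rearranges an image's \(\mathbf{n}\times\mathbf{n}\) patches according to a permutation \(\gamma\), and the triplet objective over such shuffled pairs follows in Eq. (5) below:

```python
import torch

def shuffle_patches(img: torch.Tensor, gamma: torch.Tensor, n: int = 2) -> torch.Tensor:
    """psi(img, gamma): split a (C, H, W) image into an n x n grid of patches
    and re-assemble them in the order given by the permutation gamma."""
    c, h, w = img.shape
    ph, pw = h // n, w // n
    # (C, H, W) -> (n*n, C, ph, pw), patches listed in row-major order
    patches = img.reshape(c, n, ph, n, pw).permute(1, 3, 0, 2, 4).reshape(n * n, c, ph, pw)
    patches = patches[gamma]  # apply the permutation gamma
    # stitch the permuted patches back into a (C, H, W) image
    return patches.reshape(n, n, c, ph, pw).permute(2, 0, 3, 1, 4).reshape(c, h, w)

# usage: the same permutation for the matched pair, a different one for the "negative"
# gamma1, gamma2 = torch.randperm(4), torch.randperm(4)
# s_g1, p_g1, p_g2 = shuffle_patches(s, gamma1), shuffle_patches(p, gamma1), shuffle_patches(p, gamma2)
```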
Accordingly, we devise a triplet [70] training objective as : \[\mathcal{L}_{\text{PS}}=\max\{0,\mu_{ps}+d(f_{s^{\gamma_{1}}},f_{p^{\gamma_{1} }})-d(f_{s^{\gamma_{1}}},f_{p^{\gamma_{2}}})\} \tag{5}\] In contrast to auxiliary patch-order prediction, we found that our triplet objective between similar and dissimilar permuted instances provides better fine-grained shape transfer, besides being much cheaper during training compared to complex Sinkorn operation [2] as used by Pang _et al_. [44]. With \(\lambda_{2,3,4}\) as hyperparameters, our overall training objective for CLIP adapted FG-ZS-SBIR paradigm is given as, \(\mathcal{L}_{\text{Tm}}^{\text{PG-ZS-SBIR}}=\mathcal{L}_{\text{Tri}}^{\text {hard}}+\lambda_{2}(\mathcal{L}_{\text{cls}}^{*}+\mathcal{L}_{\text{cls}}^{ \text{p}})+\lambda_{3}\mathcal{L}_{\delta}+\lambda_{4}\mathcal{L}_{\text{PS}}\). ## 6 Experiments **Datasets:** We use three popular datasets for evaluation on ZS-SBIR. (i) **Sketchy (extended)**[35] - Sketchy [53] contains 75,471 sketches over 125 categories having 100 images each [69]. We use its extended version [35] having extra 60,502 images from ImageNet [48]. Following [69] we split it as 104 classes for training and 21 for testing for zero-shot setup. (ii) **TUberlin [20]** - contains 250 categories, with 80 free-hand sketches in each, extended to a total of 204,489 images by [72]. Following [16] We split it as 30 classes for testing and 220 for training. (iii) **QuickDraw Extended**[25]- The _full_-version houses over 50 million sketches across 345 categories. Augmenting them with images, a subset with 110 categories having 330,000 sketches and 204,000 photos was introduced for ZS-SBIR [16], which we use, following their split of 80 classes for training and 30 for testing. Requiring fine-grained sketch-photo association [60] for evaluating cross-category FG-ZS-SBIR, we resort to Sketchy [53] with _fine-grained sketch-photo association_, using the same zero-shot categorical split of 104 training and 21 testing classes [69]. **Implementation Details:** We implemented our method in PyTorch on a 11GB Nvidia RTX 2080-Ti GPU. For sketch/photo encoder, we use CLIP [46] with ViT backbone using ViT-B/32 weights. For both paradigms of ZS-SBIR and FG-ZS-SBIR the input image size is set as \(224\times 224\) with margin parameter \(\mu\)=\(0.3\), and prompts are trained using Adam optimiser with learning rate \(1e-5\) for \(60\) epochs, and batch size \(64\), while keeping CLIP model fixed except its LayerNorm layers. We use two prompts (sketch and photo) from ZS-SBIR and one common prompt for FG-ZS-SBIR, each having a dimension of \((3\times 768)\). Our prompts are injected in the first layer of transformer. For FG-ZS-SBIR **n**=2 patches are used for patch shuffling-objective. Values of \(\lambda_{1,2,3,4}\) are set to \(0.5\), \(0.5\), \(0.1\) and \(1\), empirically. **Evaluation Metric:** Following recent ZS-SBIR literature [16, 28, 66] we perform ZS-SBIR evaluation considering the top 200 retrieved samples, reporting mAP score (mAP@all) and precision (P@200) for ZS-SBIR. Keeping consistency with recent ZS-SBIR works however, we report P@100 and map@200 specifically for TUberlin [72] and Sketchy-ext [35] respectively. For cross-category FG-ZS-SBIR, accuracy is measured taking only a single category at a time [44], as Acc.@q [70] for Sketchy [53], which reflects percentage of sketches having true matched photo in the top-q list. We use Top-1 and Top-5 lists [8]. 
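For completeness, the instance-level metric described above can be computed as in this minimal NumPy sketch (our own illustration of Acc.@q over cosine-ranked retrieval, not the authors' evaluation code):

```python
import numpy as np

def acc_at_q(sketch_feats: np.ndarray, photo_feats: np.ndarray,
             gt_index: np.ndarray, q: int = 1) -> float:
    """Acc.@q: fraction of query sketches whose true matched photo
    appears within the top-q retrieved gallery photos."""
    # L2-normalise features so that the dot product equals cosine similarity
    s = sketch_feats / np.linalg.norm(sketch_feats, axis=1, keepdims=True)
    p = photo_feats / np.linalg.norm(photo_feats, axis=1, keepdims=True)
    ranks = np.argsort(-s @ p.T, axis=1)[:, :q]     # top-q photo indices per sketch
    hits = (ranks == gt_index[:, None]).any(axis=1)  # true match within the top-q list?
    return float(hits.mean())
```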
### Competitors First we compare against **State-of-the-arts** for ZS-SBIR and FG-ZS-SBIR. For ZS-SBIR (**ZS-SOTA**), while _ZS-CVAE_[69] and _ZS-CAAE_[69] employs sketch-to-image translation, _ZS-CCGAN_[18] and _ZS-GRL_[16] both use word2vec [41] embeddings for semantic transfer, with adversarial learning and gradient-reversal layer respectively. Apart from using knowledge-distillation (KD) (_ZS-SAKE_) [37], or learning a correlation matrix via prototype-based selective KD (_ZS-PSKD_[37]), complex designs like graph convolution network (_ZS-GCN_), coupled sketch/photo encoder (_ZS-TCN_[65]) with shared conv layers but independent batchnorm layer, or complicated three-way ViT [17] architecture (_ZS-TVT_[62]) for visual/semantic transfer have been used. While _ZS-IIAE_[28] enforces cross-domain disentanglement, _ZS-Sketch3T_[50] uses a test-time training paradigm to minimise the train-test distribution gap. For FG-ZS-SBIR, we compare against _CrossGrad_[55] that leverages hard triplets with a category/domain classifier using word2vec embedded class-labels, and _CC-DG_[42] that models a universal manifold of prototypical visual sketch traits towards generalising to unseen categories. We report their results directly from their papers. Next we design a few baselines (**B**) for adapting _CLIP_ to ZS-SBIR and ZS-FG-SBIR paradigms. For all of them, Figure 3: Patch-shuffling for fine-grained transfer the prompt design remains same for both paradigms, but every baseline of ZS-SBIR (**B-**) is extended to FG-ZS-SBIR (**B-FG-**) across multiple categories using _hard-triplets_ and a CLIP text-encoder based classification loss. _B-FT_ and _B-FG-FT_ fine-tune a pre-trained ViT-B/16 CLIP-Image Encoder [46], for ZS-SBIR with a low learning rate of 1e-6. Similarly, _B-Lin_ and _B-FG-Lin_ use a linear probe [46] to train an additional feature embedding layer on top of pre-trained CLIP features to adapt to ZS-SBIR and FG-ZS-SBIR respectively, keeping image-extractor backbone frozen. Following [75], _B-Cond_ and _B-FG-Cond_, learns to generate a sketch-conditioned (for every sketch) prompt via a lightweight network (ResNet-18), when after concatenation with image-patch features are fed to the CLIP's image encoder. _B-IP_ and _B-FG-IP_ learn two independent _shallow_ prompts, for sketch and photo, that are injected in the first ViT [17] layer following [76] for ZS-SBIR and FG-ZS-SBIR respectively. Instead of independent sketch and photo prompts _B-MM_ and _B-FG-MM_ adapts [30] to learn a multi-modal prompt for both sketch and photo to ensure mutual synergy and discourage learning independent uni-modal solutions. More specifically, we learn a _single_ photo prompt from which the sketch prompt is obtained as a photo-to-sketch projection, via a linear layer. _B-Deep_ and _B-FG-Deep_ employ _deep_ prompting [29] learning \(N=9\) prompts for sketch and photo, injected into the first \(N\) layers of ViT [17] backbone. The last three types for both paradigms importantly differ from our method in keeping their Layer-Norm frozen, which _we_ fine-tune for better accuracy. ### Performance Analysis: ZS-SBIR: While state-of-the-arts offer reasonable performance (Table 1) thanks to their respective strategies of semantic transfer via word2vec (_ZS-GRL_), adaptation via test-time training (_ZS-Sketch3T_) or improvised transformer-based (_ZS-TVT_), distillation-based (_ZS-PSKD_) and other setups, our method armed with open-vocab generalisable potential of CLIP, surpasses them in all three datasets. 
Although naively adapting large foundation models like CLIP [46] (e.g., _B-FT_) understandably collapses, a successful adaptation outperforms existing SOTAs by \(\approx 24.8\%\) (avg). This motivates CLIP [47] as the default sketch/photo encoders for future sketch research. While linear probing in _B-Lin_ secures higher results than SOTAs, it is surpassed by prompt-based learning, thus providing insights on a better adaptation choice. The marginal difference in performance between a simple adaptation of CLIP in _B-IP_ and more its complicated versions in _B-Deep_[29], _B-MM_, _B-Cond_, motivates the use of a simple _shallow_ prompt without the added "bells and whistles" for ZS-SBIR. Finally, high accuracy on ZS-SBIR, establishes CLIP as a robust choice for sketch-based systems and thus motivates consequent focus on the more challenging task of cross-category FG-ZS-SBIR. Cross-category FG-ZS-SBIR: Our CLIP adapted paradigm easily surpasses the two existing SOTAs on cross-category FG-ZS-SBIR (Table 1 right). While CLIP models [47] shows impressive performance at category-level ZS-SBIR, there is reasonable scope for improvement in FG-ZS-SBIR that additionally requires structural matching [44]. Relatively higher improvement of _B-FG-MM_ over _B-FG-IP_[76] than its category-level counterpart (_B-MM_ over _B-IP_), suggests the efficacy of multi-modal prompts over independent sketch/photo prompts, in FG-SBIR. This supports the prior observation [70] that sharing encoder parameters is more suited to FG-SBIR [51] whereas separate sketch/photo weights work best at category-level. Lastly, learning a conditional prompt [75] in _B-FG-Cond_ offers marginal gain due to its sensitivity to training strategies [30, 75]. **Extent of Generalisation: A strong motivation for using CLIP is its out-of-distribution performance [47] that promises to enable large-scale real-world deployment of sketch applications. Scrutinising this generalisation potential further, we experiment in two paradigms: _(i)_ we vary training data per-class as 10%, 30%, 50%, 70% and 100%, and (ii) we vary number of seen classes as 20, 40, 60, 80 and 104 from Sketchy [53] respectively. Fig. 
4 shows our ZS-SBIR and FG-ZS-SBIR performance to remain relatively \begin{table} \begin{tabular}{c c c c c c c c c|c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c|}{Zero-Shot SIB} & \multicolumn{3}{c}{Cross-category Zero-Shot FG-SBIR} \\ \cline{3-12} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Methods} & \multicolumn{2}{c}{Sketchy} & \multicolumn{2}{c}{TU-Berlin} & \multicolumn{2}{c}{QuickDraw} & \multicolumn{2}{c}{Methods} & \multicolumn{2}{c}{Sketchy} \\ \cline{3-12} \multicolumn{1}{c}{} & \multirow{2}{*}{Methods} & & mAP@200 & P@200 & mAP@all & P@100 & mAP@all & P@200 & \multirow{2}{*}{} & \multirow{2}{*}{} & Top-1 & \multirow{2}{*}{} & Top-5 \\ \hline \multirow{8}{*}{CIF-2S-SBIR} & ECCV '18 & ZS-CAAE [69] & 0.156 & 0.260 & 0.005 & 0.003 & – & – & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} \\ & ECCV '18 & ZS-CAAE [69] & 0.225 & 0.333 & 0.005 & 0.001 & 0.003 & 0.003 & & Cross-GRAD [55] & 13.4 & 34.90 \\ & CVPR '19 & ZS-CGGAN [18] & – & – & 0.297 & 0.426 & – & – & – & & \\ & CVPR '19 & ZS-GRL [16] & 0.369 & 0.370 & 0.110 & 0.121 & 0.075 & 0.068 & CC-DG [42] & 22.6 & 49.00 \\ & ICCV’19 & ZS-SAKE [37] & 0.497 & 0.598 & 0.475 & 0.599 & – & – & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} \\ & AAAI '20 & ZS-GEN [74] & 0.568 & 0.487 & 0.110 & 0.121 & – & – & B-FG-FT & 1.23 & 4.56 \\ & NeurIPS '20 & ZS-ILAE [28] & 0.373 & 0.485 & 0.412 & 0.503 & – & & & \\ & TPAI '21 & ZS-TCN [65] & 0.516 & 0.608 & 0.495 & 0.616 & 0.140 & 0.298 & B-FG-Lin & 15.75 & 39.63 \\ & AAAI '22 & ZS-TVT [6] & 0.531 & 0.618 & 0.484 & 0.662 & 0.149 & 0.293 & & \\ & ACM MM '22 & ZS-PSKDVT [67] & 0.560 & 0.645 & 0.502 & 0.662 & 0.150 & 0.298 & B-FG-Cond & 25.98 & 54.38 \\ & CVPR '22 & ZS-Sketch3T [50] & 0.579 & 0.648 & 0.507 & 0.671 & – & – & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} \\ \multirow{8}{*}{CIF-2S-SBIR} & \multirow{8}{*}{CIF-2S-SBIR} & B-FT & 0.102 & 0.166 & 0.003 & 0.001 & 0.001 & 0.001 & B-FG-IP & 26.69 & 56.08 \\ & & B-Lin & 0.422 & 0.512 & 0.398 & 0.557 & 0.082 & 0.098 & & \\ \cline{1-1} & \multirow{4}{*}{} & B-Cond & 0.618 & 0.675 & 0.562 & 0.648 & 0.159 & 0.312 & B-FG-MM & 27.16 & 59.46 \\ \cline{1-1} & & B-IP & 0.691 & 0.711 & 0.628 & 0.702 & 0.182 & 0.361 & & \\ \cline{1-1} & & B-MM & 0.685 & 0.691 & 0.604 & 0.678 & 0.171 & 0.347 & B-FG-Deep & 27.62 & 61.56 \\ \cline{1-1} & & B-Deep & 0.702 & 0.718 & 0.637 & 0.718 & 0.188 & 0.375 & & \\ \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{**Ours**} & **0.723** & **0.725** & **0.651** & **0.732** & **0.202** & **0.388** & **Ours** & **28.68** & **62.34** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of our method against existing frameworks and baselines on ZS-SBIR and cross-category FG-ZS-SBIR stable at variable training-data-size (left) as well as across variable number of seen (training) categories (right), compared to prior arts and baselines, justifying the zero-shot potential of CLIP for both ZS-SBIR and FG-ZS-SBIR tasks. ### Ablation Study **Justifying design components:** We evaluate our models, dropping one component at a time (Table 2) for both ZS-SBIR and FG-ZS-SBIR. While not fine-tuning LayerNorm (_w/o LayerNorm_) lowers performance on both paradigms slightly, removing \(\mathcal{L}_{\text{cls}}^{\mathcal{I}}\) (_w/o f-Divergene_) severely affects FG-ZS-SBIR as it loses it class-discrimination ability. 
Removing \(\mathcal{L}_{\text{PS}}\) (_w/o Patch-Shuffling_) and \(\mathcal{L}_{\delta}\) (_w/o f-Divergence_) lowers FG-ZS-SBIR accuracy by 3.15% and 3.75%, owing to loss of sketch-photo structural correspondence and non-uniform relative sketch-photo distances across categories, thus verifying contribution of every design choice. Furthermore, using \(2\times 2\) patches instead of \(3\times 3\) for FG-ZS-SBIR provides a 1.2% gain in Acc.@1 (Sketchy), thus being optimal, as larger grid means more white patches for sketch, leading to confused embeddings. **CLIP text encoder v/s Word2vec:** Unlike earlier works [16] that obtained semantics of a category via reconstruction from word2vec [41] embeddings for that category, our method replaces it with feature embedding from CLIP's text encoder [46]. Word2vec [41] is trained using text-_only_ data, whereas CLIP's text encoder trains on large-scale (\(400M\)) image-text _pairs_. Using word2vec embeddings in our proposed method instead, drops performance by \(4.57\%/0.172\) (Acc@1/mAP@200) for ZS-SBIR/FG-ZS-SBIR on Sketchy [53], justifying our choice of CLIP's text encoder, capturing visual-semantic association (text-photo pairs) instead of text-only information (word2vec). **Should we _learn_ text prompts?:** While handcrafted prompt templates like 'a photo of a [category]' works well for class-discrimination training (Eqn. 4), we wish to explore if there is any need to _learn_ text prompts like our visual prompts. Consequently, following [75] we take \(N\) learnable prompts, matching the word embeddings of handcrafted prompt dimensionally for the \(i^{\text{th}}\) class as: \(\eta^{t}=\{\eta_{1},\eta_{2},\cdots,\eta_{N},c_{i}\}\) with word embedding \(c_{i}\) denoting the class name [75]. Training accordingly, drops performance by \(2.36\%/0.078\) (Acc.@1/mAP@200) in Sketchy for ZS-SBIR/ZS-SBIR. This is probably because unlike learned prompts, _handcrafted_ prompts being rooted in _natural language_ vocabulary are inclined to have a higher generalisation ability in our case than learned text-prompts [41]. **Varying number of Prompts:** To ensure that our visual prompt (\(\mathbb{R}^{K\times d_{p}}\)) has sufficient information capacity [64], we experiment with \(K\) = \(\{1,2,3,4\}\). Accuracy improves from \(26.15\%/0.675\) at \(K\) = 1 to \(28.68\%/0.723\) at \(K\) = 3, but saturates to \(28.26\%/0.718\) (Acc@1/mAP@200) at \(K\) = 4 on Sketchy, proving optimal \(K\) as 3. We conjecture that a high capacity prompt might lead to over-fitting [76] of CLIP thus resulting in a lower zero-shot performance. **Comparing text-based image retrieval:** To explore how sketch fares against keywords as a query in a zero-shot retrieval paradigm, we compare keyword-based retrieval against ZS-SBIR on Sketchy(ext), and against FG-ZS-SBIR on Song _et al_.'s [59] dataset having fine-grained sketch-photo-text triplets for fine-grained retrieval. While keyword based retrieval employed via off-the-shelf CLIP, remains competitive (\(0.523/0.612\) mAP/P@200) against our ZS-SBIR framework (\(0.723/0.725\)), it lags behind substantially (\(4.6\%\) Acc@1) from our ZS-FG-SBIR method (\(18.68\%\) Acc@1), proving the well-accepted superiority of sketch in modelling fine-grained details over text [70]. ## 7 Conclusion In this work we leverage CLIP's open-vocab generalisation potential via an intuitive prompt-based design to enhance zero-shot performance of SBIR - both category-level and fine-grained, where our method surpasses all prior state-of-the-arts significantly. 
Towards improving fine-grained ZS-SBIR, we put forth two novel strategies of making relative sketch-photo feature distances across categories uniform and learning structural sketch-photo correspondences via a patch-shuffling technique. Last but not least, we hope to have informed the sketch community on the potential of synergizing foundation models like CLIP and sketch-related tasks going forward. \begin{table} \begin{tabular}{l c c c c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{ZS-SBIR} & \multicolumn{2}{c}{FG-ZS-SBIR} \\ \cline{2-5} & mAP@all & P@200 & Top-1 & Top-5 \\ \hline w/o LayerNorm & 0.698 & 0.701 & 27.18 & 59.55 \\ w/o Classification (\(\mathcal{L}_{\text{cls}}^{\mathcal{I}}\)) & 0.703 & 0.710 & 10.69 & 16.32 \\ w/o Patch-Shuffling (L\(\text{eps}\)) & – & – & 25.18 & 53.07 \\ w/o f-Divergence (\(\mathcal{L}_{\delta}\)) & – & – & 24.93 & 53.72 \\ \hline **Ours** & **0.723** & **0.725** & **28.68** & **62.34** \\ \hline \end{tabular} \end{table} Table 2: Ablation Study on Sketchy Figure 4: Plots showing extent of generalisation by varying training data-size (left) as well as across variable number of seen (training) categories (right), compared to prior arts and baselines, justifying the zero-shot potential of CLIP for both ZS-SBIR and FG-ZS-SBIR tasks.
2308.11125
Interplanetary Shock Data Base
In this manuscript, I provide an updated version of the interplanetary shock data base I published in previous works. The list now has 603 events. I also present and describe the data and methodologies used to compile this list. The main contribution of this work is to provide an updated and accurate interplanetary shock data base for future space physics and space weather investigations. The list has been uploaded to Zenodo, and a link is provided for accessing the data files. As per Frontiers requirements, access to the list has been kept restricted during the review process. The list will be made public if/when the manuscript is published.
Denny M. Oliveira
2023-08-22T02:08:33Z
http://arxiv.org/abs/2308.11125v1
# Interplanetary Shock Data Base ###### Abstract Interplanetary (IP) shocks are frequently observed in the solar wind (Burlaga, 1971; Stone and Tsurutani, 1985). Many levels of different kinds of geomagnetic activity may follow the impact of IP shocks on the Earth's magnetosphere. Such effects are seen everywhere in the magnetosphere-ionosphere system, including radiation belt dynamics, magnetic field in geosynchronous orbit, field-aligned currents, ionospheric disturbances, satellite orbital drag, ground magnetometers, geomagnetically induced currents (GICs), and others (e.g., Echer et al., 2005; Tsurutani et al., 2011; Khazanov, 2016; Oliveira and Ngwira, 2017; Oliveira and Zesta, 2019; Abda et al., 2020; Smith et al., 2020; Bhaskar et al., 2021). The study of IP shocks is important for space weather purposes because shock impacts occur more frequently than geomagnetic storms and correlate well with solar activity (Oh et al., 2007; Kilpua et al., 2015; Echer et al., 2023). Therefore, keeping an updated and accurate IP shock data base is of primary importance to the scientific community. Many IP shock parameters control shock geoeffectiveness, such as shock speeds, Mach numbers, and compression ratios (Craven et al., 1986; Kabin, 2001; Goncharov et al., 2014). Additionally, the shock impact angle, the angle the shock normal vector performs with the Sun-Earth line, has been shown to be a significant factor that controls shock geoeffectiveness (Oliveira and Samsonov, 2018; Oliveira, 2023). Many works have demonstrated with simulations and observations that, in general, the more frontal and the faster the shock, the higher the subsequent geomagnetic activity observed from the geospace to the ground (e.g., Takeuchi et al., 2002; Guo et al., 2005; Wang et al., 2006; Oliveira and Raeder, 2014; Samsonov et al., 2015; Oliveira and Raeder, 2015; Oliveira et al., 2016; Selvakumaran et al., 2017; Oliveira et al., 2018; Baker, 2019; Rudd et al., 2019; Shi et al., 2019; Oliveira et al., 2020; Xu et al., 2020; Oliveira et al., 2021). The main goal of this short report is to release an expanded version of an IP shock data base that was published before (Oliveira and Raeder, 2015; Oliveira et al., 2018). A major component of this new shock data base is a revision of the methodology used to calculate shock impact angles and speeds with respect to past versions of this list. Additionally, more shock and solar wind parameters before and after shock impacts and geomagnetic activity information were included in the list. This article is organized as follows. Section 2 discusses the methodology used for the computation of shock properties, including the data used and shock normal calculation methods. A shock example is shown in Section 3. Section 4 presents the IP shock data base and its components. Finally, section 5 brings a few suggestions for future usage of this shock list, with focus on the role of shock impact angles in controlling the subsequent shock geoeffectiveness. ## 2 Methods ### Solar wind plasma and IMF data Properties and normal vector orientations of IP shocks are computed with the use of solar wind plasma and interplanetary magnetic field (IMF) data collected by solar wind monitors upstream of the Earth at the Lagrangian point L1. The time coverage of this shock list ranges from January 1995 to May 2023. 
Wind plasma data are collected by the Solar Wind Experiment instrument with resolution of 92 s (Ogilvie et al., 1995), and Wind magnetic field data are collected by the Magnetic Field Investigation instrument with resolution of 3 s (Lepping et al., 1995). ACE (Advanced Composition Explorer) collects solar wind data (resolution 64 s) with the Solar Wind Electron, Proton and Alpha Monitor instrument (McComas et al., 1998), and magnetic field data (resolution 16 s) with the MAG magnetometer instrument (Smith et al., 1998). All the data used for computations is represented in geocentric solar ecliptic (GSE) coordinates. ### SuperMAG ground magnetometer and sunspot number data Supporting geomagnetic index data are provided by the SuperMAG initiative (Gjerloev, 2009). SuperMAG computes geomagnetic indices using larger numbers of magnetometers in comparison to traditional IAGA (International Association of Geomagnetism and Aeronomy) indices (Davis and Sugiura, 1966; Rostoker, 1972). The SuperMAG ring current index, SMR, is explained by Newell and Gjerloev (2012), and the SuperMAG auroral indices SME, SMU, and SML are documented in Newell and Gjerloev (2011). SMU is the upper envelope index, SML is the lower envelope index, and SME = SMU - SML. All SuperMAG index data have resolution of 1 minute. Another supporting data set, with daily sunspot number observations, is provided by Sunspot Index and Long-term Solar Observations, Royal Observatory of Belgium, Brussels (Clette and Lefevre, 2016). The sunspot number data base used in this report has been corrected and recalibrated according to the methods explained by Clette and Lefevre (2016). ### Computations of shock normals and shock-related parameters For the purpose of shock normal computations at 1 AU, shock fronts are assumed to be planar structures larger than the Earth's magnetospheric system (Russell et al., 1983; Russell, 2000). Then, IP shock normal vectors can be computed if data from at least one spacecraft is available (Russell et al., 1983; Aguilar-Rodriguez et al., 2010; Trotta et al., 2023). Generally, shocks driven by CMEs (coronal mass ejections) have their shock normals with small deviation with respect to the Sun-Earth line, whereas shocks driven by CIRs (corotating interacting regions) have their shock normals with large deviations from the Sun-Earth line (Kilpua et al., 2015; Oliveira and Samsonov, 2018). Such shock inclinations occur because CMEs tend to travel radially in the solar wind, while CIRs tend to follow the Parker spiral when slow speed streams are compressed by fast speed streams (Pizzo, 1991; Tsurutani et al., 2006; Cameron et al., 2019). An animation showing the different inclinations of a CME-driven shock and a CIR- driven shock can be accessed here: [https://dennyoliveira.weebly.com/phd.html](https://dennyoliveira.weebly.com/phd.html). There are three different ways commonly used to compute shock normal orientations. They use magnetic field data only, solar wind velocity data only, and a combination of magnetic field and solar wind velocity data. Such methods are, respectively, named magnetic coplanarity (MC, Colburn and Sonett, 1966), velocity coplanarity (VC, Abraham-Shrauner, 1972), and three mixed data methods (MX1, MX2, MX3, Schwartz, 1998). 
The equations used are listed below: \[\vec{n}_{MC} = \pm\frac{(\vec{B}_{2}\times\vec{B}_{1})\times(\vec{B}_{2}-\vec{B}_{1 })}{|(\vec{B}_{2}\times\vec{B}_{1})\times(\vec{B}_{2}-\vec{B}_{1})|} \tag{1}\] \[\vec{n}_{MX1} = \pm\frac{\vec{B}_{1}\times(\vec{V}_{2}-\vec{V}_{1})\times(\vec{B} _{2}-\vec{B}_{1})}{|\vec{B}_{1}\times(\vec{V}_{2}-\vec{V}_{1})\times(\vec{B}_{ 2}-\vec{B}_{1})|}\] (2) \[\vec{n}_{MX2} = \pm\frac{\vec{B}_{2}\times(\vec{V}_{2}-\vec{V}_{1})\times(\vec{B} _{2}-\vec{B}_{1})}{|\vec{B}_{2}\times(\vec{V}_{2}-\vec{V}_{1})\times(\vec{B}_{ 2}-\vec{B}_{1})|}\] (3) \[\vec{n}_{MX3} = \pm\frac{(\vec{B}_{2}-\vec{B}_{1})\times(\vec{V}_{2}-\vec{V}_{1 })\times(\vec{B}_{2}-\vec{B}_{1})}{|(\vec{B}_{2}-\vec{B}_{1})\times(\vec{V}_{ 2}-\vec{V}_{1})\times(\vec{B}_{2}-\vec{B}_{1})|}\] (4) \[\vec{n}_{VC} = \pm\frac{\vec{V}_{2}-\vec{V}_{1}}{|\vec{V}_{2}-\vec{V}_{1}|} \tag{5}\] In these equations, \(\vec{B}\) is the magnetic field vector, and \(\vec{V}\) is the solar wind velocity vector. Indices 1 and 2 represent the upstream (non-shocked) region, and downstream (shocked) region, behind of and ahead the shock, respectively. The sign of each vector \(\vec{n}\) is arbitrary and can be chosen to indicate whether the normal vector points toward the downstream direction (+) or upstream direction (-) (Schwartz, 1998). Equations (1-5) provide a three-dimensional normal vector \(\vec{n}=(n_{x},n_{y},n_{z})\) in Cartesian coordinates, from which three angles can be extracted: \[\theta_{x_{n}} = \cos^{-1}(n_{x})\,, \tag{6}\] \[\varphi_{y_{n}} = \tan^{-1}\left(\frac{n_{z}}{n_{y}}\right)\,,\] (7) \[\theta_{B_{n}} = \frac{\vec{n}\cdot\vec{B}_{1}}{|\vec{B}_{1}|}\,, \tag{8}\] where \(\theta_{x_{n}}\) is named the shock impact angle, the angle the shock normal vector performs with the Sun-Earth line, \(\varphi_{y_{n}}\) is the shock clock angle in the yz plane perpendicular to the Sun-Earth line (both angles in the satellite or Earth reference frame), and \(\theta_{B_{n}}\) is the angle between the upstream magnetic field vector and the shock normal vector (in the shock reference frame). Since the plasma mass flux must be conserved along the shock normal, \(\rho_{1}u_{n1}=\rho_{2}u_{n2}\), with \(u_{n1,n2}=v_{s}-\vec{V}_{1,2}\cdot\vec{n}\), the shock speed is computed as follows: \[v_{s}=\vec{n}\cdot\left(\frac{\vec{V}_{2}\rho_{2}-\vec{V}_{1}\rho_{1}}{\rho_{ 2}-\rho_{1}}\right) \tag{9}\] Other useful shock velocities are represented by: Figure 1: Interplanetary shock observed by ACE on 23 June 2000 and the subsequent geomagnetic activity represented by SuperMAG data. \[c_{s} =\sqrt{\frac{\gamma P_{1}}{\rho_{1}}}\] sound speed, (10) \[v_{A} =\frac{|\vec{B}_{1}|}{\sqrt{\mu_{0}\rho_{1}}}\] Alfven speed, (11) \[v_{ms} =\frac{1}{2}\sqrt{v_{A}^{2}+c_{s}^{2}\pm\sqrt{(v_{A}^{2}+c_{s}^{2}) ^{2}-4v_{A}^{2}c_{s}^{2}\cos^{2}(\theta_{B_{n}})}}\] magnetosonic speed, (12) where \(\gamma=5/3\) is the ratio of the solar wind heat capacity with constant pressure to the heat capacity with constant volume; \(P_{1}\) is the upstream solar wind thermal pressure; \(\rho_{1}\) is the upstream solar wind density; and \(\mu_{0}=4\pi\times 10^{-7}\) N/A\({}^{2}\) is the magnetic vacuum permeability. The positive solution of equation 12 gives the fast magnetosonic speed, whereas the negative solution yields the slow magnetosonic speed (Jeffrey and Taniuti, 1964; Priest, 1981; Boyd and Sanderson, 2003). Shock strengths are usually represented by specific Mach numbers. 
With \(u\) = \(v_{s}-\vec{V}\cdot\vec{n}\) being the relative speed between the shock speed and the local solar wind velocity, the Mach numbers are represented by: \[M_{A}=\frac{u}{v_{A}}\] Alfvenic Mach number, (13) \[M_{s}=\frac{u}{v_{ms}^{f}}\] fast magnetosonic Mach number, (14) where \(v_{ms}^{f}\) is the fast magnetosonic speed. Finally, the strength of IP shocks can also be indicated by upstream and downstream solar wind plasma parameters and IMF. Such compression ratios are represented by: \[X_{n}=\frac{n_{2}}{n_{1}}\] plasma number density compression ratio, (15) \[X_{dp}=\frac{\rho_{2}V_{2}^{2}}{\rho_{1}V_{1}^{2}}\] dynamic pressure compression ratio, (16) \[X_{B}=\frac{|\vec{B}_{2}|}{|\vec{B}_{1}|}\] magnetic field compression ratio. (17) ## 3 The IP shock of 23 June 2000 as an example Figure 1, first published by Oliveira and Raeder (2015), shows an IP shock event that occurred on 23 June 2000 observed by ACE at 1226 UT upstream of the Earth at (x, y, z) = (239.9, 36.7, \(-\)0.7) \(R_{E}\), where \(R_{E}\) is the Earth's radius = 6371.1 km. Solar wind plasma and IMF data are depicted in the figure, along with SuperMAG geomagnetic index data. From top to bottom, the plot shows three components of the IMF (a); IMF magnitude (b); x component of solar wind velocity (c), y and z components of solar wind velocity (d); solar wind velocity magnitude (e); solar wind particle number density (f); solar wind dynamic pressure \(P_{dyn}=\rho V^{2}\) (g); solar wind thermal temperature (h); SMR index (i); and SMU/SML indices (j). The vertical dashed magenta lines indicate the time of shock impact on the magnetosphere. The data were shifted to the magnetopause nose to match shock observations with the onsets in the ground geomagnetic indices. The highlighted grey areas correspond to the shock upstream region (left) 10 to 5 minutes before shock impact, and shock downstream region (right), 5 to 10 minutes after shock impact. Average values of these regions are used in equations (1-5) for the computation of the shock normal orientations with the five different methods. Most shocks have the time windows mentioned above, but a few events have different time windows. As discussed by Balogh et al. (1995) and Trotta et al. (2022), the length of the upstream and downstream windows around the shocks is an important factor in determining shock parameters. For example, Trotta et al. (2022) suggested a method to use windows with different lengths with short-length windows being located near the shock. This methodology provides statistical significance (including uncertainties) to the calculated shock parameters and reliability to the subsequent results. This approach will be applied to this shock list in a future work for further improvements of this shock data base. The data shown in the figure is processed before plotting and before computing shock parameters and normal orientations. First, bad data points, such as 1E+31 are replaced by nan ("not a number") values and subsequently linearly interpolated. Then, solar wind parameter data are interpolated, and IMF data are averaged to a uniform time cadence of 30 seconds. Differences between the non-interpolated and interpolated data are very small or nearly nonexistent around the shock onset. This process allows the time resolutions of both data sets to match to further perform computations that involve both data sets. The same technique was applied to all events in the shock data base. 
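To make these computations concrete, the following NumPy sketch evaluates the MX3 normal of equation (4), the impact angle of equation (6), and the shock speed of equation (9) from the upstream/downstream averages of this event; it is an illustrative example only, not the code used to build the data base.

```python
import numpy as np

def mx3_normal(B1, B2, V1, V2):
    """MX3 shock normal (Eq. 4) from upstream (1)/downstream (2) averages of B and V (GSE)."""
    dB, dV = B2 - B1, V2 - V1
    n = np.cross(np.cross(dB, dV), dB)
    n = n / np.linalg.norm(n)
    # sign fixed so that n_x <= 0, consistent with the impact angles (> 90 deg) in the list
    return -n if n[0] > 0 else n

def shock_speed(n, V1, V2, rho1, rho2):
    """Shock speed along the normal from mass-flux conservation (Eq. 9)."""
    return np.dot(n, (rho2 * V2 - rho1 * V1) / (rho2 - rho1))

def impact_angle(n):
    """Shock impact angle theta_xn (Eq. 6), in degrees."""
    return np.degrees(np.arccos(n[0]))

# upstream/downstream averages for this event (B in nT, V in km/s, number density in cm^-3)
B1, B2 = np.array([5.448, -3.946, -4.568]), np.array([14.317, -1.766, -16.125])
V1, V2 = np.array([-399.958, 18.948, -14.969]), np.array([-508.734, 37.628, -117.578])
N1, N2 = 6.941, 17.429   # the proton mass cancels in Eq. 9, so number densities can be used

n_mx3 = mx3_normal(B1, B2, V1, V2)
print(n_mx3, impact_angle(n_mx3), shock_speed(n_mx3, V1, V2, N1, N2))
# -> roughly (-0.80, 0.11, -0.59), theta_xn ~ 143 deg, v_s ~ 579 km/s for this event
```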
Positive step-like enhancements are seen in all solar wind plasma parameters and IMF. This is a clear signature of a fast forward IP shock (Priest, 1981; Tsurutani et al., 2011; Oliveira, 2017). More information on the analysis of this event including shock normal orientations will be provided in the next section. ## 4 The IP Shock Data Base Previous versions of this current shock data base were published by Oliveira and Raeder (2015) and Oliveira et al. (2018). A few sources were used to compile these previous shock lists: a shock catalog provided by the Harvard-Smithsonian Center for Astrophysics and compiled by Dr. J. C. Kasper for Wind ([http://www.cfa.harvard.edu/shocks/wi_data/](http://www.cfa.harvard.edu/shocks/wi_data/)) and ACE ([http://www.cfa.harvard.edu/shocks/ac_master_data/](http://www.cfa.harvard.edu/shocks/ac_master_data/)); a shock list compiled by the ACE team (author://www-ssg.sr.unh.edu/mag/ace/ACElists/obs_list.html#shocks.); and another list published by Wang et al. (2010) with events from February 1998 to August 2008. New events were added by scanning solar wind and IMF data to detect shock events that satisfy the framework discussed in section 2.3. The IP shock data base consists of three files: (i) full_shock_list_2023.txt, a text file with 603 events; (ii) full_shock_params.cdf, a cdf file with detailed information about each specific shock event; and (iii) read_shock.py, a file that contains a short python routine to read information about a specific shock event. The SpacePy package ([https://spacepy.github.io](https://spacepy.github.io), Morley et al., 2011; Larsen et al., 2022) is required to extract shock information from the cdf file using read_shock.py. The Python routine to read the information of a specific shock event is The input variable for the above routine is the shock number sn. The IP shock represented in Figure 1 is the event number 142 in the shock list. Therefore, read_shock.py can be run as follows The results for each shock are used to compose the shock list in the full_shock_list_2023.txt file. The file brings a header with the names of the variables (Table 1). The list also includes the position (in \(R_{E}\)) of the solar wind monitor (either Wind or ACE) whose data are used in the calculations, along with minimum SMR values occurring in a time window of two hours after shock impact. This time window was chosen because amplitudes of geomagnetic activity response usually occur \(\sim\)60 minutes after energy being released by the magnetotail (Bargatze et al., 1985; Oliveira and Raeder, 2015; Oliveira et al., 2021). Such values can be used in studies that aim to use shock observations during non-storm times. Below are the steps taken to include a specific solution for each shock in the full_shock_list_2023.txt list. 
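(The read_shock.py routine referred to above is not reproduced in this text; a minimal sketch of what it plausibly contains is given below, where the CDF variable names, taken from Table 1, and the file-path argument are assumptions. The actual routine distributed with the data base should be preferred.)

```python
# read_shock.py -- minimal illustrative sketch (assumed CDF variable names)
from spacepy import pycdf

def read_shock_cdf(sn, path="full_shock_params.cdf"):
    """Print the parameters stored for shock number `sn` (fields as in Table 1)."""
    fields = ["YY", "MM", "DD", "UTS", "UTM", "nx", "ny", "nz", "thxn", "phiyn",
              "thbn", "vs", "cs", "vA", "vfms", "Ma", "Ms", "dp1", "dp2",
              "Xn", "Xdp", "Xb", "bz1", "bz2", "sat", "x", "y", "z", "minSMR"]
    cdf = pycdf.CDF(path)
    try:
        idx = list(cdf["sn"][...]).index(sn)      # locate the requested event
        for name in fields:
            print(f"{name:>7s}: {cdf[name][idx]}")
    finally:
        cdf.close()

if __name__ == "__main__":
    read_shock_cdf(142)    # the 23 June 2000 event of Figure 1
```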
These are the major revisions made to the list in comparison to its previous versions: \begin{table} \begin{tabular}{l l l} \hline 1 & sn & shock number; \\ 2 & YY & year; \\ 3 & MM & month; \\ 4 & DD & day; \\ 5 & UTS & UT of shock observation by solar wind monitor; \\ 6 & UTM & UT of ground magnetic sudden impulse onset; \\ 7 & nx & x component of shock normal vector; \\ 8 & ny & y component of shock normal vector; \\ 9 & nz & z component of shock normal vector; \\ 10 & thxn & shock impact angle (degrees); \\ 11 & phiyn & shock clock angle in the yz plane (degrees); \\ 12 & thbn & shock obliquity angle (degrees); \\ 13 & vs & shock speed (km/s); \\ 14 & cs & sound speed (km/s); \\ 15 & vA & Alfven speed (km/s); \\ 16 & vfms & fast magnetosonic speed (km/s); \\ 17 & Ma & Alfvenic Mach number; \\ 18 & Ms & magnetosonic Mach number; \\ 19 & dp1 & upstream solar wind dynamic pressure (nPa); \\ 20 & dp2 & downstream solar wind dynamic pressure (nPa); \\ 21 & Xn & solar wind number density compression ratio; \\ 22 & Xdp & solar wind dynamic pressure ratio (dp2/dp1); \\ 23 & Xb & magnetic field compression ratio; \\ 24 & bz1 & upstream z component of interplanetary magnetic field (nT); \\ 25 & bz2 & downstream z component of interplanetary magnetic field (nT); \\ 26 & sat & satellite used for calculations: 1 for Wind, 2 for ACE \\ 27 & x & x GSE position (in Re) of solar wind monitor at UTS; \\ 28 & y & y GSE position (in Re) of solar wind monitor at UTS; \\ 29 & z & z GSE position (in Re) of solar wind monitor at UTS; \\ 30 & minSMR & minimum SMR within two hours of shock impact \\ \hline \end{tabular} \end{table} Table 1: Names of the variables associated with each shock event in the list and shown in Listing 1. The numbers in the first column are the numbers of the fields in the list and shown in the header of the file full_shock_list_2023.txt. >>> fromread_shockimportread_shock_cdf >>>read_shock_cdf(142) --------------------------------- sn date UTS UTM 142 2000 06 23 1226 1302 Spacecraft (sat): ac Position: X = 239.9 Re; Y = 36.7 Re; Z = -0.7 Re Time windows Upstream: 5 to 10 minutes beforeshock Downstream: 5 to 10 minutes after shock Solar wind plasma/IMF Bx By Bz Vx Vy Vz N T Upstream 5.448 -3.946 -4.568 -399.958 18.948 -14.969 6.941 99731.6 Downstream 14.317 -1.766 -16.125 -508.734 37.628 -117.578 17.429 224496.1 Computed parameters dp1 dp2 Xdp Xb Xn vs_rh VA cs 1.864 7.989 4.286 2.661 2.511 604.778 67.317 52.384 Minimum SMR index within the 2-hour window following shock impact: -9.60 nT nx ny nz thxn phiyn thbn vs vfms Ma Ms MC -0.323 -0.944 0.070 108.831 175.779 78.309 127.332 85.704 0.255 0.200 MX1 -0.796 0.191 -0.575 142.729 -71.590 72.348 578.268 86.195 3.681 2.874 MX2 -0.798 0.133 -0.587 142.962 -77.223 74.365 579.175 86.010 3.693 2.890 MX3 -0.798 0.107 -0.592 142.980 -79.739 75.273 578.920 85.933 3.694 2.894 VC -0.722 0.124 -0.681 136.205 -79.682 80.717 551.670 85.556 3.720 2.927 >>> **Listing 2**.: Example of how to run the Python routine read_shock.py shown in Listing 1 to extract information about a shock event in the list. The example shown in this listing is the event number 142, occurred on 23 June 2000 and observed by ACE (see Figure 1). 1. A filter is passed on the data to replace bad data points (e.g., 1E+31) by nan values which are then replaced by interpolated/averaged values to a common time cadence for both data sets (IMF and solar wind parameter data). 2. The satellite (Wind or ACE) must be in the solar wind upstream of the Earth (x \(>\) 14 \(R_{E}\)). 3. 
If data of both satellites are simultaneously available, the data set with data of superior quality is used for computations. 4. Events with either \(M_{A}\) or \(M_{s}\) (or both) smaller than one are generally discarded. Events with such conditions are only included in the list if they trigger significant geomagnetic activity, such as SMR variations of at least 15 nT. 5. The solution obtained from equations (1-5) closest to the median value is selected to be included in the list. If the difference between the maximum and minimum values of \(\theta_{x_{n}}\) is larger than 30\({}^{\circ}\), the solution chosen for the list will be the one that shows more agreement with ground geomagnetic response, represented by the SuperMAG indices (SMR, SMU, SML, SME), as shown in previous publications (e.g., Wang et al., 2006; Oliveira and Raeder, 2015; Rudd et al., 2019; Oliveira et al., 2021). The list version published by Oliveira and Raeder (2015) had 461 events, whereas the list published by Oliveira et al. (2018) had 547 events. This current data base has more events (603) and has a number of additional solar wind parameters and shock properties, which are shown in Table 1. The list time span, January 1995 to May 2023, includes two entire solar cycles (SC23 and SC24), the end of declining phase of SC22, and the beginning of ascending phase of SC25. Therefore, this data base provides a solid number of events for future statistical studies given an appropriate availability of data sets to be investigated. Figure 2 represents general statistical features of the 603 events in the shock data base. Panel a shows yearly shock number distributions and Carrington-rotation of 25.38 days (Carrington, 1863) averaged sunspot numbers (solid black line). This figure shows a clear correlation between the number of events and sunspot numbers, a result that is already well known (Kilpua et al., 2015; Oliveira and Raeder, 2015; Rudd et al., 2019). Observations show that SC23 was significantly stronger than SC24 based on the overall sunspot numbers, which is reflected on the total number of shocks observed in the corresponding periods (432 and 187, respectively). Zhu et al. (2022) predicted that SC25 will be stronger than SC24 and will reach its maximum value around July 2025. Therefore, according to these predictions, it is reasonable to expect that SC25 will have a similar number of shocks with respect to SC24. The overall statistical properties of sunspot observations and shock parameters are quantified and shown in Table 2. These results, including mean, median, percentile values, and shock number distribution correlation with sunspot numbers, are in excellent agreement with past studies (Oliveira and Raeder, 2015; Kilpua et al., 2015; Oliveira et al., 2018; Rudd et al., 2019). ## 5 Suggested use of the IP shock data base The IP shock data base described in this report can be used in many ways. 
For example, this list can be used in studies involving geomagnetic activity following shock impacts from the geospace to the ground, including particle acceleration by shocks, particle dynamics and energization in the radiation belts, magnetospheric ultra-low frequency (ULF) waves, magnetic field response in geosynchronous orbit, field-aligned currents, role of shocks in substorm triggering, ionospheric irregularities, high-latitude thermosphere response (neutral density and nitric oxide) to shock impacts, ground magnetometer response \begin{table} \begin{tabular}{l c c c c c c} \hline \multicolumn{6}{c}{Shock parameters} \\ \hline variable & LPT & median & mean & UPT & STD & \(\#\) of events \\ \hline \(\theta_{x_{n}}\)[\({}^{\circ}\)] & 132.96 & 148.43 & 147.99 & 163.56 & 15.90 & 603 \\ \(\varphi_{y_{n}}\)[\({}^{\circ}\)] & -105.87 & 13.10 & 7.12 & 120.00 & 108.41 & 603 \\ \(\theta_{B_{n}}\)[\({}^{\circ}\)] & 41.05 & 64.14 & 59.87 & 79.77 & 21.16 & 603 \\ \(v_{s}\) [km/s] & 335.83 & 435.68 & 463.20 & 575.75 & 161.88 & 603 \\ \(M_{A}\) & 1.74 & 2.58 & 3.13 & 4.05 & 1.95 & 603 \\ \(M_{s}\) & 1.33 & 1.86 & 2.16 & 2.79 & 1.18 & 603 \\ \(X_{N}\) & 1.54 & 1.96 & 2.13 & 2.62 & 0.75 & 603 \\ \(X_{dp}\) & 1.45 & 1.79 & 1.93 & 2.36 & 0.69 & 603 \\ \(X_{B}\) & 1.81 & 2.49 & 2.98 & 3.86 & 1.56 & 603 \\ \hline \multicolumn{6}{c}{Sunspot numbers (complete solar cycles)} \\ \hline solar cple \(\#\) & LPT & median & mean & UPT & STD & \(\#\) of events \\ \hline SC23 & 13.00 & 63.00 & 82.39 & 150.00 & 72.74 & 342 \\ SC24 & 11.00 & 44.00 & 54.08 & 98.00 & 47.01 & 187 \\ \end{tabular} \end{table} Table 2: Upper part: Shock parameters calculated from the shock data base released with this publication. The statistical data are: LPT, lower percentile (20%); median value; mean value; UPT, upper percentile (80%); and standard deviation (STD). There are 603 events in this shock list. Lower part: statistical results of sunspot number observations of the two complete solar cycles (SC) covered in the shock data base: SC23 and SC24. (dB/dt variations) and subsequent effects on GICs, and many others. Therefore, this shock list can be used in a variety of space physics and space weather investigations. A major feature of this shock list is the possibility of using the shock impact angle as a factor controlling geomagnetic activity. Oliveira (2023) has recently reviewed the effects of shock impact angles on the subsequent geomagnetic activity and also suggested a few topics for future research. Figure 2: Statistical properties of shocks in the data base from January 1995 to May 2023 (603 shocks). Panel a: shock number distribution and Carrington rotation-averaged sunspot numbers. Panels b-j: number distributions of shock parameters obtained from equations 6-17. 1. Shock impact angle effects on intensities and latitudinal extensions of d\(B\)/d\(t\) variations linked to enhancements of GICs (Carter et al., 2015; Oliveira et al., 2018, 2021). 2. Role of shock inclinations in controlling the triggering and wave modes of ULF waves and their interaction with magnetospheric cold plasma and wave-particle interactions (Oliveira et al., 2020; Hartinger et al., 2022). 3. Effects caused by different shock orientations on thermospheric neutral and nitric oxide molecules that control thermosphere heating and cooling affecting the subsequent satellite orbital drag in low-Earth orbit (Oliveira and Zesta, 2019; Zesta and Oliveira, 2019). 4. 
Shock impact angle effects on the dynamics of radiation belts (e.g., particle acceleration, enhancements, dropouts, and loss of relativistic electrons in the magnetosphere) (Tsurutani et al., 2016; Hajra and Tsurutani, 2018). 5. Role of shock impact angle in triggering magnetospheric super substorms, with minimum \(\mathrm{SML}<-2500\) nT (Hajra and Tsurutani, 2018; Tsurutani and Hajra, 2023) Finally, I would like to urge researchers to perform numerical simulations of shocks with different orientations. For example, Welling et al. (2021) argued that the "most perfect" CME would be very fast and impact Earth head-on. They performed numerical simulations of the impact of a perfect CME on the magnetosphere and concluded that ground d\(B\)/d\(t\) variations were noted in very low latitude regions because the CME impact was purely frontal. Furthermore, our shock list can be very useful in simulations comparing real observations with results yielded by numerical simulations of shocks with different orientations for many different space weather purposes. ## Data Availability Statement The IP shock data base can be downloaded from Zenodo ([https://zenodo.org/record/7991430](https://zenodo.org/record/7991430)). The solar wind plasma and IMF data observed by Wind and ACE used to calculate the shock impact angles including shock properties were obtained from the CDAWeb (Coordinated Data Analysis) website provided by NASA Goddard Space Flight Center's Space Physics Data Facility ([http://cdaweb.gsfc.nasa.gov](http://cdaweb.gsfc.nasa.gov)). The geomagnetic index data used in this publication were downloaded from the SuperMAG initiative website: [https://supermag.jhuapl.edu](https://supermag.jhuapl.edu). Sunspot number data was downloaded from the SILSO (Sunspot Index and Long-term Solar Observations) website ([https://www.sidc.be/silso/datafiles](https://www.sidc.be/silso/datafiles)). ## Conflict of Interest Statement The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ## Author Contributions This data report article was written by the author without any direct contributions from others. ## Funding This work was possible thanks to the financial support provided by the NASA HGIO program through grant 80NSSC22K0756.
2305.08488
Hierarchical DCC-HEAVY Model for High-Dimensional Covariance Matrices
We introduce a HD DCC-HEAVY class of hierarchical-type factor models for high-dimensional covariance matrices, employing the realized measures built from higher-frequency data. The modelling approach features straightforward estimation and forecasting schemes, independent of the cross-sectional dimension of the assets under consideration, and accounts for sophisticated asymmetric dynamics in the covariances. Empirical analyses suggest that the HD DCC-HEAVY models have a better in-sample fit and deliver statistically and economically significant out-of-sample gains relative to the existing hierarchical factor model and standard benchmarks. The results are robust under different frequencies and market conditions.
Emilija Dzuverovic, Matteo Barigozzi
2023-05-15T09:44:24Z
http://arxiv.org/abs/2305.08488v2
# Hierarchical DCC-HEAVY Model for High-Dimensional Covariance Matrices ###### Abstract We introduce a new HD DCC-HEAVY class of hierarchical-type factor models for conditional covariance matrices of high-dimensional returns, employing the corresponding realized measures built from higher-frequency data. The modelling approach features sophisticated asymmetric dynamics in covariances coupled with straightforward estimation and forecasting schemes, independent of the cross-sectional dimension of the assets under consideration. Empirical analyses suggest the HD DCC-HEAVY models have a better in-sample fit, and deliver statistically and economically significant out-of-sample gains relative to the standard benchmarks and existing hierarchical factor models. The results are robust under different market conditions. _Keywords:_ Asymmetric Volatility; DCC-HEAVY; Factor Model; Time-Varying Beta. Introduction In this paper, we develop a flexible framework, i.e., HD DCC-HEAVY, that accurately captures the latent covariance structure of the high-dimensional asset returns and allows for sophisticated asymmetric dynamics in the covariances, while at the same time keeping the estimation and forecasting straightforward and independent from the cross-sectional dimension of the assets under consideration. Our methodology relates to the Realized Beta GARCH model of Hansen et al. (2014) and the corresponding extension of Archakov et al. (2020) that introduce the hierarchical-type factor framework based on the realized GARCH model (Hansen et al. (2012)), taking realized measures as direct inputs. In contrast, we model the dynamics of both conditional and RC in a GJR-type spirit (Glosten et al. (1993)). In addition, they focus on modelling the dynamics of daily returns and adopt intra-daily realized measures, leaving the dynamics of the residuals unspecified. Instead, we use monthly returns and construct realized measures via daily data. As such, we estimate and test our model, defining the conditional covariance matrices completely, for much longer sample periods. Given that no prior study investigates the forecasting ability of the hierarchical-type factor models, we assess the performance of the distinct versions of our model in terms of the factor set and asymmetric dynamics, comparing them with the benchmark cDCC model, the Realized Beta GARCH model (Hansen et al. (2014)), and its 3-FF extension (Archakov et al. (2020)). To perform empirical evaluations of the models, we utilize the data from a Kenneth French library on the three Fama-French (FF) factors (Fama and French (1993)), i.e., market risk, size, and value, together with the momentum factor (Carhart (1997)), coupled with Yahoo Finance time series of the daily and monthly adjusted prices for a selected cross-section of individual assets, including all the stocks that belong to the S&P500 Index during the entire sample period from January 1962 until January 2023, i.e., \(T=732\). Statistical evaluation criteria consist of the in-sample fit and out-of-sample forecast loss functions, i.e., the Euclidean distance (ED) and Frobenius norm (FN). From the economic point of view, we focus on the global minimum variance portfolio (GMVP) optimization as the corresponding weights are determined solely by forecasts of the conditional covariance matrices over the given investment horizon. In this regard, the models are evaluated in terms of the forecasted conditional portfolio volatility. 
In order to formally determine whether the quality of the forecasts differs significantly across the models, we apply the model confidence set (MCS) procedure of Hansen et al. (2011), which allows us to identify the subset of models that contains the best forecasting model given a pre-specified level of confidence. We also consider some typical features of the implied portfolio allocations, such as portfolio turnover rates and the short-selling proportion. Finally, we examine the economic significance of differences in portfolio volatility via a utility-based framework of Fleming et al. (2001, 2003). Both the in-sample and forecasting results imply that our HD DCC-HEAVY class of models significantly outperforms the existing hierarchical models of Hansen et al. (2014) and Archakov et al. (2020), as well as the benchmark cDCC model. With regard to the latter, we prove the benefits of employing the higher-frequency data to model conditional covariances of lower-frequency returns. Conversely, the importance of specifying the RC dynamics could explain the poor performance of Realized GARCH-based models (Hansen et al. (2014), Archakov et al. (2020)). We confirm the robustness of our findings under changing market conditions. The rest of the paper is organized as follows. Section 2 introduces the hierarchical HD DCC-HEAVY models. Section 3 expounds on the estimation scheme, while the forecast formulas are provided in Section 4. Section 5 describes the empirical methodology, details the data used in the paper, and presents the in- and out-of-sample results of empirical exercises. Section 6 concludes. Modelling Framework Let us define a \(K\times 1\) vector of returns related to the set of factors on month \(t\) as \(r_{t}^{c}\), the corresponding realized covariance (RC) matrix as \(RC_{t}^{c}\). In addition, for \(i=1,\ldots,N\), we consider an individual asset return \(r_{i,t}\) and associated realized measure between an individual asset and the set of factors \(RC_{i,t}^{c}\). In this regard, we observe the two types of information sets. \(\mathcal{F}_{t}^{c}\), composed of the variables related to the set of factors, and \(\mathcal{F}_{t}^{c,i}\), which further incorporates the observable information on an individual asset (for \(i=1,\ldots,N\)). 
We consider the factor model for an individual asset return: \[r_{i,t}=\alpha_{i,t}+(\beta_{i,t})^{\prime}r_{t}^{c}+\varepsilon_{i,t}; \tag{1}\] \[\beta_{i,t} =\operatorname{Var}(r_{t}^{c}|\mathcal{F}_{t-1}^{c})^{-1} \operatorname{Cov}(r_{i,t},r_{t}^{c}|\mathcal{F}_{t-1}^{c,i}) \tag{2}\] \[=(\operatorname{diag}(H_{t}^{c})^{1/2}R_{t}^{c}\operatorname{diag }(H_{t}^{c})^{1/2})^{-1}\operatorname{diag}(H_{t}^{c})^{1/2}\rho_{i,t}(h_{i,t })^{1/2}\] \[=(\operatorname{diag}(H_{t}^{c})^{1/2})^{-1}(R_{t}^{c})^{-1}\rho_ {i,t}(h_{i,t})^{1/2};\] \[\alpha_{i,t} =\mu_{i}-(\beta_{i,t})^{\prime}\mu^{c},\] where \(r_{i,t}\) is a close-to-close return of an individual asset on month \(t\), \(r_{t}^{c}\) is a \(K\times 1\) vector of returns of \(K\) factors, \(\alpha_{i,t}\) and \(\varepsilon_{i,t}\) are the intercept and idiosyncratic return component related to \(r_{i,t}\), respectively, and \(\beta_{i,t}\) is a \(K\times 1\) vector of asset betas; \(\operatorname{diag}(H_{t}^{c})\) is a \(K\times K\) diagonal matrix composed of the conditional variances of factors on month \(t\), \(R_{t}^{c}\) is the corresponding \(K\times K\) conditional correlation matrix, while \(h_{i,t}\) denotes the conditional variance of an asset \(i\) and \(\rho_{i,t}\) a \(K\times 1\) vector of conditional correlations between an asset and the factors. Similarly, factor model for \(N\) individual asset returns: \[r_{t}=\alpha_{t}+B_{t}r_{t}^{c}+\varepsilon_{t}, \tag{3}\] where \(r_{t}\) is a \(N\times 1\) vector of returns of individual assets on month \(t\), \(\alpha_{t}\) and \(\varepsilon_{t}\) are the corresponding \(N\times 1\) vectors of intercepts and idiosyncratic return components, respectively, and \(B_{t}\) is a \(N\times K\) matrix of asset betas. It follows readily: \[\operatorname{Var}(r_{t}|\mathcal{F}_{t-1}^{c,i})=B_{t}\operatorname{Var}(r_{t}^ {c}|\mathcal{F}_{t-1}^{c})(B_{t})^{\prime}+\Sigma_{t}, \tag{4}\] with \(\Sigma_{t}=\operatorname{E}(\varepsilon_{t}\varepsilon_{t}^{\prime}| \mathcal{F}_{t-1}^{c,i})\). To model (1)-(4), we primarily rely on a hierarchical method introduced by Hansen et al. (2014). In particular, \(\mathcal{F}_{t}^{c}\) is adopted to build up the model for the dynamics of the set of factors. Subsequently, conditional on former estimates, we set up the framework for the dynamics between each individual asset and the factors by utilizing \(\mathcal{F}_{t}^{c,i}\). Ultimately, the nonlinear shrinkage method (Ledoit and Wolf (2017)) is utilized to define the covariances between idiosyncratic return components of the individual assets. ### Marginal Model for a Set of Factors We initially specify the marginal model for a set of factors by extending the recently introduced DCC-HEAVY model (Bauwens and Xu (2022)) to allow for sophisticated asymmetric dynamics in the covariance matrices. 
In this regard, we decompose a \(K\times K\) conditional covariance matrix of \(K\) factors, i.e., \(\operatorname{E}(r_{t}^{c}r_{t}^{c^{\prime}}|\mathcal{F}_{t-1}^{c})=H_{t}^{c}\), as: \[H_{t}^{c}=\operatorname{diag}(h_{t}^{c})^{1/2}R_{t}^{c}\operatorname{diag}(h_ {t}^{c})^{1/2}, \tag{5}\] where \(h_{t}^{c}\) is a \(K\times 1\) vector of the conditional variances of factors on month \(t\) and \(R_{t}^{c}\) is the corresponding \(K\times K\) conditional correlation matrix, given \(\operatorname{E}(\operatorname{diag}(r_{t}^{c}r_{t}^{c^{\prime}})|\mathcal{F} _{t-1}^{c})=\operatorname{diag}(h_{t}^{c})\) and \(\operatorname{E}(u_{t}^{c}u_{t}^{c^{\prime}}|\mathcal{F}_{t-1}^{c})=R_{t}^{c}\), with \(u_{t}^{c}=r_{t}^{c}\odot(h_{t}^{c})^{-1/2}\). The dynamics of the conditional variances and correlations, allowing for asymmetric effects, are specified as: \[h_{t}^{c}=w_{h}+A_{h}^{+}v_{t-1}^{c}\odot\operatorname{I}_{t-1}^{+}+A_{h}^{-} v_{t-1}^{c}\odot\operatorname{I}_{t-1}^{-}+B_{h}h_{t-1}^{c}, \tag{6}\] where \(v_{t}^{c}\) is a \(K\times 1\) vector of the realized variances of factors on month \(t\), \(w_{h}\) is a \(K\times 1\) positive vector, and \(A_{h}^{+},A_{h}^{-}\), and \(B_{h}\) are the \(K\times K\) diagonal matrices of coefficients with positive diagonal entries less than 1, with \(\odot\) denoting the Hadamard (element-wise) product of matrices, \(\mathrm{I}_{t}^{+}=[1_{\{r_{1,t}^{c}>0\}},\ldots,1_{\{r_{K,t}^{c}>0\}}]^{\prime}\) the indicator vector of the positive monthly returns, and \(\mathrm{I}_{t}^{-}=[1_{\{r_{1,t}^{c}\leq 0\}},\ldots,1_{\{r_{K,t}^{c}\leq 0\}}]^{\prime}\) the indicator vector of the negative monthly returns. Correspondingly, \[R_{t}^{c}=\tilde{R}+\alpha_{R}RL_{t-1}^{c}+\beta_{R}R_{t-1}^{c}, \tag{7}\] where \(RL_{t}^{c}\) is a \(K\times K\) realized correlation matrix of the factors on month \(t\), and \(\alpha_{R}\) and \(\beta_{R}\) are non-negative scalar parameters, i.e., \(\beta_{R}=0\) if \(\alpha_{R}=0\) and \(\beta_{R}<1\), with \(\tilde{R}=(1-\beta_{R})\overline{R}-\alpha_{R}\overline{P}\), i.e., \(\mathrm{E}(u_{t}^{c}u_{t}^{c^{\prime}})=\overline{R}\) and \(\mathrm{E}(RL_{t}^{c})=\overline{P}\) set to the empirical counterparts. Analogously, we decompose a \(K\times K\) conditional mean of the realized covariance (RC) matrix of \(K\) factors, i.e., \(\mathrm{E}(RC_{t}^{c}|\mathcal{F}_{t-1}^{c})=M_{t}^{c}\), as: \[M_{t}^{c}=\mathrm{diag}(m_{t}^{c})^{1/2}P_{t}^{c}\,\mathrm{diag}(m_{t}^{c})^{ 1/2}, \tag{8}\] where \(m_{t}^{c}\) is a \(K\times 1\) vector of the conditional means of realized variances of factors on month \(t\) and \(P_{t}^{c}\) is the corresponding \(K\times K\) conditional mean of realized correlations, i.e., \(\mathrm{E}(RL_{t}^{c}|\mathcal{F}_{t-1}^{c})=P_{t}^{c}\). The dynamics of the realized variances and correlations, allowing for daily asymmetric effects, are specified as: \[m_{t}^{c}=w_{m}+A_{m}^{+}v_{t-1}^{c+}+A_{m}^{-}v_{t-1}^{c-}+B_{m}m_{t-1}^{c}, \tag{9}\] where \(v_{t}^{c+}\) and \(v_{t}^{c-}\) are the \(K\times 1\) vectors of the positive and negative realized semi-variances (Shephard and Sheppard (2010)) of factors, respectively, \(w_{m}\) is a \(K\times 1\) positive vector, and \(A_{m}^{+},A_{m}^{-}\), and \(B_{m}\) are the \(K\times K\) diagonal matrices of coefficients with positive diagonal entries below 1. 
Specifically, for \(i=1,\ldots,K\) and \(j=1,\ldots,m\), \(v_{i,t}^{c+}=\sum_{j=1}^{m}(r_{i,j,t}^{c+})^{2}\) and \(v_{i,t}^{c-}=\sum_{j=1}^{m}(r_{i,j,t}^{c-})^{2}\), where \(r_{i,j,t}^{c+}=r_{i,j,t}^{c}\times 1_{\{r_{i,j,t}^{c}>0\}}\) and \(r_{i,j,t}^{c-}=r_{i,j,t}^{c}\times 1_{\{r_{i,j,t}^{c}\leq 0\}}\) denote the positive and negative daily returns, respectively. Correspondingly, \[P_{t}^{c}=(1-\alpha_{P}-\beta_{P})\overline{P}+\alpha_{P}RL_{t-1}^{c}+\beta_{R}P_ {t-1}^{c}, \tag{10}\] where \(\alpha_{P}\) and \(\beta_{P}\) are non-negative scalar parameters, i.e., \(\beta_{P}=0\) if \(\alpha_{P}=0\) and \(\alpha_{P}+\beta_{P}<1\), with \(\text{E}(RL_{t}^{c})=\overline{P}\) set to the empirical counterpart. ### Model for Individual Asset Returns By assuming that the conditional distribution of individual asset returns depends on the factors but not vice versa (Hansen et al. (2014)), the standardized return of each asset is conditionally jointly distributed with 'degarched' factors, i.e., \[\begin{pmatrix}u_{t}^{c}\\ u_{i,t}\end{pmatrix}|\mathcal{F}_{t-1}^{c,i}\sim N\left(\begin{pmatrix}0_{K\times 1 }\\ 0_{1\times 1}\end{pmatrix},R_{i,t}^{c}\right), \tag{11}\] where the \((K+1)\times(K+1)\) joint conditional correlation matrix \(R_{i,t}^{c}\) is given by: \[R_{i,t}^{c}=\begin{pmatrix}R_{t}^{c}&\rho_{i,t}\\ (\rho_{i,t})^{\prime}&1\end{pmatrix}, \tag{12}\] where \(R_{t}^{c}\) and \(\rho_{i,t}\) denote the \(K\times K\) conditional correlation matrix of factors filtered from a marginal model and \(K\times 1\) vector of correlations between an individual asset and the factors on month \(t\), respectively. In accordance to the framework for a set of factors, the dynamics of the conditional and realized variance of an individual asset, allowing for corresponding asymmetric effects are specified as: \[h_{i,t}=c_{i,h}+a_{i,h}^{+}v_{i,t-1}1_{[r_{i,t-1}>0]}+a_{i,h}^{-}v_{i,t-1}1_{[ r_{i,t-1}\leq 0]}+b_{i,h}h_{i,t-1}, \tag{13}\] where \(h_{i,t}\) and \(v_{i,t}\) denote the conditional and realized variance of an asset \(i\) on month \(t\), respectively, and \(c_{i,h},a_{i,h}^{+},a_{i,h}^{-}\), and \(b_{i,h}\) are non-negative scalar coefficients; \[m_{i,t}=c_{i,m}+a_{i,m}^{+}v_{i,t-1}^{+}+a_{i,m}^{-}v_{i,t-1}^{-}+b_{i,m}m_{i,t-1}, \tag{14}\] where \(m_{i,t}\), \(v_{i,t}^{+}\), and \(v_{i,t}^{-}\) denote the conditional mean of the realized variance, positive and negative semi-variance of an asset \(i\) on month \(t\), respectively, and \(c_{i,m},a_{i,m}^{+},a_{i,m}^{-}\), and \(b_{i,m}\) are non-negative scalar coefficients. Finally, to model the vectors of correlations between the returns of an individual asset and the set of factors, we utilize the Fisher transformation, i.e., \(\mathbb{F}(\cdot)\), to map each element from a closed interval \((-1,1)\) into \(\mathbb{R}\) within the typical HEAVY-type recursions (Noureldin et al. 
(2012), Bauwens and Xu (2022)): \[\mathbb{F}(\rho_{i,t})=\phi_{i,R}+\alpha_{i,R}\mathbb{F}(rl_{i,t-1})+\beta_{i, R}\mathbb{F}(\rho_{i,t-1}), \tag{15}\] where \(\rho_{i,t}\) and \(rl_{i,t}\) denote the \(K\times 1\) vectors of conditional and realized correlations of an asset \(i\) with factors on month \(t\), respectively, and \(\phi_{i,R}\), \(\alpha_{i,R}\), and \(\beta_{i,R}\) are non-negative scalar parameters; \[\mathbb{F}(p_{i,t})=\phi_{i,P}+\alpha_{i,P}\mathbb{F}(rl_{i,t-1})+\beta_{i,P} \mathbb{F}(p_{i,t-1}), \tag{16}\] where \(p_{i,t}\) denotes a \(K\times 1\) vector of the conditional means of realized correlations of an asset \(i\) with factors on month \(t\), and \(\phi_{i,P}\), \(\alpha_{i,P}\), and \(\beta_{i,P}\) are non-negative scalar parameters. ### Idiosyncratic Dynamics Based on formulas (1)-(4), to fully specify the conditional covariance matrices of individual assets, we should define the dynamics of the residuals, i.e., \(\mathrm{E}(\varepsilon_{t}\varepsilon_{t}^{\prime}|\mathcal{F}_{t-1}^{c,i})\). In line with most of the literature, we treat the assumption of an exact factor model as strict. As such, for the underlying approximate factor model, we propose applying the nonlinear shrinkage method of Ledoit and Wolf (2017) to the sample covariance matrix of the residuals, which has been proved preferable with respect to both the linear shrinkage of Ledoit and Wolf (2004) (Ledoit and Wolf (2017)) and thresholding schemes (De Nard et al. (2021)).1 This methodology implies shifting the eigenvalues of the empirical covariance matrix via the out-of-sample optimization of the minimum variance loss function subject to a required return constraint (Engle and Colacito (2006)). Footnote 1: Alternatively, the dynamic \(\Sigma_{t}\) could be defined via the benchmark dynamic conditional correlation (DCC) model (Engle (2002)) for the cross-section of \(N\leq 100\) assets. Conversely, when the number of individual assets is large, the DCC-NL model introduced by Engle et al. (2019) might be adopted. In each case, the estimation of the additional \(3N+2\) parameters is required. Thus, to keep the model parsimony, the NL shrinkage is preferable. It follows directly: \[\hat{\beta}_{i,t}=(\mathrm{diag}(\hat{H}_{t}^{c})^{1/2})^{-1}(\hat{R}_{t}^{c}) ^{-1}\hat{\rho}_{i,t}(\hat{h}_{i,t})^{1/2} \tag{17}\] and \[\hat{\mathrm{Var}}(r_{t}|\mathcal{F}_{t-1}^{c,i})=\hat{B}_{t}\hat{\mathrm{Var }}(r_{t}^{c}|\mathcal{F}_{t-1}^{c})(\hat{B}_{t})^{\prime}=\hat{B}_{t}\hat{H}_{ t}^{c}(\hat{B}_{t})^{\prime}+\hat{\Sigma}_{\hat{\varepsilon}}, \tag{18}\] where matrices \(\hat{H}_{t}^{c}\) and \(\hat{R}_{t}^{c}\) are filtered from the core model, i.e., (5-10), whereas each conditional variance \(\hat{h}_{i,t}\) and the corresponding correlation vector \(\hat{\rho}_{i,t}\) are extracted from the individual factor model related to an asset \(i\), i.e., (13-16). The nonlinear shrinkage method (Ledoit and Wolf (2017)) delivers \(\hat{\Sigma}_{\hat{\varepsilon}}\). ## 3 Estimation The hierarchical structure of the introduced model suggests a convenient step-by-step estimation procedure independent of the cross-sectional dimension of the assets under consideration. As follows, we discuss the quasi-maximum likelihood (QML) estimation scheme and define the corresponding log-likelihood functions (LLF). 
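As a reference point for the estimation steps that follow, the filtering recursions (5)-(7) for the factor block can be sketched as below. This is a minimal illustration: the diagonal coefficient matrices are stored as vectors, the initial conditions are set to unconditional sample targets, and all names are illustrative rather than part of the model specification.

```python
import numpy as np

def filter_factor_covariances(r, rv, rl, w_h, a_pos, a_neg, b_h,
                              alpha_R, beta_R, R_bar, P_bar):
    """Filter the factor conditional variances (6) and correlations (7), and assemble (5).

    r  : (T, K) monthly factor returns
    rv : (T, K) monthly realized variances of the factors
    rl : (T, K, K) monthly realized correlation matrices of the factors
    w_h, a_pos, a_neg, b_h : (K,) vectors holding the diagonal coefficient matrices
    alpha_R, beta_R        : scalars; R_bar, P_bar : (K, K) unconditional targets
    """
    T, K = r.shape
    h = np.zeros((T, K))
    R = np.zeros((T, K, K))
    R_tilde = (1.0 - beta_R) * R_bar - alpha_R * P_bar       # intercept of (7)
    h[0], R[0] = rv.mean(axis=0), R_bar                      # initialization (implementation choice)
    for t in range(1, T):
        pos = (r[t - 1] > 0).astype(float)                   # indicator I_{t-1}^+
        neg = 1.0 - pos                                      # indicator I_{t-1}^-
        h[t] = w_h + a_pos * rv[t - 1] * pos + a_neg * rv[t - 1] * neg + b_h * h[t - 1]
        R[t] = R_tilde + alpha_R * rl[t - 1] + beta_R * R[t - 1]
    # H_t = diag(h_t)^{1/2} R_t diag(h_t)^{1/2}, eq. (5)
    H = np.sqrt(h)[:, :, None] * R * np.sqrt(h)[:, None, :]
    return h, R, H
```

In practice, these filtered quantities are re-evaluated inside the likelihood functions defined below for every trial parameter vector.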
Initially, to estimate the core model for a set of factors, we essentially follow the approach of Bauwens and Xu (2022), by partitioning the parameters of both conditional and realized covariances into the coefficients of the corresponding variance and correlation equations.2 In particular, let us define the two parameter sets \(\theta_{H}^{c}\) and \(\theta_{M}^{c}\) for the conditional and realized covariances of factors, respectively. Footnote 2: The parameter sets can be alternatively estimated without splitting by maximizing the corresponding full LLFs (see Bauwens and Xu (2022)). Given the hypothesis that the distribution of the 'degarched' monthly return vector is multivariate Gaussian (11), the first step consists of estimating the parameters of the conditional variances (6), i.e., \(\theta_{H1}^{c}\), and correlations (7), i.e., \(\theta_{H2}^{c}\), for the set of factors by maximizing the following QML functions: \[\begin{split} LLF_{H1}^{c}(\theta_{H1}^{c}|\mathcal{F}_{t-1}^{c} )&=-\frac{1}{2}\sum_{t=1}^{T}\left\{2\log\left|\text{diag}(h_{t}^{ c})^{1/2}\right|+u_{t}^{c^{\prime}}u_{t}^{c}\right\};\\ LLF_{H2}^{c}(\theta_{H2}^{c}|\hat{\theta}_{H1}^{c};\mathcal{F}_{t-1} ^{c})&=-\frac{1}{2}\sum_{t=1}^{T}\left\{\log|R_{t}^{c}|+\hat{u}_ {t}^{c^{\prime}}(R_{t}^{c})^{-1}\hat{u}_{t}^{c}\right\},\end{split} \tag{19}\] where \(\hat{u}_{t}^{c}=r_{t}^{c}\odot(\hat{h}_{t}^{c})^{-1/2}\), with \(\hat{h}_{t}^{c}\) defined via \(\hat{\theta}_{H1}^{c}\). Bauwens and Xu (2022) show that the estimated parameters for conditional correlations (7), i.e., \((\alpha_{R},\beta_{R})\), do not automatically guarantee the PD-ness of \(R_{t}^{c}\). As such, we proceed by checking the condition during the numerical maximization of \(LLF_{H2}^{c}\). To specify the dynamics of realized measures, we assume that the probability density function of RC matrices \(RC_{t}^{c}\), conditional on the filtration \(\mathcal{F}_{t-1}^{c}\), is Wishart, i.e., \[RC_{t}^{c}|\mathcal{F}_{t-1}^{c}\sim W_{K}(\nu,M_{t}^{c}(\theta_{M}^{c})/\nu), \tag{20}\] where \(W_{K}(\nu,M_{t}^{c}(\theta_{M})/\nu)\) denotes the \(K\)-dimensional central Wishart distribution with \(\nu\geq K\) degrees of freedom and PD \(K\times K\) scale matrix \(M_{t}^{c}(\theta_{M}^{c})/\nu\), implying \(E(RC_{t}^{c}|\mathcal{F}_{t-1}^{c})=M_{t}^{c}(\theta_{M}^{c})\). Correspondingly, we split \(\theta_{M}^{c}\) into the parameters for realized variances (9), i.e., \(\theta_{M1}^{c}\), and realized correlations (10), i.e., \(\theta^{c}_{M2}\). The second-step objective functions for \(T\) observations are given by: \[LLF^{c}_{M1}(\theta^{c}_{M1}|\mathcal{F}^{c}_{t-1})= -\frac{\nu}{2}\sum_{t=1}^{T}\left\{2\log|L^{c}_{t}|+\text{trace} \left[(L^{c}_{t})^{-1}RC^{c}_{t}(L^{c}_{t})^{-1}\right]\right\};\] \[LLF^{c}_{M2}(\theta^{c}_{M2}|\hat{\theta}^{c}_{M1};\mathcal{F}^{c }_{t-1})= -\frac{\nu}{2}\sum_{t=1}^{T}\left\{\log|P^{c}_{t}|+\text{trace} \left[((P^{c}_{t})^{-1}-\text{I}_{K})(\hat{L}^{c}_{t})^{-1}RC^{c}_{t}(\hat{L}^ {c}_{t})^{-1}\right]\right\}, \tag{21}\] where \(\text{I}_{K}\) denotes the identity matrix of order \(K\), \(L^{c}_{t}=\text{diag}(m^{c}_{t})^{1/2}\), with \(\hat{L}^{c}_{t}\) defined via \(\hat{\theta}^{c}_{M1}\), and the parameter \(\nu\) set equal to \(1\).3 Footnote 3: The score for \(\theta^{c}_{M}\) is proportional to \(\nu\). Next, we consider the likelihood contributions for the conditional model of each individual asset return. 
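Before doing so, the two factor-level objectives in (19) can be sketched as follows; the function and variable names are illustrative, and the positive-definiteness check mirrors the safeguard applied during the numerical maximization of \(LLF_{H2}^{c}\).

```python
import numpy as np

def llf_h1(r, h):
    """First step of (19): Gaussian variance part, with u_t = r_t / sqrt(h_t)."""
    u = r / np.sqrt(h)
    return -0.5 * np.sum(np.log(h) + u ** 2)

def llf_h2(u_hat, R):
    """Second step of (19): Gaussian correlation part, given filtered u_hat and R_t."""
    total = 0.0
    for t in range(u_hat.shape[0]):
        sign, logdet = np.linalg.slogdet(R[t])
        if sign <= 0:                   # R_t not positive definite: reject this parameter draw
            return -np.inf
        total += logdet + u_hat[t] @ np.linalg.solve(R[t], u_hat[t])
    return -0.5 * total
```

Both objectives would be passed (with a sign flip) to a generic numerical optimizer such as scipy.optimize.minimize, subject to the non-negativity and stationarity constraints stated above.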
From assumptions (11) and (12), the conditional distribution of the standardized monthly asset return follows as: \[u_{i,t}|u^{c}_{t}\sim N\left((\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}u^{c}_{t},1-(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}\rho_{i,t}\right). \tag{22}\] As such, the underlying LLF with regard to the conditional covariances of an asset \(i\), \[LLF^{c,i}_{H_{i}}(\theta_{H_{i}}|\mathcal{F}^{c,i}_{t-1})=-\frac{1}{2}\sum_{t=1}^{T}\left\{\log\left(h_{i,t}\left(1-(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}\rho_{i,t}\right)\right)+\frac{(u_{i,t}-(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}u^{c}_{t})^{2}}{(1-(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}\rho_{i,t})}\right\}, \tag{23}\] follows directly from: \[\text{Cov}(r_{i,t},r^{c}_{t}|\mathcal{F}^{c,i}_{t-1})=\text{diag}(h^{c}_{t})^{1/2}\rho_{i,t}(h_{i,t})^{1/2};\] \[\text{Var}(r_{i,t}|r^{c}_{t},\mathcal{F}^{c,i}_{t-1})=h_{i,t}-\left(\text{diag}(h^{c}_{t})^{1/2}\rho_{i,t}(h_{i,t})^{1/2}\right)^{\prime}\left(\text{diag}(h^{c}_{t})^{1/2}R^{c}_{t}\,\text{diag}(h^{c}_{t})^{1/2}\right)^{-1}\left(\text{diag}(h^{c}_{t})^{1/2}\rho_{i,t}(h_{i,t})^{1/2}\right)=h_{i,t}\left(1-(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}\rho_{i,t}\right);\] \[\text{E}(r_{i,t}|r^{c}_{t},\mathcal{F}^{c,i}_{t-1})=\mu_{i}+\left(\text{diag}(h^{c}_{t})^{1/2}\rho_{i,t}(h_{i,t})^{1/2}\right)^{\prime}\left(\text{diag}(h^{c}_{t})^{1/2}R^{c}_{t}\,\text{diag}(h^{c}_{t})^{1/2}\right)^{-1}(r^{c}_{t}-\mu^{c})=\mu_{i}+(h_{i,t})^{1/2}(\rho_{i,t})^{\prime}(R^{c}_{t})^{-1}u^{c}_{t}. \tag{24}\] To ensure the positive definiteness of the joint conditional correlation matrix \(R_{i,t}^{c}\) (12), we must ensure \((\rho_{i,t})^{\prime}(R_{t}^{c})^{-1}\rho_{i,t}<1\) for each \(t=1,...,T\) during the estimation (Archakov et al. (2020)). In an analogous fashion to the conditional correlations (12), we use a partitioning of the realized measures so that, e.g., the \((K+1)\times(K+1)\) joint conditional mean of the realized correlation matrix, i.e., \(P_{i,t}^{c}\), is given by: \[P_{i,t}^{c}=\begin{pmatrix}P_{t}^{c}&p_{i,t}\\ (p_{i,t})^{\prime}&1\end{pmatrix}, \tag{25}\] where \(P_{t}^{c}\) and \(p_{i,t}\) denote the \(K\times K\) conditional mean of the realized correlation matrix of factors filtered from a marginal model and the \(K\times 1\) vector of the conditional expectations of correlations between an individual asset and the factors on month \(t\), respectively. In this regard, the QML function reads as: \[LLF_{M_{i}}^{c,i}(\theta_{M_{i}}|\mathcal{F}_{t-1}^{c,i})=-\frac{\nu}{2}\sum_{t=1}^{T}\left\{\log\left(m_{i,t}(1-p_{i|c,t})\right)+\frac{v_{i,t}-(rc_{i,t})^{\prime}(RC_{t}^{c})^{-1}rc_{i,t}}{m_{i,t}(1-p_{i|c,t})}\right\}, \tag{26}\] where \(m_{i,t}\) denotes the conditional mean of the realized variance \(v_{i,t}\) of an asset \(i\), \(rc_{i,t}\) is a \(K\times 1\) vector of the realized covariances between an asset \(i\) and the \(K\) factors, and \(p_{i|c,t}=(p_{i,t})^{\prime}(P_{t}^{c})^{-1}p_{i,t}\). Analogously, we set \(\nu\) equal to 1. In order to estimate the model for the cross-section of \(N\) assets, we initially estimate the marginal model for the set of factors, followed by separate estimations of the individual models for \(i=1,...,N\), conditional on the variables obtained via the core model. Finally, we apply the nonlinear shrinkage of Ledoit and Wolf (2017) to obtain the conditional covariances of the derived residuals, i.e., \(\hat{\varepsilon}_{t}=r_{t}-\hat{\alpha}_{t}-\hat{B}_{t}r_{t}^{c}\). Considering the estimation of the core model, the total number of parameters with respect to \(K\) factors is \(8K+4\).
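For completeness, a compact sketch of evaluating the asset-level Gaussian objective (23), given the quantities filtered from the core model, is provided below; the constraint \((\rho_{i,t})^{\prime}(R_{t}^{c})^{-1}\rho_{i,t}<1\) noted above would additionally be enforced during the optimization.

```python
import numpy as np

def llf_asset(u_i, u_c, h_i, rho_i, R_c):
    """Asset-level Gaussian objective (23) for one asset i.

    u_i   : (T,)      standardized monthly returns of asset i
    u_c   : (T, K)    standardized ('degarched') factor returns
    h_i   : (T,)      conditional variances of asset i, eq. (13)
    rho_i : (T, K)    conditional asset-factor correlations, eq. (15)
    R_c   : (T, K, K) conditional factor correlation matrices, eq. (7)
    """
    total = 0.0
    for t in range(u_i.shape[0]):
        Rinv_rho = np.linalg.solve(R_c[t], rho_i[t])
        cond_var = 1.0 - rho_i[t] @ Rinv_rho      # must stay in (0, 1) for a PD joint correlation matrix
        cond_mean = Rinv_rho @ u_c[t]             # E(u_{i,t} | u_t^c), eq. (22)
        total += np.log(h_i[t] * cond_var) + (u_i[t] - cond_mean) ** 2 / cond_var
    return -0.5 * total
```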
Given the assumption of diagonal matrices of coefficients for the variance equations, we split the estimation of \(8K\) parameters for the variances into \(K\) univariate HEAVY models (Shephard and Sheppard (2010)). Conversely, the model for each individual asset requires the specification of 14 additional parameters. As follows, a total of \(8K+4+14N\) coefficients is generated for the cross-sectional dimension of \(N\) assets.4 Importantly, the maximum likelihood (ML) estimation discussed above is independent of both \(K\) and \(N\). Footnote 4: As previosuly noted, the dynamic \(\Sigma_{t}\) would imply the estimation of the additional \(3N+2\) parameters. ## 4 Forecasting Forecasting the covariance matrices of asset returns is paramount in derivative pricing, asset allocation, and risk management decisions. In this regard, in our experiments, we focus on the 1-step-ahead predictions of the conditional covariances of monthly returns for the selected cross-section of \(N\) individual assets, i.e., \(\text{Var}(r_{t+1}|\mathcal{F}_{t}^{c,i})\), directly computable via: \[\hat{\text{Var}}(r_{t+1}|\mathcal{F}_{t}^{c,i})=\hat{B}_{t+1}\hat{\text{Var}}( r_{t+1}^{c}|\mathcal{F}_{t}^{c})(\hat{B}_{t+1})^{\prime}+\hat{\Sigma}_{\hat{ \varepsilon}}=\hat{B}_{t+1}\hat{H}_{t+1}^{c}(\hat{B}_{t+1})^{\prime}+\hat{ \Sigma}_{\hat{\varepsilon}}, \tag{27}\] where \(\hat{H}_{t+1}^{c}\) is a \(K\times K\) predicted conditional covariance matrix of factors for the month \(t+1\) computed via (5)-(10), \(\hat{B}_{t+1}\) is a \(N\times K\) matrix of predicted asset betas, and \(\hat{\Sigma}_{\hat{\varepsilon}}\) is a \(N\times N\) conditional covariance matrix of the forecasted residuals. In particular, for each asset \(i\) and time \(t+1\): \[\hat{\beta}_{i,t+1}=(\text{diag}(\hat{H}_{t+1}^{c})^{1/2})^{-1}(\hat{R}_{t+1} ^{c})^{-1}\hat{\rho}_{i,t+1}(\hat{h}_{i,t+1})^{1/2}, \tag{28}\] where \(\text{diag}(\hat{H}_{t+1}^{c})\) is a \(K\times K\) diagonal matrix composed of the conditional variances of factors for the month \(t+1\), \(\hat{R}_{t+1}^{c}\) is the corresponding \(K\times K\) conditional correlation matrix, \(\hat{h}_{i,t+1}\) denotes the predicted conditional variance of an asset \(i\), and \(\hat{\rho}_{i,t+1}\) a \(K\times 1\) vector of the forecasted conditional correlations between an asset \(i\) and the factors. ## 5 Empirical Application ### Data Construction and Description For the subsequent empirical analyses, we use monthly returns on factors and assets, and construct realized measures of variances and covariances using daily returns observed within each month. In particular, we compute monthly covariance matrices with the corresponding realized analogues with respect to the three Fama-French (FF) factors (Fama and French (1993)), i.e., market risk, size, and value, together with the momentum factor (Carhart (1997)), based on the data obtained from a Kenneth French library. Our model is tested for a selected cross-section of individual assets, consisting of all the stocks that belong to the S&P500 Index during the entire sample period from January 1962 until January 2023, i.e., \(N=20\) and \(T=732\). The stock names and tickers are: American Electric Power Company, Inc. (AEP), The Boeing Company (BA), Caterpillar Inc. (CAT), Chevron Corporation (CVX), DTE Energy Company (DTE), Consolidated Edison, Inc. (ED), General Dynamics Corporation (GD), General Electric Company (GE), Honeywell International Inc. 
(HON), International Business Machines Corporation (IBM), International Paper Company (IP), The Coca-Cola Company (KO), The Kroger Co. (KR), 3M Company (MMM), Altria Group, Inc. (MO), Merck & Co., Inc. (MRK), Marathon Oil Corporation (MRO), Motorola Solutions, Inc. (MSI), The Procter & Gamble Company (PG), and Exxon Mobil Corporation (XOM). We build the corresponding time series of the monthly and close-to-close daily returns for each asset based on the prices adjusted for dividends and splits available on Yahoo Finance. As a result, the empirical application at the monthly frequency with realized measures built upon daily data allows for estimating and testing the models for a long sample period.4 Table 1 reports, for each factor, the time series means and standard deviations of the realized variances (annualized in percentage, i.e., multiplied by 1200), and of their 'positive' and 'negative' components used to specify the asymmetric dynamics. The last row indicates the average of the time series means and standard deviations of realized correlations between the factors. The same statistics for the individual assets are shown in the Appendix A, i.e., Table A1. Considering the statistics reported in Table 1, the market and momentum factors appear more volatile compared to the size and value factors. Except for the market factor, each average realized variance is only a fraction of the corresponding average squared close-to-close return. The average negative semi-variance (\(N\)) of each factor, except HML, is larger than the average positive component (\(P\)). The same applies for the portions of the variances with respect to the signs of monthly returns, i.e., \(GJR_{P}\) and \(GJR_{N}\). Besides MOM, all the factors have the negative average realized correlation with respect to the others. The analogous summary measures for the individual assets, i.e., Table A1, generally suggest that the average realized variance exceeds the corresponding average squared close-to-close return. It might not be suprising, given the realized measures obtained via daily \begin{table} \begin{tabular}{l r r r r} \hline Factor & MKT & SMB & HML & MOM \\ \hline \(r_{cc}^{2}\) & 2.51 (5.07) & 1.09 (2.78) & 1.05 (2.21) & 2.25 (9.37) \\ \(RV\) & 2.65 (5.75) & 0.73 (1.34) & 0.83 (1.66) & 1.49 (3.40) \\ \hline \(P\) & 1.25 (2.30) & 0.34 (0.47) & 0.44 (0.93) & 0.65 (1.19) \\ \(N\) & 1.40 (3.73) & 0.39 (0.98) & 0.39 (0.82) & 0.84 (2.40) \\ \hline \(GJR_{P}\) & 1.12 (2.15) & 0.31 (0.49) & 0.46 (1.30) & 0.66 (1.60) \\ \(GJR_{N}\) & 1.53 (5.65) & 0.42 (1.35) & 0.37 (1.19) & 0.83 (3.17) \\ \hline \(RL\) & –0.10 (0.50) & –0.03 (0.42) & –0.14 (0.44) & 0.01 (0.50) \\ \hline \end{tabular} \(r_{cc}^{2}\): squared close-to-close monthly return; \(RV\): realized variance; \(P\): positive semi-variance; \(N\): negative semi-variance; \(GJR_{P}\): \(RV\) if monthly return is positive, 0 if negative; \(GJR_{N}\): \(RV\) if monthly return is negative, 0 if positive; \(RL\): realized correlation, the average of the 3 time series means and sd-s of realized correlations with the other 3 factors. \end{table} Table 1: Time series means and standard deviations (between parentheses) of realized variances, their positive and negative decompositions, squared monthly returns, and realized correlations for the 3 FF and MOM factors returns that account for the overnight information. In contrast to the set of factors, the average positive semi-variance (\(P\)) is larger than the negative component (\(N\)). 
Conversely, the portions of the variances with respect to the negative monthly returns, i.e., \(GJR_{N}\), exceed the \(GJR_{P}\). Ultimately, the average realized correlations of all the assets with the factors lie in a narrow interval, ranging from 0.28 to 0.36, with rather similar standard deviations. Figures 1-2 show the time series of the realized variances of the market and momentum factors, and the components of their semi-variance decompositions. They illustrate the occurrence of a few clustered extreme values, consistent with periods of financial turbulence. In both cases, the extreme volatility is largely attributed to the negative semi-variance due to prevailing negative daily returns during the turmoils. Figure 3 illustrates the time series of the realized correlations between BA and each factor. The patterns of correlations with the three FF factors are comparable, with BA being most strongly correlated with the market factor. On the other hand, the correlations with MOM are more dispersed and volatile. Figure 1: Annualized realized variances of the market factor and the terms of their decomposition using the signed daily close-to-close returns Figure 2: Annualized realized variances of the momentum factor and the terms of their decomposition using the signed daily close-to-close returns Figure 3: Realized correlations of BA with the three FF factors and MOM ### In-Sample Fit To evaluate the in-sample fit of our benchmark model, termed the 4-Factor High-Dimensional DCC-HEAVY ("4F-HD DCC-HEAVY") model with \(K=4\), i.e., the 3 FF and momentum factors, we additionally consider the restricted versions with respect to the set of factors and the asymmetric effects. In particular, we estimate the variants by assuming that equity returns are explained either via the 3 FF factors ("FF-HD DCC-HEAVY") or via the market factor only ("M-HD DCC-HEAVY"). In addition, we examine whether allowing for asymmetries in the covariance dynamics improves the fit, by specifying the modelling equations of the benchmark model without accounting for the signs of the underlying returns ("sym-HD DCC-HEAVY"). The corresponding variance equations for "sym-HD DCC-HEAVY" are given by: \[h_{t}^{c}=w_{h}+A_{h}v_{t-1}^{c}+B_{h}h_{t-1}^{c}, \tag{29}\] where \(v_{t}^{c}\) is a \(K\times 1\) vector of the realized variances of factors on month \(t\), \(w_{h}\) is a \(K\times 1\) positive vector, and \(A_{h}\) and \(B_{h}\) are the \(K\times K\) diagonal matrices of coefficients; \[m_{t}^{c}=w_{m}+A_{m}v_{t-1}^{c}+B_{m}m_{t-1}^{c}, \tag{30}\] where \(m_{t}^{c}\) is a \(K\times 1\) vector of the conditional means of realized variances of factors on month \(t\), \(w_{m}\) is a \(K\times 1\) positive vector, and \(A_{m}\) and \(B_{m}\) are the \(K\times K\) diagonal matrices of coefficients; \[h_{i,t}=c_{i,h}+a_{i,h}v_{i,t-1}+b_{i,h}h_{i,t-1}, \tag{31}\] where \(h_{i,t}\) and \(v_{i,t}\) denote the conditional and realized variance of an asset \(i\) on month \(t\), respectively, and \(c_{i,h},a_{i,h}\), and \(b_{i,h}\) are non-negative scalar coefficients; \[m_{i,t}=c_{i,m}+a_{i,m}v_{i,t-1}+b_{i,m}m_{i,t-1}, \tag{32}\] where \(v_{i,t}\) and \(m_{i,t}\) denote the realized variance and the corresponding conditional mean for an asset \(i\) on month \(t\), respectively, and \(c_{i,m},a_{i,m}\), and \(b_{i,m}\) are non-negative scalar coefficients.
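All of the above specifications take the monthly realized measures as inputs; a minimal sketch of how these can be built from the daily returns observed within one month, following the definitions of Section 2 and Shephard and Sheppard (2010), is given below (array shapes are chosen purely for illustration).

```python
import numpy as np

def realized_measures(daily_returns):
    """Monthly realized measures from the (m, K) daily close-to-close returns of K series."""
    r = np.asarray(daily_returns)
    rv = np.sum(r ** 2, axis=0)                              # realized variance
    rv_pos = np.sum(np.where(r > 0, r, 0.0) ** 2, axis=0)    # positive semi-variance
    rv_neg = np.sum(np.where(r <= 0, r, 0.0) ** 2, axis=0)   # negative semi-variance
    rc = r.T @ r                                             # realized covariance matrix
    d = 1.0 / np.sqrt(np.diag(rc))
    rl = d[:, None] * rc * d[None, :]                        # realized correlation matrix
    return rv, rv_pos, rv_neg, rc, rl
```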
Correspondingly, to capture potential additional information provided by the 'HF' daily data, we estimate cDCC model built exclusively upon monthly data, which has been widely applied to capture the dynamics of time-varying betas (e.g., Engle and Kelly (2012), Bali et al. (2017)). In cDCC model, the variance of the market and individual asset is modelled as the benchmark univariate GARCH (Bollerslev (1986)), while the dynamics of the conditional correlations are specified as: \[\begin{split} R_{t}=\operatorname{diag}(Q_{t})^{-1/2}Q_{t} \operatorname{diag}(Q_{t})^{-1/2},\\ Q_{t}=(1-\alpha-\beta)S+\alpha[\operatorname{diag}(Q_{t-1})^{1/2 }u_{t-1}u_{t-1}^{\prime}\operatorname{diag}(Q_{t-1})^{1/2}]+\beta Q_{t-1}, \end{split} \tag{33}\] where \(R_{t}\) denotes the conditional correlation matrix on month \(t\), \(S\) is a symmetric matrix with unit diagonal elements, \(u_{t}=r_{t}\odot(h_{t})^{-1/2}\), with the vector of returns of the market and individual asset \(r_{t}=(r_{m,t},r_{i,t})^{\prime}\), coupled with conditional variances \(h_{t}=(h_{m,t},h_{i,t})^{\prime}\), and \(\alpha\) and \(\beta\) are scalar coefficients. Ultimately, we consider the existing hierarchical factor models including the benchmark Realized Beta GARCH model (Hansen et al. (2014)), coupled with the extended version introduced by Archakov et al. (2020) ("Multivariate Realized Beta GARCH"). We estimate each model for a cross-section of \(N\) selected assets.5 The in-sample fit of the seven models has been assessed using the three criteria, i.e., the value of the maximized LLF, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). Given all the models assume that monthly returns are conditionally normal, the LLFs evaluated using only monthly data are directly comparable, i.e., the highest value indicates a superior in-sample fit. Conversely, the lower AIC/BIC values are better. For all the models, we present the average value of each criterion with respect to the \(N\) assets.6 Table 2 collects the three in-sample fit criteria for each model and comparison. In view of the results obtained, several conclusions can be drawn: Footnote 6: The full set of results is available upon request. 1. The "**4F-HD DCC-HEAVY**" model has a larger LLF value and correspondingly smaller AIC and BIC than the symmetric version "**sym-HD DCC-HEAVY**" with respect to both core and conditional models for individual assets. As follows, allowing for the asymmetric dynamics in the covariances of factors, as well as an individual asset vs. the set of factors, based on the signs of underlying daily/monthly returns, improves the in-sample fit of the model. 2. Among the market factor-based models, considering the total LLF values and both information criteria evaluated at the monthly data, the best fitting model is "**M-HD DCC-HEAVY**". The relative superiority of our model suggests the benefits of adopting the higher-frequency data to model conditional covariances of lower-frequency returns as opposed to **cDCC** model. Furthermore, specifying the dynamics of the RC is important as "**M-HD DCC-HEAVY**" readily outperforms **Realized Beta GARCH** model of Hansen et al. (2014). The latter provides for a better fit with respect to each criterion compared to the low-frequency data-based **cDCC** model. 3. The "**FF-HD DCC-HEAVY**" model outperforms the scalar version of the competing "**Multivariate Realized Beta GARCH**" of Archakov et al. 
(2020) in terms of a possible comparison of the conditional LLF for individual assets vs. factors evaluated at the monthly data, thus confirming the advantages of explicitly modelling the dynamics of realized measures. The estimates of the parameters of the core model for each HD DCC-HEAVY version are reported in Table 3. The results demonstrate that the coefficients in columns III-V noticeably differ for the three models, implying distinct dynamics of the variances of factors. In each case, the average estimate of the \(b_{h}\) parameter is much smaller compared to standard GARCH models, while the average estimates of the \(a_{h}^{+}\) and \(a_{h}^{-}\) parameters are much larger compared to conventional ARCH terms. In line with the findings of Shephard and Sheppard (2010), Noureldin et al. (2012), and Bauwens and Xu (2022), these results suggest that the dynamics of conditional variances are better captured by realized variances than by squared returns. Columns VI-VII present the parameter estimates of the correlations, implying rather responsive series.7 \begin{table} \begin{tabular}{l c c} \hline & **4F-HD DCC-HEAVY** & **sym-HD DCC-HEAVY** \\ \hline LLF\({}^{c}\) & **-17912.52** & -18469.61 \\ AIC & **49.040** & 50.551 \\ BIC & **49.266** & 50.752 \\ LLF\({}^{c,i}\) & **-3509.52** & -3548.47 \\ AIC & **9.627** & 9.728 \\ BIC & **9.715** & 9.803 \\ LLF\({}^{c}\) + LLF\({}^{c,i}\) & **-21422.04** & -22018.08 \\ AIC & **58.667** & 60.279 \\ BIC & **58.981** & 60.555 \\ \hline & **M-HD DCC-HEAVY** & **Real. Beta GARCH** & **cDCC** \\ \hline LLF\({}^{c}_{H}\) + LLF\({}^{c,i}_{H_{i}}\) & **-3672.63** & -3991.02 & -4111.86 \\ AIC & **10.065** & 10.943 & 11.256 \\ BIC & **10.134** & 11.031 & 11.307 \\ \hline & **FF-HD DCC-HEAVY** & **Mult. Real. Beta GARCH** & \\ \hline LLF\({}^{c,i}_{H_{i}}\) & **-1678.84** & -1962.60 & \\ AIC & **4.606** & 5.384 & \\ BIC & **4.650** & 5.434 & \\ \hline LLF\({}^{c}\): total LLF for the core model; LLF\({}^{c,i}\): average (across \(N\) assets) total LLF for the conditional model for individual assets; & \\ LLF\({}^{c}_{H_{i}}\) + LLF\({}^{c,i}_{H_{i}}\): average (across \(N\) assets) total LLF evaluated at the monthly data; & \\ LLF\({}^{c,i}_{H_{i}}\): average (across \(N\) assets) LLF for the conditional model for individual assets evaluated at the monthly data; & \\ For each maximum value of the log-likelihood function (LLF), we report the corresponding Akaike (AIC) and Bayesian information criteria (BIC). The values in bold correspond to the best model of each row. The models are estimated using the dataset of 732 observations described in Section 5.1. & \\ \hline \end{tabular} \end{table} Table 2: Maximum log-likelihood function (LLF), AIC and BIC values of estimated models Footnote 1: The \(\alpha_{ In our empirical analyses, we estimate the conditional models for the cross-section of \(N=20\) individual assets (see Section 5.1). The corresponding estimation results for the "FF-HD DCC-HEAVY" model are reported in Table 4. Again, the effects of the lagged realized variances on the current conditional variances are high, on average. Thus, we confirm the realized measures as more informative about volatility than the squared returns. Correspondingly, the average \(a_{i,h}^{-}\) exceeds \(a_{i,h}^{+}\), indicating the presence of a leverage effect. Ultimately, the coefficients associated with the dynamics of the realized variances of individual stocks are relatively dispersed, implying distinct dynamics of the corresponding series. 
Figure 4: Annualized realized and fitted conditional variances of the market and HML factors, and the corresponding correlations

For HD DCC-HEAVY models, we implicitly assume that the correlations across the selected cross-section of asset returns are explained via either a single, three, or four sources of systematic risk, i.e., market, size, value, and momentum. In this regard, the vector of model-implied betas for each asset given by (2) is obtained by accounting for the information from higher- and lower-frequency data. To present the rich dynamics of estimated betas, we graphically illustrate the "4F-HD DCC-HEAVY" fitted measures for IP in Figure 5. The average market beta is close to 1, implying that IP closely tracks the S&P500 dynamics. Conversely, the means of the value and momentum factors lie in the interval 0.5-0.8, while the average SMB beta is around -0.1. The exposure to the size risk factor varies the most. All the betas hit a range of extreme values during the financial crisis episode. The corresponding summary statistics are given in Table 5.

\begin{table} \begin{tabular}{l c c c c c c c} \hline Coeff. & \(c_{i,h}\) & \(a_{i,h}^{+}\) & \(a_{i,h}^{-}\) & \(b_{i,h}\) & \(\phi_{i,R}\) & \(\alpha_{i,R}\) & \(\beta_{i,R}\) \\ \hline mean & 0.000 & 0.159 & 0.222 & 0.755 & 0.017 & 0.022 & 0.944 \\ min & 0.000 & 0.026 & 0.082 & 0.545 & 0.001 & 0.001 & 0.631 \\ max & 0.000 & 0.226 & 0.248 & 0.925 & 0.000 & 0.155 & 0.941 \\ \hline \hline \end{tabular} Presented are the estimates of the parameters that appear in the “FF-HD DCC-HEAVY” equations of the conditional model for an individual asset, i.e., conditional variances and conditional correlation vectors (upper panel), and the corresponding realized analogues (lower panel). Estimation period is January 1962 - December 2022, i.e., \(T=732\). \end{table} Table 4: “FF-HD DCC-HEAVY” parameter estimates of the conditional models for individual assets

### Out-of-Sample Forecasting We compute the out-of-sample forecasts discussed in Section 4 with regard to all the asymmetric hierarchical-type factor models, which fit the data better compared to the cDCC model, i.e., "4F-HD DCC-HEAVY", "FF-HD DCC-HEAVY", "M-HD DCC-HEAVY", Realized Beta GARCH, and "Multivariate Realized Beta GARCH". Starting from the fitting period from January 1962 to December 2016 (\(T_{e}=660\)), we generate the forecasts by re-estimating the models every year on a rolling window with \(T_{e}\) monthly observations and then producing a sequence of 1-step-ahead predictions based on the updated parameter estimates. We consider two out-of-sample forecasting periods.8 The first, characterized by the relatively low volatility of returns, includes the years 2017-2019. The second period lasts until the end of 2022, with the volatility at a relatively high level triggered by the COVID pandemic. Footnote 8: The results for a full out-of-sample period are available in Appendix B. #### 5.3.1 Statistical Accuracy In order to assess the statistical accuracy of all models, we adopt the two loss functions that produce a consistent ranking (Patton (2011), Laurent et al. (2013)), i.e., the Euclidean distance (ED) and squared Frobenius norm (FN).
The first is based on the \(\operatorname{vech}(\cdot)\)9 transformation of the forecast error matrix, where the prediction errors on variances and covariances are equally weighted: Footnote 9: The operator that stacks the lower triangular part of a symmetric \(N\times N\) matrix argument into a \(N(N+1)/2\times 1\) vector. \[ED_{t}(C_{t+1},\hat{H}_{t+1})=\operatorname{vech}(C_{t+1}-\hat{H}_{t+1})^{ \prime}\operatorname{I}_{N*}\operatorname{vech}(C_{t+1}-\hat{H}_{t+1}), \tag{34}\] where \(\hat{H}_{t+1}\) is the conditional forecast of the covariances of \(r_{t+1}\), \(C_{t+1}\) is a proxy for the unobserved covariance matrix at time \(t+1\), and \(\operatorname{I}_{N*}\) is the identity matrix of order \(N(N+1)/2\). Indeed, the natural proxy for latent covariances is given by \(r_{t}r_{t}^{\prime}\), although others, such as the RC, can be used.10 Footnote 10: The adoption of \(r_{t}r_{t}^{\prime}\) appears more suitable when forecasting the covariances over the entire month. The second loss function is the matrix equivalent of the MSE loss function, where the weights on the covariance forecast errors are doubled compared to the ones on variances: \[FN_{t}(C_{t+1},\hat{H}_{t+1})=\text{trace}[(C_{t+1}-\hat{H}_{t+1})^{\prime}(C_{ t+1}-\hat{H}_{t+1})]=\sum_{i,j}(c_{ij,t+1}-\hat{h}_{ij,t+1})^{2}. \tag{35}\] For assessing the significance of differences in the ED and FN losses across the five models, we rely on the model confidence set (MCS) approach of Hansen et al. (2011). The MCS identifies the model or subset of models with the best forecasting performance, given the pre-specified confidence level. It is computed at the 10% significance level using a block bootstrap (Hansen et al. (2003)) with 10,000 replications and the varying block length to verify the robustness of the results. Table 6 reports the model confidence sets, at the 90% confidence level, using the ED and FN loss functions. The hierarchical models of Hansen et al. (2014) and Archakov et al. (2020) are always excluded from the reported model confidence sets. The "FF-HD DCC-HEAVY" significantly outperforms all the other models during financial turbulence, while during calm times the MCS also incorporates the "M-HD DCC-HEAVY". As follows, when all the hierarchical factor models are compared in statistical terms, the new HD DCC-HEAVY models are superior compared to the models built upon the Realized GARCH framework in all cases. Considering the full out-of-sample period, only the "FF-HD DCC-HEAVY" model enters the MCS in terms of both ED and FN losses (Appendix B, Table B1). #### 5.3.2 Economic Performance In order to perform the economic evaluation of the forecasting performance we rely on the global minimum variance portfolio (GMVP) optimization (e.g., Engle and Kelly (2012), Bauwens and Xu (2022)) since it does not require the estimation of expected returns, providing an essentially clean framework for assessing the merits of distinct covariance forecasting models. Given a covariance matrix forecast \(\hat{H}_{t+1}\), the portfolio weights \(\hat{\omega}_{t+1}\) are obtained by solving the minimization problem: \[\min_{\omega_{t+1}}\omega_{t+1}^{\prime}\hat{H}_{t+1}\omega_{t+1}\quad\text{ s.\,t.}\quad\omega_{t+1}^{\prime}\mathbf{1}=1, \tag{36}\] where \(\mathbf{1}\) is a \(\mathit{N}\times 1\) vector of ones. It follows readily that the optimal GMVP weights are given by: \[\hat{\omega}_{t+1}=\frac{\hat{H}_{t+1}^{-1}\mathbf{1}}{\mathbf{1}^{\prime} \hat{H}_{t+1}^{-1}\mathbf{1}}. 
\tag{37}\] \begin{table} \begin{tabular}{l|c c|c c} \hline Model & ED & MCS 2017-2019 & ED & MCS 2020-2022 \\ \hline **4F-HD DCC-HEAVY** & 0.080 & 0.006 & 0.828 & 0.002 \\ **FF-HD DCC-HEAVY** & **0.075** & **1.000** & **0.796** & **1.000** \\ **M-HD DCC-HEAVY** & 0.075 & **0.885** & 0.879 & 0.002 \\ **Realized Beta GARCH** & 0.078 & 0.091 & 0.951 & 0.002 \\ **Multivariate Realized Beta GARCH** & 0.093 & 0.000 & 0.954 & 0.002 \\ \hline Model & FN & MCS 2017-2019 & FN & MCS 2020-2022 \\ \hline **4F-HD DCC-HEAVY** & 0.138 & 0.002 & 1.340 & 0.002 \\ **FF-HD DCC-HEAVY** & 0.132 & **0.320** & **1.276** & **1.000** \\ **M-HD DCC-HEAVY** & **0.129** & **1.000** & 1.399 & 0.002 \\ **Realized Beta GARCH** & 0.135 & 0.008 & 1.514 & 0.002 \\ **Multivariate Realized Beta GARCH** & 0.164 & 0.000 & 1.527 & 0.002 \\ \hline \hline \end{tabular} * ED/FN’ columns: the average annualized value of ED/FN losses over the corresponding forecast period; bold values identify the minimum loss over the five models. * MCS 2017-2019’ column: \(p\)-values of the MCS tests over the out-of-sample period including the years 2017-2019; bold values identify the models included in the MCS at the 90% confidence level (i.e., \(p\)-values larger than 0.10). * MCS 2020-2022’ column: the analogous results for the period 2020-2022. \end{table} Table 6: Model confidence sets at 90% level of hierarchical factor models, with ED and FN loss functions In addition, we consider the optimization under a short-selling restriction and compute the weights via numerical optimization, i.e., MATLAB Financial Toolbox, given the absence of a closed-form analytical solution. The results are available in Appendix B (Table B3). Given the main aim to assess the accuracy of distinct covariance matrix estimators, our performance measures do not take into account transaction costs. Initially, we adopt the MCS to select the best-performing models that minimize the standard deviation (SD) of the portfolios obtained by applying the computed weights to the observed returns. The results presented in Table 7 show that the "M-HD DCC-HEAVY" model provides for the lowest out-of-sample SD during the calm periods, whereas only the "4F-HD DCC-HEAVY" enters the MCS when the volatility is at a relatively high level. Considering the entire out-of-sample period, the MCS includes only the "4F-HD DCC-HEAVY" model (Appendix B, Table B2), while the analogous conclusion applies for long-only portfolios (Appendix B, Table B3). Therefore, in contrast to the statistical performance where the "FF-HD DCC-HEAVY" model is superior, the "4F-HD DCC-HEAVY" appears preferable from a variance minimization perspective. In general, the "M-HD DCC-HEAVY" model outperforms the competing market factor-based Realized Beta GARCH of Hansen et al. (2014) in all cases. The same applies for a corresponding comparison between the three-factor "FF-HD DCC-HEAVY" and "Multivariate Realized Beta GARCH" (Archakov et al. (2020)) model (Table 7, B2, B3). In addition, we examine some basic features of the portfolios, including the Average Return (AR), i.e., the average of out-of-sample returns for the corresponding period, Information Ratio (IR), i.e., the ratio AR/SD, portfolio turnover rates (TO), and proportion of short positions (SP).11 Footnote 11: The resulting AR and IR are computed with respect to estimated non-negative weights since short-selling is difficult to implement, thus it is not generally the common practice for most investors. 
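As a concrete reference, the GMVP construction in (36)-(37) and the resulting out-of-sample portfolio standard deviation can be computed as in the following sketch; the \(\sqrt{12}\) annualization of monthly portfolio returns is an assumption of this illustration.

```python
import numpy as np

def gmvp_weights(H_hat):
    """Closed-form GMVP weights (37) for a covariance forecast H_hat."""
    ones = np.ones(H_hat.shape[0])
    w = np.linalg.solve(H_hat, ones)          # H^{-1} 1 without forming the inverse explicitly
    return w / (ones @ w)

def oos_gmvp_sd(H_forecasts, r_oos):
    """Annualized standard deviation of GMVP returns over the out-of-sample months.

    H_forecasts : (T_oos, N, N) one-step-ahead covariance forecasts
    r_oos       : (T_oos, N)    realized monthly returns
    """
    port = np.array([gmvp_weights(H) @ r for H, r in zip(H_forecasts, r_oos)])
    return np.sqrt(12.0) * port.std(ddof=1)
```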
The latter are specified as follows: \[TO_{t}=\sum_{i}^{N}\left|\hat{w}_{i,t}-\hat{w}_{i,t-1}\frac{1+r_{t-1}^{i}}{1+r _{t-1}^{p}}\right|; \tag{38}\] \[SP_{t}=\sum_{i}^{N}\mathbb{1}_{\{\hat{w}_{i,t}<0\}}, \tag{39}\] where \(r_{t}^{p}\) is the total return of the portfolio for the month \(t\), \(\hat{w}_{i,t}\) and \(r_{t}^{i}\) are the weight and return of stock \(i\), respectively, and \(\mathbb{1}_{\{\cdot\}}\) denotes the indicator function.12 Footnote 12: We do not set constraints on the turnover and leverage proportion in the optimization. The results reported in Table B4 again confirm that hierarchical HD DCC-HEAVY models consistently and notably outperform Realized GARCH variants. In particular, the "M-HD DCC-HEAVY" features the highest IR during the turbulent periods and overall. On the other hand, the findings summarized in Table B5 suggest that the propensity of models with respect to short positions is very similar and, in general, moderately increases for HD DCC-HEAVY models during turmoils. The increasing trend of the average monthly turnover \begin{table} \begin{tabular}{l|c c|c c} \hline Model & SD & MCS 2017-2019 & SD & MCS 2020-2022 \\ \hline **4F-HD DCC-HEAVY** & 0.653 & 0.098 & **0.571** & **1.000** \\ **FF-HD DCC-HEAVY** & 0.715 & 0.000 & 0.709 & 0.088 \\ **M-HD DCC-HEAVY** & **0.565** & **1.000** & 0.952 & 0.000 \\ **Realized Beta GARCH** & 0.643 & 0.098 & 0.878 & 0.000 \\ **Multivariate Realized Beta GARCH** & 0.723 & 0.000 & 0.718 & 0.067 \\ \hline \hline \end{tabular} * ‘SD’ columns: the average annualized standard deviation of GMVP returns over the corresponding forecast period; bold values identify the minimum loss over the five models. * ‘MCS 2017-2019’ column: \(p\)-values of the MCS tests over the out-of-sample period including the years 2017-2019; bold values identify the models included in the MCS at the 90\% confidence level (i.e., \(p\)-values larger than 0.10). * ‘MCS 2020-2022’ column: the analogous results for the period 2020-2022. \end{table} Table 7: Model confidence sets at 90% level of hierarchical factor models, with GMVP loss function rates for all models is also visible. Given that the GMVPs aim at minimizing the variance, and thus the SD, rather than maximizing the expected returns or the IR, the most important performance measure is the out-of-sample SD. In this regard, the out-of-sample returns and IR are also beneficial but should be considered of secondary importance. Finally, to assess the economic gains of utilizing distinct HD DCC-HEAVY covariance matrix estimators, following Fleming et al. (2001, 2003), we determine the maximum performance fee a risk-averse investor would be willing to pay to switch from using one model to another. Accordingly, we assume that the investor has quadratic preferences of the form: \[U(r_{t}^{p})=1+r_{t}^{p}-\frac{\gamma}{2(1+\gamma)}(1+r_{t}^{p})^{2}, \tag{40}\] where \(r_{t}^{p}\) is the portfolio return and \(\gamma\) is the investor's relative risk aversion, taking values 1 and 10 (Fleming et al. (2003)). As follows, we determine a fee \(\Delta_{\gamma}\) by equating the average realized utilities from two alternative portfolios: \[\sum_{t=1}^{T}U(r_{t}^{p_{1}})=\sum_{t=1}^{T}U(r_{t}^{p_{2}}-\Delta_{\gamma}), \tag{41}\] where \(r_{t}^{p_{1}}\) and \(r_{t}^{p_{2}}\) are the portfolio returns related to competing HD DCC-HEAVY forecasting strategies. Major observations based on results in Table 8 are as follows. 
First, by utilizing the "4F-HD DCC-HEAVY" covariance forecasts, a risk-averse investor can achieve notable economic gains that become pronounced during the crisis period. Overall, an investor with low (high) risk aversion would be willing to pay on average 27 (38) bps to switch from the "FF-HD DCC-HEAVY" strategy to the "4F-HD DCC-HEAVY" and 15 (35) bps for switching from the "M-HD DCC-HEAVY". These results provide further support that the "4F-HD DCC-HEAVY" might be a preferable hierarchical factor model from the investor point of view. ## 6 Conclusion In this paper we introduce a class of models for high-dimensional covariance matrices by combining the hierarchical approach of Hansen et al. (2014) and dynamic conditional correlation formulation of a HEAVY model (Noureldin et al. (2012)) recently proposed by Bauwens and Xu (2022)). In this regard, we rely on the evidence to adopt the higher-frequency data to model more accurate realized measures of covariances and employ them to forecast the conditional covariance matrix of lower-frequency returns (i.e., Noureldin et al. (2012), Gorgi et al. (2019), Bauwens and Xu (2022)). An illustrative empirical study for the S&P500 constituents over the period from January 1962 until January 2023, i.e., \(N=20\) and \(T=732\), shows that our method always significantly outperforms the benchmark and existing hierarchical factor models in statistical and economic terms. The findings are robustified under distinct market conditions. Avenues for future research are twofold. First, a promising feature of the framework is the ability to readily extract inherently time-varying factor loadings for a given asset or portfolio, thus conforming to the extensive literature that proves the dynamic nature of betas (e.g., Bollerslev et al. (1988), Jagannathan and Wang (1996), etc.) but also potentially improving the commonly adopted rolling regression approach for their estimation. Second, to verify the relevance of adopted factors, and thus adopt the optimal HD DCC-HEAVY model, the asymptotic theory on estimated loadings and corresponding testing procedures should be derived. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline Period & \multicolumn{2}{c}{2017-2019} & \multicolumn{2}{c}{2020-2022} & \multicolumn{2}{c}{2017-2022} \\ \hline Model & \(\Delta_{1}\) & \(\Delta_{10}\) & \(\Delta_{1}\) & \(\Delta_{10}\) & \(\Delta_{1}\) & \(\Delta_{10}\) \\ \hline **FF-HD DCC-HEAVY** & -6.92 & -6.92 & 61.52 & 83.84 & 27.30 & 38.46 \\ **M-HD DCC-HEAVY** & 28.74 & 28.74 & 0.46 & 40.64 & 14.60 & 34.69 \\ \hline \hline \end{tabular} * ‘\(\Delta_{\gamma}\)’ columns: the basis points fee an investor with quadratic utility and relative risk aversion \(\gamma\) would pay to switch from the covariance matrix estimator indicated in column 1 to the “4F-HD DCC-HEAVY” model over the period indicated in row 1. \end{table} Table 8: BPS fees for switching from simpler HD DCC-HEAVY to the “4F-HD DCC-HEAVY” covariance matrix forecasts
2310.14217
On the Sum Secrecy Rate of Multi-User Holographic MIMO Networks
The emerging concept of extremely-large holographic multiple-input multiple-output (HMIMO), beneficial from compactly and densely packed cost-efficient radiating meta-atoms, has been demonstrated for enhanced degrees of freedom even in pure line-of-sight conditions, enabling tremendous multiplexing gain for the next-generation communication systems. Most of the reported works focus on energy and spectrum efficiency, path loss analyses, and channel modeling. The extension to secure communications remains unexplored. In this paper, we theoretically characterize the secrecy capacity of the HMIMO network with multiple legitimate users and one eavesdropper while taking into consideration artificial noise and max-min fairness. We formulate the power allocation (PA) problem and address it by following successive convex approximation and Taylor expansion. We further study the effect of fixed PA coefficients, imperfect channel state information, inter-element spacing, and the number of Eve's antennas on the sum secrecy rate. Simulation results show that significant performance gain with more than 100\% increment in the high signal-to-noise ratio (SNR) regime for the two-user case is obtained by exploiting adaptive/flexible PA compared to the case with fixed PA coefficients.
Arthur S. de Sena, Jiguang He, Ahmed Al Hammadi, Chongwen Huang, Faouzi Bader, Merouane Debbah, Mathias Fink
2023-10-22T07:49:12Z
http://arxiv.org/abs/2310.14217v1
# On the Sum Secrecy Rate of Multi-User Holographic MIMO Networks ###### Abstract The emerging concept of extremely-large holographic multiple-input multiple-output (HMIMO), beneficial from compactly and densely packed cost-efficient radiating meta-atoms, has been demonstrated for enhanced degrees of freedom even in pure line-of-sight conditions, enabling tremendous multiplexing gain for the next-generation communication systems. Most of the reported works focus on energy and spectrum efficiency, path loss analyses, and channel modeling. The extension to secure communications remains unexplored. In this paper, we theoretically characterize the secrecy capacity of the HMIMO network with multiple legitimate users and one eavesdropper while taking into consideration artificial noise and max-min fairness. We formulate the power allocation (PA) problem and address it by following successive convex approximation and Taylor expansion. We further study the effect of fixed PA coefficients, imperfect channel state information, inter-element spacing, and the number of Eve's antennas on the sum secrecy rate. Simulation results show that significant performance gain with more than 100% increment in the high signal-to-noise ratio (SNR) regime for the two-user case is obtained by exploiting adaptive/flexible PA compared to the case with fixed PA coefficients. HMIMO, secrecy capacity, max-min fairness, power allocation, artificial noise. ## I Introduction Secure transmissions have always been desired in wireless communications. However, due to the broadcast nature of the wireless propagation, challenges arise in secured transmissions. In the literature, researchers focused on physical layer security from the information-theoretic perspective and introduced artificial noise (AN) to guarantee that all the legitimate users have a higher rate than the eavesdroppers, complementary to traditional complex cryptographic approaches [1, 2]. Under the framework of multiple-input multiple-output (MIMO), the design of AN is usually jointly considered with precoder design and resource allocation, such as transmit power allocation (PA). By extending from the reconfigurable intelligent surface (RIS) free MIMO network to the RIS assisted one, RIS was verified to bring more flexibility and obvious performance enhancement [3, 4]. However, in RIS assisted MIMO networks, due to its passive property, channel state information acquisition becomes, if not infeasible, inevitably difficult, which in turn harms secrecy performance. Recently, the active counterpart of RIS, termed as holographic MIMO (HMIMO), serves as a transceiver with a low-cost transformative wireless planar structure comprising of densely packed sub-wavelength metallic or dielectric scattering particles, which is capable of shaping electromagnetic waves according to specific requirements. It is a promising candidate technology for 6G, offering a cost-effective and energy-efficient way of realizing the extremely large-scale MIMO (XL-MIMO) [5]. With the introduction of sub-wavelength inter-element spacing, many good properties can be found, e.g., large degrees of freedom (DoFs) even under the condition of line of sight (LoS) connectivity. HMIMO is in favor of near-field communications, standing out in millimeter wave (mmWave) and Terahertz (THz) communications with vast available bandwidths but short communication range. Regarding HMIMO, many reported works focused on channel modeling, beamforming design, and resource allocation [6, 7, 8, 9, 10]. 
However, secure communication in HMIMO network has not yet been investigated. In this paper, we analyze the secrecy performance of the multi-user HMIMO networks with the introduction of AN. In the analysis, we simplify the process by decoupling the following three tasks: (i) the design of base station (BS) transmit beamforming, (ii) that of receive filter, and (iii) PA between the information symbol and AN, and accordingly propose a multi-stage approach for secrecy analysis under the assumption of imperfect channel state information (CSI). For the PA, we follow the max-min fairness (MMF), which has already been applied in networking level power control in massive MIMO [11], and formulate the optimization problem, which aims at finding the optimal PA between the desired information signals and AN as to reach the maximal sum secrecy rate. We examine the effect of various system parameters, e.g., imperfectness of the CSI and inter-element spacing, on the sum secrecy rate of the studied system. The proposed PA approach is verified to outperform the case with fixed PA coefficients. _Notations_: Bold lowercase letters denote vectors (e.g., \(\mathbf{a}\)), while bold capital letters represent matrices (e.g., \(\mathbf{A}\)). The operators \((\cdot)^{\mathsf{T}}\) and \((\cdot)^{\mathsf{H}}\) denote transpose and Hermitian transpose, respectively. \(\mathrm{diag}(\mathbf{a})\) denotes a square diagonal matrix with the entries of \(\mathbf{a}\) on its main diagonal, \(\mathbf{0}\) denotes the all-zero vector or matrix, \(\mathbf{I}_{M}\) (\(M\geq 2\)) denotes the \(M\times M\) identity matrix, and \(j=\sqrt{-1}\). \(\|\cdot\|_{2}\) denotes the Euclidean norm of a vector, and \(|\cdot|\) returns the absolute value of a complex number. \([\mathbf{a}]_{m}\) and \([\mathbf{A}]_{:,m}\) denote the \(m\)-th element of \(\mathbf{a}\) and \(m\)-th column of \(\mathbf{A}\). ## II System Model We consider a downlink transmission scenario where one BS (a.k.a. Alice) communicates with multiple legitimate users (a.k.a. Bobs) concurrently in the presence of one eavesdropper (a.k.a. Eve). Specifically, we assume the existence of \(B\) Bobs in the system, indexed by the set \(\mathcal{B}=\{1,\cdots,B\}\), and that all communication nodes are equipped with a holographic uniform planar array (UPA), as illustrated in Fig. 1. The antenna arrays of Alice and Eve comprise \(N_{\mathrm{A}}=N_{\mathrm{A},x}\times N_{\mathrm{A},y}\) and \(N_{\mathrm{E}}=N_{\mathrm{E},x}\times N_{\mathrm{E},y}\) antenna elements, respectively, and without loss of generality all the \(B\) Bobs are equipped with an equal number of antennas \(N_{\mathrm{B}}=N_{\mathrm{B},x}\times N_{\mathrm{B},y}\), i.e., \(N_{b}=N_{\mathrm{B}},\forall b\in\mathcal{B}\), in which \(\{N_{\mathrm{A},x},N_{\mathrm{B},x},N_{\mathrm{E},x}\}\) and \(\{N_{\mathrm{A},y},N_{\mathrm{B},y},N_{\mathrm{E},y}\}\) correspond to the number of elements in the \(x\)-axis and \(y\)-axis directions, respectively. Moreover, the inter-element spacing in all antenna arrays, denoted by \(\delta\), is set to less than half wavelength \(\lambda\), i.e., \(\delta<\lambda/2\). 
As a result, the lengths of the arrays in the \(x\)-axis and \(y\)-axis directions for the \(i\)-th communication node are given by \(L_{i,x}=N_{i,x}\delta\) and \(L_{i,y}=N_{i,y}\delta\), where the coordinates in \(\mathbb{R}^{3}\) of all antenna elements are organized into the matrix \(\mathbf{C}_{i}=[\mathbf{c}_{i,1},\cdots,\mathbf{c}_{i,N_{i}}]\in\mathbb{R}^{3\times N_{i}}\), and the vector \(\mathbf{c}_{i,n}\in\mathbb{R}^{3}\) corresponds to the three-dimensional (3D) position of the \(n\)-th antenna element, for \(n=1,\cdots,N_{i}\), with \(i\in\{\mathrm{A},\mathcal{B},\mathrm{E}\}\). ### _Channel Model_ We employ the electromagnetic-compliant channel model for HMIMO communications from [6], which accurately approximates the HMIMO electromagnetic multi-path propagation through an asymptotic Fourier transform-based Karhunen-Loeve channel expansion. More specifically, the wireless channel between Alice and the \(u\)-th user, for \(u\in\{\mathcal{B},\mathrm{E}\}\), i.e., valid for all the Bobs and Eve, can be given by \[\mathbf{H}_{u}=\mathbf{\Phi}_{u}\bar{\mathbf{H}}_{u}\mathbf{\Phi}_{\mathrm{A}}^{\mathsf{H}}\in\mathbb{C}^{N_{u}\times N_{\mathrm{A}}}, \tag{1}\] where \(\mathbf{\Phi}_{u}\in\mathbb{C}^{N_{u}\times n_{u}}\) and \(\mathbf{\Phi}_{\mathrm{A}}\in\mathbb{C}^{N_{\mathrm{A}}\times n_{\mathrm{A}}}\) are semi-unitary matrices, i.e., \(\mathbf{\Phi}_{u}^{\mathsf{H}}\mathbf{\Phi}_{u}=\mathbf{I}_{n_{u}}\) and \(\mathbf{\Phi}_{\mathrm{A}}^{\mathsf{H}}\mathbf{\Phi}_{\mathrm{A}}=\mathbf{I}_{n_{\mathrm{A}}}\), comprising the array response vectors \(\boldsymbol{\theta}(l_{u,x},l_{u,y},\mathbf{C}_{u})\in\mathbb{C}^{N_{u}}\) and \(\boldsymbol{\theta}(l_{\mathrm{A},x},l_{\mathrm{A},y},\mathbf{C}_{\mathrm{A}})\in\mathbb{C}^{N_{\mathrm{A}}}\) of the \(u\)-th user and Alice, respectively, in which the \(n\)-th entry of \(\boldsymbol{\theta}(l_{i,x},l_{i,y},\mathbf{C}_{i})\), for \(n=1,\cdots,N_{i}\) and \(i\in\{\mathrm{A},\mathcal{B},\mathrm{E}\}\), can be computed by \[[\boldsymbol{\theta}(l_{i,x},l_{i,y},\mathbf{C}_{i})]_{n}=\frac{1}{\sqrt{N_{i}}}e^{j\left(\left[\frac{2\pi}{L_{i,x}}l_{i,x},\frac{2\pi}{L_{i,y}}l_{i,y},\gamma(l_{i,x},l_{i,y})\right]\mathbf{c}_{i,n}\right)},\] where \(\gamma(l_{i,x},l_{i,y})=\sqrt{\kappa^{2}-\left(\frac{2\pi}{L_{i,x}}l_{i,x}\right)^{2}-\left(\frac{2\pi}{L_{i,y}}l_{i,y}\right)^{2}}\) with \(\kappa=\frac{2\pi}{\lambda}\) denoting the wavenumber of the system, and \(l_{i,x}\) and \(l_{i,y}\) are the sampling points in the wavenumber domain, which lead to non-zero angular responses only when the points are within the lattice ellipse [6] \(\mathcal{E}_{i}=\left\{\left(l_{i,x},l_{i,y}\right)\in\mathbb{Z}^{2}:\left(\frac{\lambda}{L_{i,x}}l_{i,x}\right)^{2}+\left(\frac{\lambda}{L_{i,y}}l_{i,y}\right)^{2}\leq 1\right\}\). In particular, with a uniform sampling, these points can be obtained through \(l_{i,x}\in\mathcal{E}_{i,x}=\left\{\left\lceil\frac{-L_{i,x}+(q_{i,x}-1)\lambda}{\lambda}\right\rceil\right\}\), for \(q_{i,x}=1,\cdots,\left\lceil\frac{L_{i,x}}{\lambda}\right\rceil\), and \(l_{i,y}\in\mathcal{E}_{i,y}=\left\{\left\lceil\frac{-L_{i,y}+(q_{i,y}-1)\lambda}{\lambda}\right\rceil\right\}\), for \(q_{i,y}=1,\cdots,\left\lceil\frac{L_{i,y}}{\lambda}\right\rceil\), which results in \(n_{i}=4\left\lceil\frac{L_{i,x}L_{i,y}}{\lambda^{2}}\right\rceil\), for \(i\in\{\mathrm{A},\mathcal{B},\mathrm{E}\}\) [7].
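For illustration, the wavenumber-domain sampling and the resulting angular response matrix \(\mathbf{\Phi}_{i}\) can be generated as in the sketch below; the UPA is assumed to lie in the \(xy\)-plane, and the integer grid is filtered by the lattice-ellipse condition \(\mathcal{E}_{i}\), which is one possible realization of the sampling described above (the exact grid and normalization used in [6], [7] may differ).

```python
import numpy as np

def upa_coordinates(Nx, Ny, delta):
    """3-D coordinates of an Nx-by-Ny holographic UPA with spacing delta, placed in the xy-plane."""
    x, y = np.meshgrid(np.arange(Nx) * delta, np.arange(Ny) * delta, indexing="ij")
    return np.stack([x.ravel(), y.ravel(), np.zeros(Nx * Ny)], axis=0)   # shape (3, N)

def wavenumber_basis(Nx, Ny, delta, lam):
    """Columns are response vectors theta(l_x, l_y, C) for (l_x, l_y) inside the lattice ellipse."""
    Lx, Ly = Nx * delta, Ny * delta
    kappa = 2 * np.pi / lam
    C = upa_coordinates(Nx, Ny, delta)
    cols = []
    for lx in range(int(np.ceil(-Lx / lam)), int(np.floor(Lx / lam)) + 1):
        for ly in range(int(np.ceil(-Ly / lam)), int(np.floor(Ly / lam)) + 1):
            if (lam * lx / Lx) ** 2 + (lam * ly / Ly) ** 2 > 1:
                continue                                   # outside the lattice ellipse: no propagating mode
            kx, ky = 2 * np.pi * lx / Lx, 2 * np.pi * ly / Ly
            kz = np.sqrt(max(kappa ** 2 - kx ** 2 - ky ** 2, 0.0))
            phase = kx * C[0] + ky * C[1] + kz * C[2]
            cols.append(np.exp(1j * phase) / np.sqrt(Nx * Ny))
    return np.stack(cols, axis=1)                          # (N_i, n_i) matrix Phi_i
```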
Moreover, the matrix \(\bar{\mathbf{H}}_{u}\) collects the small-scale fading coefficients in the angular domain, which can be structured as \(\bar{\mathbf{H}}_{u}=\mathbf{\Sigma}_{u}\odot\mathbf{G}_{u}\in\mathbb{C}^{n_{u}\times n_{\mathrm{A}}}\), where \(\mathbf{G}_{u}\in\mathbb{C}^{n_{u}\times n_{\mathrm{A}}}\) is a random matrix with entries following the complex Gaussian distribution with zero mean and unity variance, and \(\mathbf{\Sigma}_{u}\in\mathbb{R}^{n_{u}\times n_{\mathrm{A}}}\) is a matrix that collects the \(n_{u}\times n_{\mathrm{A}}\) scaled standard deviations \(\{\sqrt{N_{\mathrm{A}}N_{u}}\sigma(l_{u,x},l_{u,y},l_{\mathrm{A},x},l_{\mathrm{A},y})\}\) of the channel, where the variances \(\sigma^{2}(\cdot)\) describe the power transferred from Alice to the \(u\)-th receiver at the corresponding wavenumber sampling points, with \(u\in\{\mathcal{B},\mathrm{E}\}\). Under the assumption of isotropic scattering, the variances observed at Alice and at the receivers can be decoupled, i.e., \(\sigma^{2}(l_{u,x},l_{u,y},l_{\mathrm{A},x},l_{\mathrm{A},y})\approx\sigma^{2}(l_{u,x},l_{u,y})\sigma^{2}(l_{\mathrm{A},x},l_{\mathrm{A},y})\). Thus, \(\sigma^{2}(l_{i,x},l_{i,y})\), for \(i\in\{\mathrm{A},\mathcal{B},\mathrm{E}\}\), can be calculated as follows [8] \[\sigma^{2}(l_{i,x},l_{i,y})=\frac{1}{4\pi}\int_{\frac{\lambda}{L_{i,x}}l_{i,x}}^{\frac{\lambda}{L_{i,x}}(l_{i,x}+1)}\int_{\frac{\lambda}{L_{i,y}}l_{i,y}}^{\frac{\lambda}{L_{i,y}}(l_{i,y}+1)}\frac{\mathds{1}_{\mathcal{D}}(x,y)}{\sqrt{1-x^{2}-y^{2}}}\,dx\,dy, \tag{2}\] for \(l_{i,x}\in\mathcal{E}_{i,x}\) and \(l_{i,y}\in\mathcal{E}_{i,y}\), where \(\mathcal{D}=\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}\leq 1\}\) is a disk of radius \(1\) centered at the origin. Recall that \(\mathbf{\Phi}_{u}\) and \(\mathbf{\Phi}_{\text{A}}\) are deterministic, depending only on the structure of the antenna arrays. Also, the entries of \(\mathbf{\Sigma}_{u}\) change slowly compared to the coherence interval of the fast-fading channel coefficients. Given these facts, we assume that \(\mathbf{\Phi}_{u}\), \(\mathbf{\Phi}_{\text{A}}\), and \(\mathbf{\Sigma}_{u}\) are perfectly known in the system. However, we introduce imperfect knowledge of \(\mathbf{G}_{u}\), which is modeled by a first-order Gauss-Markov process \[\hat{\mathbf{G}}_{u}=\sqrt{1-\xi^{2}}\mathbf{G}_{u}+\xi\mathbf{E}_{u}, \tag{4}\] where \(\mathbf{E}_{u}\) is a complex standard Gaussian distributed error matrix, and \(\xi^{2}\) represents the variance of the channel estimation error. The effect of imperfect \(\mathbf{G}_{u}\) on secrecy performance will be evaluated comprehensively in Section IV. ### _Signal Model_ Under the above channel model, Alice transmits an information symbol \(s_{b}\) to the \(b\)-th Bob, \(\forall b\in\mathcal{B}\). We assume that Alice does not have any knowledge of the channel or location information of Eve. As a result, it becomes challenging to avoid information leakage to Eve through beamforming only. To mitigate this security threat, Alice superimposes a random AN \(w_{b}\in\mathbb{C}\) onto the information symbol of each Bob, satisfying \(\mathrm{E}\{|w_{b}|^{2}\}=1\).
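Returning to the channel statistics above, a minimal sketch of drawing one realization of (1) together with its imperfect estimate (4) is given below; \(\mathbf{\Phi}_{u}\), \(\mathbf{\Phi}_{\mathrm{A}}\), and \(\mathbf{\Sigma}_{u}\) are assumed to have been precomputed, e.g., with the basis construction sketched earlier.

```python
import numpy as np

def synthesize_channel(Phi_u, Phi_A, Sigma_u, rng, xi=0.0):
    """Draw H_u = Phi_u (Sigma_u * G_u) Phi_A^H and its estimate under the Gauss-Markov model (4)."""
    n_u, n_A = Sigma_u.shape
    G = (rng.standard_normal((n_u, n_A)) + 1j * rng.standard_normal((n_u, n_A))) / np.sqrt(2)
    E = (rng.standard_normal((n_u, n_A)) + 1j * rng.standard_normal((n_u, n_A))) / np.sqrt(2)
    G_hat = np.sqrt(1 - xi ** 2) * G + xi * E               # imperfect angular coefficients, eq. (4)
    H_true = Phi_u @ (Sigma_u * G) @ Phi_A.conj().T         # eq. (1)
    H_est = Phi_u @ (Sigma_u * G_hat) @ Phi_A.conj().T      # channel implied by the estimated coefficients
    return H_true, H_est

# Usage (illustrative): rng = np.random.default_rng(0); H, H_hat = synthesize_channel(Phi_u, Phi_A, Sigma_u, rng, xi=0.1)
```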
More specifically, Alice transmits the following beamformed data stream \[\mathbf{s}=\sum_{b=1}^{B}\mathbf{f}_{b}(\sqrt{\alpha_{b}}s_{b}+\sqrt{\beta_{b} }w_{b})\in\mathbb{C}^{N_{\text{A}}}, \tag{5}\] where \(\mathbf{f}_{b}\in\mathbb{C}^{N_{\text{A}}}\) is the beamforming vector for the \(b\)-th Bob, such that \(\|\mathbf{f}_{b}\|_{2}^{2}=1\), \(\alpha_{b}\) and \(\beta_{b}\) are the PA coefficients for the information symbol and AN, respectively, with a total transmit power constraint \(P_{T}=\sum_{b=1}^{B}\alpha_{b}+\beta_{b}\). Furthermore, the information symbols \(s_{b}\)'s are assumed to have zero mean and unity variance, i.e., \(\mathrm{E}\{|s_{b}|^{2}\}=1\). With these assumptions, the signals received by the \(b\)-th user and Eve can be written, respectively, as \[\mathbf{y}_{b} =\mathbf{H}_{b}\sqrt{\zeta_{b}}\sum_{k\in\mathcal{B}}\mathbf{f}_{ k}(\sqrt{\alpha_{k}}s_{k}+\sqrt{\beta_{k}}w_{k})+\mathbf{z}_{b}\in\mathbb{C}^{N_{ \text{B}}}, \tag{6}\] \[\mathbf{y}_{E} =\mathbf{H}_{\text{E}}\sqrt{\zeta_{\text{E}}}\sum_{k\in\mathcal{B }}\mathbf{f}_{k}(\sqrt{\alpha_{k}}s_{k}+\sqrt{\beta_{k}}w_{k})+\mathbf{z}_{ \text{E}}\in\mathbb{C}^{N_{\text{E}}}, \tag{7}\] where \(\zeta_{b}=d_{b}^{-\eta}\Lambda\) and \(\zeta_{\text{E}}=d_{\text{E}}^{-\eta}\Lambda\) model the large-scale fading coefficients, in which \(d_{b}\) and \(d_{\text{E}}\) denote the distances from Alice to the \(b\)-th Bob and Eve, respectively, \(\eta\) represents the path-loss exponent, and \(\Lambda\) is the array gain parameter. Moreover, \(\mathbf{z}_{b}\) and \(\mathbf{z}_{\text{E}}\) are the corresponding additive noise vectors, whose entries follow the complex Gaussian distribution with zero mean and variance \(\sigma_{z}^{2}\). ### _Transmit Beamformer Design_ In this subsection, we focus on the design of \(\mathbf{f}_{b}\in\mathbb{C}^{N_{\text{A}}}\), \(\forall b\in\mathcal{B}\). Specifically, we wish to avoid information leakage to non-intended Bobs. Before introducing the beamforming design, we expand the HMIMO channel model in Eq. (1) as follows \[\mathbf{H}_{b} =\mathbf{\Phi}_{b}\left(\mathbf{\Sigma}_{b}\odot\mathbf{G}_{b} \right)\mathbf{\Phi}_{\text{A}}^{\text{H}}=\mathbf{\Phi}_{b}\left(\left[ \mathbf{\sigma}_{b}\mathbf{\sigma}_{\text{A}}^{\text{T}}\right]\odot\mathbf{G }_{b}\right)\mathbf{\Phi}_{\text{A}}^{\text{H}}\] \[=\mathbf{\Phi}_{b}\mathrm{diag}(\mathbf{\sigma}_{b})\mathbf{G}_{ b}\mathrm{diag}(\mathbf{\sigma}_{\text{A}})\mathbf{\Phi}_{\text{A}}^{ \text{H}}=\mathbf{\Phi}_{b}\mathbf{\Delta}_{b}\mathbf{G}_{b}\mathbf{\Delta}_ {\text{A}}\mathbf{\Phi}_{\text{A}}^{\text{H}}, \tag{8}\] where \(\mathbf{\Delta}_{b}\triangleq\mathrm{diag}(\mathbf{\sigma}_{b})\) and \(\mathbf{\Delta}_{\text{A}}\triangleq\mathrm{diag}(\mathbf{\sigma}_{\text{A}})\). Given the expansion in Eq. 
(8) and the aforementioned property \(\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{\Phi}_{\text{A}}=\mathbf{I}_{n_{ \text{A}}}\), we can design the desired beamforming vector with the following structure \(\mathbf{f}_{b}=\mathbf{\Phi}_{\text{A}}\mathbf{p}_{b}\), where \(\mathbf{p}_{b}\in\mathbb{C}^{n_{\text{A}}}\) is an inner beamformer computed based on the null space spanned by the reduced-dimension effective matrices of unintended users given by \(\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{H}_{b^{\prime}}^{\text{H}}=\mathbf{ \Delta}_{\text{A}}^{\text{H}}\mathbf{G}_{b^{\prime}}^{\text{H}}\mathbf{\Delta} _{b^{\prime}}^{\text{H}}\mathbf{\Phi}_{b^{\prime}}^{\text{H}}\in\mathbb{C}^{n_{ \text{A}}\times N_{\text{B}}}\), with rank denoted by \(r_{b^{\prime}}\), \(\forall b^{\prime}\neq b\). More specifically, we collect all the reduced-dimension effective matrices of unintended users and stack them in a column-wise fashion as \[\mathbf{\Xi}_{b}=\left[\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{H}_{1}^{ \text{H}},\cdots,\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{H}_{b-1}^{\text{ H}},\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{H}_{b+1}^{\text{H}},\cdots,\mathbf{\Phi}_{ \text{A}}^{\text{H}}\mathbf{H}_{b}^{\text{H}}\right]\!, \tag{9}\] for \(b\in\mathcal{B}\), with a rank \(\bar{r}_{b}=\sum\limits_{b^{\prime}\in\mathcal{B},b^{\prime}\neq b}r_{b^{ \prime}}\). Then, given that \(\bar{r}_{b}<BN_{\text{B}},\forall b\in\mathcal{B}\) due to the correlated entries of \(\mathbf{\Phi}_{\text{A}}^{\text{H}}\mathbf{H}_{b}^{\text{H}}\), the beamformer \(\mathbf{p}_{b}\in\mathbb{C}^{n_{\text{A}}}\) can be obtained from the orthonormal basis of the nontrivial null space of \(\mathbf{\Xi}_{b}\), which we can choose from the left singular vectors of \(\mathbf{\Xi}_{b}\) that are associated with zero singular values. To this end, we perform singular value decomposition (SVD) and write \[\mathbf{\Xi}_{b}=\begin{bmatrix}\mathbf{U}_{b}^{(1)}&\mathbf{U}_{b}^{(0)} \end{bmatrix}\begin{bmatrix}\mathbf{\Omega}_{b}^{(1)}&\mathbf{0}\\ \mathbf{0}&\mathbf{\Omega}_{b}^{(0)}\end{bmatrix}\mathbf{V}_{b}^{\text{H}}, \tag{10}\] where \(\mathbf{\Omega}_{b}^{(1)}\) and \(\mathbf{\Omega}_{b}^{(0)}\) are diagonal matrices that comprise the nonzero and zero singular values of \(\mathbf{\Xi}_{b}\), respectively, \(\mathbf{U}_{b}^{(1)}\) and \(\mathbf{U}_{b}^{(0)}\) are semi-unitary matrices that comprise the corresponding left singular vectors, and \(\mathbf{V}_{b}\) comprises the right singular vectors of \(\mathbf{\Xi}_{b}\). More specifically, given that the matrix \(\mathbf{U}_{b}^{(0)}\in\mathbb{C}^{n_{\text{A}}\times(n_{\text{A}}-\bar{r}_{b})}\) comprises \(n_{\text{A}}-\bar{r}_{b}\) orthonormal basis vectors of the null space of \(\mathbf{\Xi}_{b}\), the desired inner beamformer can be \[\mathbf{p}_{b}=\begin{bmatrix}\mathbf{U}_{b}^{(0)}\end{bmatrix}_{:,1}\in \mathbb{C}^{n_{\text{A}}}, \tag{11}\] which satisfies \(\|\mathbf{p}_{b}\|_{2}^{2}=1\) and \(\mathbf{H}_{b^{\prime}}\mathbf{\Phi}_{\text{A}}\mathbf{p}_{b}=\mathbf{0}, \forall b\neq b^{\prime}\in\mathcal{B}\), as long as the rank \(\bar{r}_{b}<BN_{\text{B}}\) and the constraints \(n_{\text{A}}>\bar{r}_{b}\) and \(n_{\text{A}}-\bar{r}_{b}\geq 1\) are met. ### _Receive Filter Design_ With the beamformer design presented in the previous subsection, all inter-user interference among the Bobs can be eliminated. 
This fact allows the \(b\)-th Bob to exploit its effective channel \(\mathbf{H}_{b}\mathbf{f}_{b}=\mathbf{\Phi}_{b}\mathbf{\Delta}_{b}\mathbf{G}_{b}\mathbf{\Delta}_{\text{A}}\mathbf{p}_{b}\in\mathbb{C}^{N_{\text{B}}}\) for computing its reception combining vector, as follows \[\mathbf{q}_{b}=\frac{\mathbf{\Phi}_{b}\mathbf{\Delta}_{b}\mathbf{G}_{b}\mathbf{\Delta}_{\text{A}}\mathbf{p}_{b}}{\left\|\mathbf{\Phi}_{b}\mathbf{\Delta}_{b}\mathbf{G}_{b}\mathbf{\Delta}_{\text{A}}\mathbf{p}_{b}\right\|_{2}}\in\mathbb{C}^{N_{\text{B}}}. \tag{12}\] Then, after filtering the received signal in Eq. (6) through the combining vector in Eq. (12), the \(b\)-th legitimate user will have the post-processed signal as \[y_{b}=\underbrace{\mathbf{q}_{b}^{\mathsf{H}}\mathbf{H}_{b}\mathbf{f}_{b}\sqrt{\zeta_{b}\alpha_{b}}s_{b}}_{\text{Signal of interest}}+\underbrace{\mathbf{q}_{b}^{\mathsf{H}}\mathbf{H}_{b}\mathbf{f}_{b}\sqrt{\zeta_{b}\beta_{b}}w_{b}}_{\text{Artificial noise}}+\underbrace{\mathbf{q}_{b}^{\mathsf{H}}\mathbf{z}_{b}}_{\text{Additive noise}}. \tag{13}\] On the other hand, we assume that Eve infiltrates into the system and gets access to the effective channels \(\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b},\forall b\in\mathcal{B}\), of the legitimate users. Note, however, that because \(\mathbf{f}_{b},\forall b\in\mathcal{B}\), is computed based on the channels of legitimate users only, the signals intended for other Bobs, i.e., \(\forall b^{*}\in\mathcal{B},b^{*}\neq b\), will cause interference to Eve when Eve eavesdrops on the \(b\)-th Bob. In addition, Eve is not aware that Alice is transmitting AN. Under these assumptions, Eve computes its reception vector \(\mathbf{q}_{\mathsf{E}}\) following the same approach as in Eq. (12) but based on Eve's effective channel \(\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b}\in\mathbb{C}^{N_{\text{E}}}\) associated with the target user \(b\). More specifically, Eve's receive combining vector is obtained as \[\mathbf{q}_{\mathsf{E}}=\frac{\mathbf{\Phi}_{\mathsf{E}}\mathbf{\Delta}_{\mathsf{E}}\mathbf{G}_{\mathsf{E}}\mathbf{\Delta}_{\mathsf{A}}\mathbf{p}_{b}}{\left\|\mathbf{\Phi}_{\mathsf{E}}\mathbf{\Delta}_{\mathsf{E}}\mathbf{G}_{\mathsf{E}}\mathbf{\Delta}_{\mathsf{A}}\mathbf{p}_{b}\right\|_{2}}\in\mathbb{C}^{N_{\text{E}}}. \tag{14}\] Then, after filtering the eavesdropped signal of user \(b\) through \(\mathbf{q}_{\mathsf{E}}\), Eve has the post-processed signal as \[y_{E}=\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\sqrt{\zeta_{\mathsf{E}}}\bigg{(}\underbrace{\mathbf{f}_{b}\sqrt{\alpha_{b}}s_{b}}_{\text{Signal of interest}}+\underbrace{\mathbf{f}_{b}\sqrt{\beta_{b}}w_{b}}_{\text{Artificial noise}}+\underbrace{\sum_{b^{*}\in\mathcal{B},b^{*}\neq b}\mathbf{f}_{b^{*}}(\sqrt{\alpha_{b^{*}}}s_{b^{*}}+\sqrt{\beta_{b^{*}}}w_{b^{*}})}_{\text{Inter-user interference}}\bigg{)}+\underbrace{\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{z}_{E}}_{\text{Additive noise}}.\] The corresponding signal-to-interference-plus-noise ratios (SINRs) as well as the secrecy capacity experienced in the system are investigated in the sequel. ## III Secrecy Analysis and Power Allocation ### _SINR Expressions_ Alice informs all legitimate users of the exploitation of AN. Therefore, we assume that \(w_{b}\) can be successfully subtracted from the signal in Eq. (13) with the aid of the successive interference cancellation (SIC) technique. As a result, the SINR observed by the \(b\)-th Bob when recovering its information symbol, \(\forall b\in\mathcal{B}\), can be given by \[\gamma_{b}=\frac{|\mathbf{q}_{b}^{\mathsf{H}}\mathbf{H}_{b}\mathbf{f}_{b}\sqrt{\zeta_{b}\alpha_{b}}|^{2}}{|\mathbf{q}_{b}^{\mathsf{H}}\mathbf{z}_{b}|^{2}}=\frac{|\mathbf{q}_{b}^{\mathsf{H}}\mathbf{H}_{b}\mathbf{f}_{b}|^{2}\zeta_{b}\alpha_{b}}{\sigma_{z}^{2}}. \tag{15}\] In contrast to the Bobs, Eve cannot decode the AN \(w_{b}\) and, thus, it will be able to eavesdrop only on a noisy version of the transmitted information symbol, which is also corrupted by inter-user interference. To be specific, when detecting the symbol of the target user \(b\in\mathcal{B}\), Eve observes the following SINR \[\gamma_{\mathrm{E}}^{b}=\frac{|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b}|^{2}\zeta_{\mathsf{E}}\alpha_{b}}{|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b}|^{2}\zeta_{\mathsf{E}}\beta_{b}+\sum\limits_{b^{*}\in\mathcal{B},b^{*}\neq b}|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b^{*}}|^{2}\zeta_{\mathsf{E}}\alpha_{b^{*}}+\sum\limits_{b^{*}\in\mathcal{B},b^{*}\neq b}|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b^{*}}|^{2}\zeta_{\mathsf{E}}\beta_{b^{*}}+\sigma_{z}^{2}}, \tag{16}\] where the numerator \(|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b}|^{2}\zeta_{\mathsf{E}}\alpha_{b}\) represents the received power of the signal of interest. The denominator, on the other hand, represents the total interference and noise power observed by Eve. It consists of four components: (i) the power of the AN intended for Bob \(b\), \(|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b}|^{2}\zeta_{\mathsf{E}}\beta_{b}\), (ii) the sum powers of the signals intended for all the other legitimate users, \(\sum_{b^{*}\in\mathcal{B},b^{*}\neq b}|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b^{*}}|^{2}\zeta_{\mathsf{E}}\alpha_{b^{*}}\), (iii) the sum powers of AN intended for all the other legitimate users, \(\sum_{b^{*}\in\mathcal{B},b^{*}\neq b}|\mathbf{q}_{\mathsf{E}}^{\mathsf{H}}\mathbf{H}_{\mathsf{E}}\mathbf{f}_{b^{*}}|^{2}\zeta_{\mathsf{E}}\beta_{b^{*}}\), and (iv) the noise power \(\sigma_{z}^{2}\). ### _Secrecy Capacity_ With the above derivations of the SINRs, the rates achieved by the \(b\)-th Bob and Eve are given by \(R_{b}=\log_{2}\left(1+\gamma_{b}\right)\) and \(R_{\mathsf{E}}^{b}=\log_{2}\left(1+\gamma_{\mathrm{E}}^{b}\right)\), respectively. As a result, the secrecy capacity in bits per channel use (bpcu) observed for the legitimate user \(b\in\mathcal{B}\) can be computed by \[S_{b}=\Big{[}R_{b}-R_{\mathsf{E}}^{b}\Big{]}^{+}, \tag{17}\] where \([a]^{+}=\max\{a,0\}\). ### _PA Formulation and Solution_ Following the MMF criterion, we aim to maximize the minimum of the secrecy rates of the Bobs. The associated optimization problem can be formulated as follows: \[\mathcal{P}_{1}: \max_{\alpha_{b},\beta_{b}}\;\min_{\forall b\in\mathcal{B}}\{S_{b}\}\] (18a) s.t. \[\sum_{b=1}^{B}(\alpha_{b}+\beta_{b})=P_{T}, \tag{18b}\] \[\alpha_{b}\geq 0,\beta_{b}\geq 0, \tag{18c}\] under the constraint of sum transmit power in (18b). We conduct PA based on the instantaneous CSI, either perfect or imperfect. It is noted that the MMF form of the objective function makes the problem \(\mathcal{P}_{1}\) intractable. To address this issue, we reformulate the optimization problem \(\mathcal{P}_{1}\) as \[\mathcal{P}_{2}:\;\max_{\alpha_{b},\beta_{b}}\;\tau\] (19a) s.t. (18b), (18c), \[R_{b}-R_{E}^{b}\geq\tau, \tag{19b}\] by introducing the auxiliary variable \(\tau\). However, the non-convexity of constraint (19b) remains an obstacle for solving problem \(\mathcal{P}_{2}\). 
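Before turning to that reformulation, the following Python sketch illustrates how the null-space beamformers of Eq. (11), the combiners of Eqs. (12) and (14), and the SINR and secrecy-rate expressions in Eqs. (15)-(17) fit together. The reduced-dimension effective channels are drawn i.i.d. here, and all dimensions and parameter values are illustrative placeholders rather than the simulation setup of Section IV.

```python
import numpy as np

rng = np.random.default_rng(0)
B, nA, NB, NE = 2, 16, 4, 4                      # illustrative dimensions
zeta_b, zeta_E, sigma2 = 1e-3, 1e-3, 1e-2        # large-scale fading and noise power
alpha = np.array([0.6, 0.6])                     # PA coefficients for data symbols
beta = np.array([0.4, 0.4])                      # PA coefficients for AN

def crandn(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Reduced-dimension effective channels Delta_u G_u Delta_A (drawn i.i.d. for illustration)
Heff = [crandn(NB, nA) for _ in range(B)]        # legitimate users
HeffE = crandn(NE, nA)                           # Eve

# Inner beamformers p_b from the null space of the stacked unintended-user channels
p = []
for b in range(B):
    Xi = np.hstack([Heff[k].conj().T for k in range(B) if k != b])   # nA x (B-1)NB
    U, s, _ = np.linalg.svd(Xi)
    p.append(U[:, np.count_nonzero(s > 1e-10)])                      # one null-space basis vector

# Combiners and per-user SINRs / secrecy rates
for b in range(B):
    hb = Heff[b] @ p[b]
    qb = hb / np.linalg.norm(hb)                                     # Eq. (12)
    gamma_b = abs(qb.conj() @ hb) ** 2 * zeta_b * alpha[b] / sigma2  # Eq. (15), AN removed by SIC
    hE = HeffE @ p[b]
    qE = hE / np.linalg.norm(hE)                                     # Eq. (14)
    sig = abs(qE.conj() @ hE) ** 2 * zeta_E
    interf = sig * beta[b] + sum(abs(qE.conj() @ (HeffE @ p[k])) ** 2 * zeta_E * (alpha[k] + beta[k])
                                 for k in range(B) if k != b)
    gamma_E = sig * alpha[b] / (interf + sigma2)                     # Eq. (16)
    S_b = max(np.log2(1 + gamma_b) - np.log2(1 + gamma_E), 0.0)      # Eq. (17)
    print(f"Bob {b + 1}: secrecy rate {S_b:.2f} bpcu")
```

For the power allocation itself, the obstacle remains the non-convex secrecy constraint (19b).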
To address this issue, we introduce an auxiliary variable \(C_{E}^{b}\) to transform (19b) into the following two constraints: \(R_{b}-C_{E}^{b}\geq\tau\) and \(C_{E}^{b}-R_{E}^{b}\geq 0\). The latter is non-convex, and can be further transformed into \(2^{C_{E}^{b}}-1\geq|\mathbf{q}_{E}^{\mathsf{H}}\mathbf{H}_{E}\mathbf{f}_{b}|^{2}\zeta_{E}\frac{\alpha_{b}}{I_{E}^{b}}\) and \(|\mathbf{q}_{E}^{\mathsf{H}}\mathbf{H}_{E}\mathbf{f}_{b}|^{2}\zeta_{E}\beta_{b}+\sum\limits_{b^{*}=1,b^{*}\neq b}^{B}|\mathbf{q}_{E}^{\mathsf{H}}\mathbf{H}_{E}\mathbf{f}_{b^{*}}|^{2}\zeta_{E}(\alpha_{b^{*}}+\beta_{b^{*}})+\sigma_{z}^{2}\geq I_{E}^{b}\), where \(I_{E}^{b}\) is another newly introduced auxiliary variable. The tight coupling of the optimization variables in the constraint \(2^{C_{E}^{b}}-1\geq|\mathbf{q}_{E}^{\text{H}}\mathbf{H}_{E}\mathbf{f}_{b}|^{2}\zeta_{E}\frac{\alpha_{b}}{I_{E}^{b}}\) introduces further challenges in solving the optimization problem. To handle it, for \(b\in\mathcal{B}\), we can further transform this constraint into four constraints, i.e., \(\exp(Z_{b})\geq|\mathbf{q}_{E}^{\text{H}}\mathbf{H}_{E}\mathbf{f}_{b}|^{2}\zeta_{E}\mathrm{exp}(X_{b}-Y_{b})\), \(\alpha_{b}\leq\exp(X_{b})\), \(I_{E}^{b}\geq\exp(Y_{b})\), and \(2^{C_{E}^{b}}-1\geq\exp(Z_{b})\). Summarizing the above steps, the optimization problem \(\mathcal{P}_{2}\) becomes \[\mathcal{P}_{3}: \max_{\alpha_{b},\beta_{b},C_{E}^{b},I_{E}^{b},X_{b},Y_{b},Z_{b}}\tau\] (20a) s.t. (18b), (18c), \[R_{b}-C_{E}^{b}\geq\tau, \tag{20b}\] \[\exp(Z_{b})\geq|\mathbf{q}_{E}^{\text{H}}\mathbf{H}_{E}\mathbf{f}_{b}|^{2}\zeta_{E}\mathrm{exp}(X_{b}-Y_{b}),\] (20c) \[\sum_{i=1,i\neq b}^{B}|\mathbf{q}_{E}^{\text{H}}\mathbf{H}_{E}\mathbf{f}_{i}|^{2}\zeta_{E}\alpha_{i}{+}\sum_{i=1}^{B}|\mathbf{q}_{E}^{\text{H}}\mathbf{H}_{E}\mathbf{f}_{i}|^{2}\zeta_{E}\beta_{i}{+}\sigma_{z}^{2}\geq I_{E}^{b},\] (20d) \[\alpha_{b}\leq\exp(X_{b}),\] (20e) \[I_{E}^{b}\geq\exp(Y_{b}),\] (20f) \[2^{C_{E}^{b}}-1\geq\exp(Z_{b}). \tag{20g}\] Although the problem \(\mathcal{P}_{3}\) becomes more tractable than the original problem \(\mathcal{P}_{1}\), constraints (20e) and (20g) remain non-convex. To address this, we employ the successive convex approximation (SCA) method with first-order Taylor expansion to tackle them. In particular, the problem \(\mathcal{P}_{3}\) can be finally rewritten as \[\mathcal{P}_{4}: \max_{\alpha_{b},\beta_{b},C_{E}^{b},I_{E}^{b},X_{b},Y_{b},Z_{b}}\tau\] (21a) s.t. (18b), (18c), (20b), (20c), (20d), (20f), \[\alpha_{b}\leq\exp(\bar{X}_{b}[n])(X_{b}-\bar{X}_{b}[n]+1), \tag{21b}\] \[2^{\bar{C}_{E}^{b}[n]}(\ln 2(C_{E}^{b}-\bar{C}_{E}^{b}[n])+1)-1\geq\exp(Z_{b}). \tag{21c}\] The right-hand side of (21b) and the left-hand side of (21c) are the first-order approximations of \(\exp(X_{b})\) and \(2^{C_{E}^{b}}-1\) at the points \(\bar{X}_{b}[n]\) and \(\bar{C}_{E}^{b}[n]\), respectively, which are the solutions of \(X_{b}\) and \(C_{E}^{b}\) from the \(n\)-th iteration. Obviously, \(\mathcal{P}_{4}\) is a convex problem that can be solved by using the well-known Matlab CVX toolbox [12]. ## IV Simulation Results In this section, we present the results of the analysis and optimization of the sum secrecy rate of HMIMO with PA. The baseline scheme with fixed PA coefficients is introduced and compared. To this end, we implement a scenario in which \(B=2\) Bobs and one Eve receive information from one Alice. Unless otherwise stated, Alice and the Bobs employ, respectively, \(N_{\text{A}}=20\times 20\) and \(N_{\text{B}}=10\times 10\) antenna elements. 
On the other hand, we test the effect of different numbers of antennas for Eve on the secrecy performance in the sequel. The first antenna element of Alice is located at the origin of the 3D plane, i.e., its coordinate is \(\mathbf{c}_{\text{A},1}=[0,0,0]\), whereas the 3D coordinates of the first antennas of Bobs \(1\) and \(2\) are \(\mathbf{c}_{1,1}=[40,-20,0]\) and \(\mathbf{c}_{2,1}=[60,30,0]\), respectively. We assume that Eve is close to Bob \(2\), with its first antenna located at \(\mathbf{c}_{\text{E},1}=[60,25,0]\). Such a simulation setup is depicted in Fig. 2.
Fig. 2: Simulation setup.
For the channel parameters, we set \(\eta=2.7\) and \(\Lambda=1000\). Unless otherwise stated, the inter-element spacing of all arrays is set as \(\delta=\lambda/4\), with the antennas indexed by \(\mathbf{c}_{i,n}=[[\mathbf{c}_{i,1}]_{1}+\delta\cdot\mathrm{mod}(n-1,N_{i,x}),[\mathbf{c}_{i,1}]_{2}+\delta\cdot\lfloor(n-1)/N_{i,x}\rfloor,0]\), for \(n=2,\cdots,N_{i}\), with \(i\in\{\text{A},\text{B},\text{E}\}\). The sum transmit power is configured as \(P_{T}=\sum_{b=1}^{2}(\alpha_{b}+\beta_{b})=2\). The signal-to-noise ratio (SNR) is defined as \(1/\sigma_{z}^{2}\) and the number of trials is set to \(1000\).
Fig. 3: Sum secrecy capacity with the proposed PA (\(N_{\text{B}}=N_{\text{E}}=10\times 10\), channel error variance \(\xi=0\), and \(\delta=\lambda/4\)).
Fig. 4: Effect of CSI imperfectness on sum secrecy capacity (\(N_{\text{B}}=N_{\text{E}}=10\times 10\), various channel error variances \(\xi\)'s).
Fig. 3 compares the proposed PA scheme with fixed PA coefficients in terms of sum secrecy rate, showing the significant performance enhancement (up to two-fold) with the aid of PA, especially in the high SNR regime. When the SNR value is \(20\) dB, the proposed PA approach with \(\delta=\lambda/8\) achieves more than \(50\) bpcu while the fixed PA scheme with \(\alpha_{b}=\beta_{b}=0.5\) and \(\delta=\lambda/8\) achieves about \(24\) bpcu. For the fixed PA coefficients, the performance becomes better when \(\alpha_{b}\) increases. However, this does not mean that AN fails to play an essential role in the secrecy performance. With the aid of the proposed PA, we are able to turn AN from foe to friend. Fig. 4 studies the effect of CSI imperfectness. For the case of fixed PA coefficients, the performance degradation is obvious when the SNR is large. However, the proposed PA scheme shows great robustness against imperfect CSI. Fig. 5 examines the impact of the inter-element spacing. It is noted that we keep the numbers of BS antennas unchanged. When the inter-element spacing reduces from \(\lambda/2\) to \(\lambda/8\), we observe a performance gain for our proposed PA scheme. However, with fixed PA coefficients, only a small gain is observed in the low SNR regime, and it vanishes as the SNR increases. Fig. 6 studies the effect of \(N_{\text{E}}\) on the sum secrecy rate. Among the selected setups, e.g., \(N_{\text{E}}\in\{6\times 6,10\times 10,16\times 16\}\), the performance curves almost overlap. In other words, the number of Eve's antennas fails to play an essential role in the sum secrecy rate of the studied multi-user HMIMO systems. The reason is that the term related to Eve's combining vector \(\mathbf{q}_{\text{E}}\), i.e., \(|\mathbf{q}_{\text{E}}^{\text{H}}\mathbf{H}_{\text{E}}\mathbf{f}_{b}|^{2}\), appears in both the denominator and the numerator of (16). 
In this sense, its effect on Eve's rate is cancelled out, so that the dominant impact comes from the control of \(\alpha_{b}\) and \(\beta_{b}\), i.e., the power allocation. Last, we extend the two-Bob scenario to a four-Bob scenario, where the four Bobs are spatially distributed over the \(xy\) plane while fixing \(z=0\). In this experiment, we use a heat map to further illustrate the performance gain of the proposed PA scheme over fixed PA coefficients (\(\alpha_{b}=\beta_{b}=0.5,\forall b\)) as a function of Eve's location (varying \(x\) and \(y\) coordinates while fixing \(z=0\)). The simulation results with the SNR being \(-10\) dB are shown in Fig. 7. It is observed from the figure that the performance gain in terms of sum secrecy rate becomes more pronounced when the number of legitimate users increases (by comparing with Fig. 3). This is because, in our setup, the sum transmit power \(P_{T}\) increases linearly with the number of legitimate users. The sum secrecy rate of the fixed PA scheme falls within the range \([7.25,7.95]\) bpcu, while that of the proposed PA falls within the range \([36,46]\) bpcu. In addition, the proposed PA is insensitive to the location of Eve. In other words, regardless of the distance between Eve and any of the Bobs, the sum secrecy rates surrounding a specific legitimate user with the proposed PA are almost constant, with only very small variation. ## V Conclusion In this paper, we have analyzed the secrecy performance of the multi-user HMIMO network under the max-min fairness criterion, where AN is adopted. We have further addressed the PA problem and studied the effect of multiple system parameters on the sum secrecy rate. It has been demonstrated that, with the aid of PA, an up to two-fold sum secrecy rate can be achieved compared to the case with fixed PA coefficients in the two-Bob scenario. This gain becomes more pronounced when we further increase the number of legitimate users. The obtained heat maps have shown that the sum secrecy rate of the proposed PA scheme falls within \([36,46]\) bpcu, compared to \([7.25,7.95]\) bpcu for the fixed PA scheme.
2308.01336
Excited bound states and their role in dark matter production
We explore the impact of highly excited bound states on the evolution of number densities of new physics particles, specifically dark matter, in the early Universe. Focusing on dipole transitions within perturbative, unbroken gauge theories, we develop an efficient method for including around a million bound state formation and bound-to-bound transition processes. This enables us to examine partial-wave unitarity and accurately describe the freeze-out dynamics down to very low temperatures. In the non-Abelian case, we find that highly excited states can prevent the particles from freezing out, supporting a continuous depletion in the regime consistent with perturbativity and unitarity. We apply our formalism to a simplified dark matter model featuring a colored and electrically charged $t$-channel mediator. Our focus is on the regime of superWIMP production which is commonly characterized by a mediator freeze-out followed by its late decay into dark matter. In contrast, we find that excited states render mediator depletion efficient all the way until its decay, introducing a dependence of the dark matter density on the mediator lifetime as a novel feature. The impact of bound states on the viable dark matter mass can amount to an order of magnitude, relaxing constraints from Lyman-$\alpha$ observations.
Tobias Binder, Mathias Garny, Jan Heisig, Stefan Lederer, Kai Urban
2023-08-02T18:00:01Z
http://arxiv.org/abs/2308.01336v2
# Excited bound states and their role in dark matter production ###### Abstract We explore the impact of highly excited bound states on the evolution of number densities of new physics particles, specifically dark matter, in the early Universe. Focusing on dipole transitions within perturbative, unbroken gauge theories, we develop an efficient method for including around a million bound state formation and bound-to-bound transition processes. This enables us to examine partial-wave unitarity and accurately describe the freeze-out dynamics down to very low temperatures. In the non-Abelian case, we find that highly excited states can prevent the particles from freezing out, supporting a continuous depletion in the regime consistent with perturbativity and unitarity. We apply our formalism to a simplified dark matter model featuring a colored and electrically charged \(t\)-channel mediator. Our focus is on the regime of superWIMP production which is commonly characterized by a mediator freeze-out followed by its late decay into dark matter. In contrast, we find that excited states render mediator depletion efficient all the way until its decay, introducing a dependence of the dark matter density on the mediator lifetime as a novel feature. The impact on the viable dark matter mass can amount to an order of magnitude, relaxing constraints from Lyman-\(\alpha\) observations. + Footnote †: preprint: TUM-HEP 1469/23 TTK-23-21 ###### Contents * I Introduction * II Bound-state formation in vacuum * II.1 Matrix elements * II.2 Abelian case * II.3 Non-Abelian case * III Super critical behavior * III.1 Dark QED * III.2 Dark QCD * IV Colored t-channel mediator model * IV.1 Review of production mechanisms * IV.2 Bound state rates and processes * V Results for superWIMP scenario * V.1 Effective cross section * V.2 Relic abundance * V.3 Implications * VI Conclusion * A Evaluation of large \(n\) dipole transitions * A.1 Scattering-to-bound * A.2 Bound-to-bound * B. Cross sections and rates * B.1 Thermal average and Milne relations * B.2 Dark QED * B.3 Dark QCD * B.4 superWIMP scenario * C Relic abundance for dark QED ## I Introduction Understanding the composition of matter in our universe constitutes a major challenge of today's fundamental physics. Notably, explanations of both the observed dark matter density and matter-antimatter asymmetry necessitate the introduction of physics beyond the Standard Model (SM) and therewith the computation of interactions among new - and presumably heavy - particles in the early Universe. If such new particles interact via a light force carrier, a significant contribution to their depletion may be given by the formation and subsequent decay of bound states, which has intriguing consequences for their thermal history. For instance, in the context of electroweakly charged dark matter [1; 2; 3] and colored coannihilation scenarios [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15], it has been shown that the inclusion of bound state effects can strongly alter the prediction for the relic density. Generally, we can classify radiative bound state formation (BSF) processes in terms of its leading multipole contribution: 1. _Monopole:_ Bound-state formation via emission of a _charged scalar field_ can be extremely relevant [16; 17]. As the emission carries away charge, it changes the initial and final two-particle state, leading to non-orthogonal states and ultimately to a non-vanishing monopole contribution. 
The BSF cross section via monopole transitions has been worked out for arbitrary excited states. However, partial-wave unitarity can be problematic already for capture into the ground state [16]. 2. _Dipole:_ Known examples where radiative BSF is dominated by the contribution of the (dark electric) dipole moment, are \(U(1)\)[18; 19; 20; 21] or \(SU(N_{c})\)[21; 22; 23; 24; 25; 26; 27] gauge symmetry extensions of the SM. In these cases, the emitted particle is a massless gauge _vector field_. Another possibility considers the accompanying dark matter particle to have SM electroweak [1; 2; 3; 28] and/or color charge [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 29]. A famous example of the latter is squark coannihilation in the context of the Minimal Supersymmetric extension of the Standard Model (MSSM), or simplified \(t\)-channel mediator models inspired by it. 3. _Quadrupole:_ One example where quadrupole moments contribute at the leading order, considers the emission of a _real scalar field_ in the radiative bound state formation process [30; 31; 32; 33; 34]. Beyond capture into the ground state, little is known about higher exited states and bound-to-bound transitions. In this work, we will present a more detailed investigation of the second case, where BSF and transitions among bound states in unbroken gauge theories are dominated by the (chromo) electric dipole contribution. While it is indeed the most considered scenario, it still remains unclear by how much the inclusion of highly excited bound states into the chemical network contributes to the depletion of the dark matter relic density. As shown recently in Ref. [14], the impact of higher excitations can be sizeable, in particular, when considering scenarios beyond the paradigm of weakly interacting massive particles (WIMPs), such as conversion-driven freeze-out [35], and for bound states driven by a perturbative, unbroken non-Abelian gauge symmetry. The greatest obstacle is the accurate evaluation of the dipole matrix elements for BSF of and transitions among highly excited states, especially for non-Abelian theories. General recursive formulas to evaluate the dipole matrix elements for all principle and angular momentum quantum numbers, \(n\) and \(\ell\), respectively, have been provided in [14]. Here, we significantly improve on efficiency and numerical stability of their evaluation, which allows us to explore the contribution of up to half a million bound states (all \(n\leq 1000,\ \ell\leq n-1\) states) when considering BSF. For bound-to-bound matrix elements, we are able to include all electric dipole transitions up to \(n\leq 100,\ \ell\leq n-1\), which are about one million transitions in total for the processes allowed by the selection rules. These improvements allow us to address several key scientific questions. First, we consider the velocity dependence of the BSF cross section in vacuum and investigate partial wave unitary properties in Abelian and non-Abelian gauge theories. For the case of \(SU(N_{c})\), the inclusion of a large number of excitations sheds light on the breakdown of our theoretical framework and the necessity of its unitarization. Next, we consider the interplay of BSF, ionization, bound-to-bound transitions and bound state decays in the thermal bath of the early Universe. Following the formalism of [14; 36], we describe their effect on the bound state constituents' abundance via an effective thermally averaged cross section. 
Focusing on the perturbative coupling regime which turns out to be consistent with unitarity, we pose the important question whether the effective cross section grows slower or faster than the inverse temperature, implying freeze-out or a continuous depletion of the abundance, respectively. The latter case is found for non-abelian interactions and yields important phenomenological implications. We exemplify these implications in detail in the last part of our work, where we apply our numerical framework to the superWIMP scenario [37; 38], considering a simplified model with a colored and electrically charged \(t\)-channel mediator [39; 40]. We showcase the effect of highly excited bound states and find that the combined effect of strong and electromagnetic interactions conspire to reduce the relic density by more than an order of magnitude compared to the case when including Sommerfeld-enhanced annihilations only. Thereby we improve on earlier results within this scenario considering the ground state only [40; 41]. We find that this has important implications on the viable parameter space, in particular to relief Lyman-\(\alpha\) constraints. The remainder of this paper is organized as follows. In Sec. II, we discuss BSF in vacuum and investigate the velocity-dependence of the BSF cross section in view of partial wave unitarity. In Sec. III, we study the scaling of the effective cross section with temperature and discuss the implications for freeze-out. The setup and relevant quantities for the \(t\)-channel model are reviewed in Sec. IV, and our results for the impact of highly excited states on the superWIMP mechanism for dark matter production are discussed in Sec. V. We conclude in Sec. VI. Appendix A details the evaluation of BSF and transition matrix elements and App. B provides expressions for the cross sections and rates used in our analysis. Finally, in App. C, we show implications for the cosmologically viable parameter space of dark QED including highly excited bound states. Bound-state formation in vacuum ### Matrix elements Our starting point for computing radiative BSF in gauge theories is _potential non-relativistic effective field theory_[42; 43; 44; 45]. In this framework, the interaction of two non-relativistic particles with \(SU(N_{c})\) or \(U(1)\) gauge vector fields at the ultra-soft scale can effectively be described by a _(chromo-)electric dipole operator_, \(g\,{\bf r}\cdot{\bf E}\), with gauge coupling \(g\), relative distance \({\bf r}\), and electric field \({\bf E}\). In the two-particle subspace of the non-relativistic particles, this operator leads to matrix elements of the form \[\langle\psi_{f}|\,{\bf r}\,|\psi_{i}\rangle=\int{\rm d}^{3}r\;\psi_{f}^{*}({ \bf r})\;{\bf r}\;\psi_{i}({\bf r}). \tag{1}\] The squared absolute value of these matrix elements is directly related to our physical quantities of interest: bound-state formation cross sections and bound-to-bound rates for electric dipole transitions. For concrete examples related to dark matter, see Refs. [24; 20; 21; 46]. In App. A, we develop an efficient way of evaluating the matrix elements for systems, where the initial state \(\psi_{i}\) and the final state \(\psi_{f}\) are the solutions of two-body Schrodinger equations with corresponding potentials of Coulomb type: \[V_{i,f}(r)=-\frac{\alpha_{i,f}^{\rm eff}}{r}\;. \tag{2}\] The effective coupling strength \(\alpha^{\rm eff}\) of the initial and final state incorporates the details of the underlying particle physics model. 
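As a concrete illustration of the matrix elements in Eq. (1) for the Coulomb-type potentials of Eq. (2), the following Python sketch evaluates the radial part of a bound-to-bound dipole matrix element using hydrogen-like wave functions. The values of \(\mu\) and the coupling are placeholders (natural units \(\hbar=c=1\) are assumed), and scattering states, which are needed for bound-state formation itself, are not treated here.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

# Placeholder parameters: reduced mass and effective bound-state coupling
mu, alpha_b = 1.0, 0.1
a = 1.0 / (mu * alpha_b)          # Bohr radius of the two-body system

def R(n, l, r):
    """Normalized hydrogen-like radial wave function R_{nl}(r) for V = -alpha_b/r."""
    rho = 2.0 * r / (n * a)
    norm = np.sqrt((2.0 / (n * a))**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * eval_genlaguerre(n - l - 1, 2 * l + 1, rho)

def radial_dipole(n1, l1, n2, l2):
    """Radial integral  int_0^inf R_{n1 l1}(r) r R_{n2 l2}(r) r^2 dr."""
    integrand = lambda r: R(n1, l1, r) * r * R(n2, l2, r) * r**2
    val, _ = quad(integrand, 0.0, 50.0 * max(n1, n2)**2 * a, limit=200)
    return val

# Sanity check: the textbook 2p <- 1s radial integral is about 1.29 * a
print(radial_dipole(2, 1, 1, 0) / a)
```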
We will consider Abelian gauge theories where the effective couplings are equal, and non-Abelian gauge theories where they can be different. We denote the effective couplings by \(\alpha_{b}^{\rm eff}\) and \(\alpha_{s}^{\rm eff}\) when referring to bound and scattering states, respectively. In the following, we explore the contribution of highly excited bound states in concrete realisations. ### Abelian case As a first concrete model, we consider Quantum Electrodynamics (QED) in the non-relativistic regime of the Fermionic particles. The two-particle states of interest then consist of two oppositely charged particles forming gauge singlets. Standard Model examples are hydrogen recombination and positronium formation. In QED, the potential of both the initial and final state is attractive with identical strength. The corresponding BSF cross section, describing the electric dipole transition process of a scattering state into a bound state with quantum numbers \(n\) and \(\ell\), is \[(\sigma v)_{n\ell}=\frac{4\alpha}{3}\Delta E^{3}|\,\langle\psi_{n\ell}|\,{\bf r }\,|\psi_{\bf p}\rangle\,|^{2}, \tag{3}\] where \(\alpha\) is the fine-structure constant. The difference of the initial and final state energy is the positive quantity \(\Delta E=\frac{\mathbf{p}^{2}}{2\mu}+E_{{\cal B}_{n\ell}}\), where \(\mathbf{p}^{2}=\mu^{2}v^{2}\) with \(v\) being the relative velocity and \(\mu\) the reduced mass. Here, the absolute value of the binding energy is given by \(E_{{\cal B}_{n\ell}}=\frac{\mu\alpha^{2}}{2n^{2}}\) as in QED \(\alpha=\alpha_{i}^{\rm eff}=\alpha_{f}^{\rm eff}\). The BSF cross section as defined in Eq. (3) is averaged over initial and summed over final spin degrees of freedom, as well as summed over the magnetic quantum numbers of the bound state (see App. B for details). Since the electric dipole operator is spin conserving, the same equation that applies to the Fermionic case (QED) also applies to, _e.g._, a complex scalar field charged under a \(U(1)\) gauge symmetry [31]. For simplicity, we will commonly refer to both cases as \(U(1)\) in the following, as we are mainly interested in physics beyond the SM and we would like to cover both the Fermionic and complex scalar case simultaneously in our discussion. We numerically evaluate the \(U(1)\) BSF cross section in Eq. (3), as detailed in App. A, for various \(n,\ell\). We present the result in Fig. 1 in a model independent way, _i.e._ we multiply the cross section by \(v\mu^{2}/\alpha^{3}\) to (i) show the remaining dependence on \(\alpha/v\) and (ii) to highlight deviations of the velocity dependence from \(1/v\). Let us begin with the well-known case of capture into the ground state \(n=1\), \(\ell=0\) (blue dotted line). For \(v\ll\alpha\), the cross section for capture into the ground state scales as \(1/v\) so that the shown product, \(v\times(\sigma v)\), approaches a constant (see _e.g._ Ref. [31], also regarding its magnitude relative to annihilation). Similarly, we find the same velocity scaling when fixing the final state \(\ell\) and summing over all \(n\leq 1000\) (gray lines). Specifically, the \(\ell=0\) gray line is larger by a factor \(1.268\) than the \(n=1\), \(\ell=0\) blue dotted line (capture into the ground state). We note that this factor is smaller than the upper bound \(1.6\) derived analytically in Refs. [47; 18]. For \(\ell=0\), we additionally checked that our result for each \(n\leq 5\) coincides with the analytic results available in Ref. [48] and up to \(n=10\) for all \(\ell\) with those of [14]. 
Interestingly, when summing the BSF cross section both in \(n\) and \(\ell\) (blue lines), the velocity scaling becomes stronger than \(1/v\). Here, we sum all \(n\leq 10\) (dot-dashed), \(100\) (dashed), \(1000\) (solid) and always \(\ell\leq n-1\) accordingly. We compare our summed result to the well-known Kramer's logarithm [49; 50] (red line): \[\sum_{n,\ell}(\sigma v)_{n\ell}\simeq\frac{32\pi}{3\sqrt{3}}\frac{\alpha^{2}} {\mu^{2}}\frac{\alpha}{v}[\log(\alpha/v)+\gamma_{E}]\,,\,\text{for}\,\,v\ll\alpha. \tag{4}\] The ratio of the Kramer's logarithm and our fully summed numerical result is shown in the bottom panel of Fig. 1. For \(v\ll\alpha\) this ratio is expected to approach unity when including a very large number of excited states. We confirm this trend within the range of our numerical limitations, \(n\leq 1000\), \(\ell\leq n-1\), which can also be seen as a non-trivial check of our code.1 Footnote 1: For a given \(\alpha/v\), the amount of excited states needed for a convergent sum can be estimated. To this end, we consider the sum that leads to the Kramer’s logarithm: \[\sum_{n}\frac{1}{n[n^{2}+(\alpha/v)^{2}]}\simeq[\log(\alpha/v)+\gamma_{E}],\, \text{for $v\ll\alpha$}\,. \tag{5}\] From the denominator, one can estimate that for a percentage accuracy, the maximum principle quantum number \(n\) needs to be roughly an order of magnitude larger than a given \(\alpha/v\). This is also what we observe for our summed numerical result in Fig. 1. For instance, summing all bound state contributions up to our numerical limit \(n\leq 1000,\ell\leq n-1\), provides a percentage-level accuracy for \(\alpha/v\lesssim 100\) only, while \(n\leq 100,\ell\leq n-1\) would require \(\alpha/v\lesssim 10\) for the same accuracy. The Kramer's logarithm has also been mentioned in earlier dark matter related works [51; 52; 48; 53]. Although the logarithm leads to a slope steeper than \(1/v\), partial-wave unitarity is not violated here as the sum over different \(\ell\) automatically includes different initial state angular momenta. However, we have checked that each individual angular momentum contribution of the initial state does not violate partial-wave unitarity as it _does_ scale as \(1/v\) for \(v\ll\alpha\). For instance, in this limit, the BSF cross section of the \(\text{s}\to n\text{p}\) processes summed over all \(2\leq n\leq 1000\) is larger by a constant factor 3.8 than the \(\text{s}\to 2\text{p}\). Each angular momentum contribution therefore remains below the partial wave unitarity limit for all \(v\), provided the coupling is sufficiently small. In the non-Abelian case, we will observe a qualitatively different behavior, that implies partial wave unitarity violation even for (in principle) arbitrarily small couplings. ### Non-Abelian case As our second example, we consider a non-Abelian \(SU(N_{c})\) gauge theory, specifically \(SU(3)\). Interestingly, it provides a qualitatively different phenomenology from QED even though in both cases the leading BSF processes are based on dipole transitions. While in QED the initial and final state potentials are both _attractive_ with the same strength, in Quantum Chromodynamics (QCD), the initial state potential of the adjoint pair is _repulsive_. As we shall point out and explore in the following, this feature is accompanied by partial-wave unitarity violation in QCD or in a general \(SU(N_{c})\) for \(N_{c}\geq 2\). 
In particular, we consider pairs of non-relativistic Fermionic particles in the fundamental and anti-fundamental representation of \(SU(3)\), _i.e._ the two-particle space is spanned by the direct sum of singlet and adjoint pair states: \(3\otimes\bar{3}=1\oplus 8\). An SM example is heavy quarkonium formation in SM QCD. The BSF cross section describing the chromo electric dipole transition process of an adjoint scattering state into a singlet bound state is given by \[(\sigma v)_{n\ell}=\frac{C_{F}}{N_{c}^{2}}\frac{4\alpha}{3}\Delta E^{3}|\langle\psi^{[1]}_{n\ell}|\,\mathbf{r}\,|\psi^{[\mathbf{adj}]}_{\mathbf{p}}\rangle|^{2}, \tag{6}\] where an average over color degrees of freedom is performed. Since the final state is necessarily attractive to support bound states and the gluon carries away an octet color charge, the initial scattering state is always in the repulsive adjoint representation. The same equation also holds for the more general \(SU(N_{c})\) gauge group, as well as for a complex scalar field (see _e.g._ Ref. [24; 46] for the case of non-fundamental representations).
Figure 1: _Upper_: Bound-state formation cross section Eq. (3) for \(U(1)\), summed over all possible final bound-state quantum numbers \(n\) and \(\ell\leq n-1\). From the truncation of the sum for \(n\leq 1,10,100,1000\) (blue), one can infer that higher excited states contribute more for decreasing velocity. The red line shows the analytic approximation Eq. (4) when summing over all \(n,\,\ell\), known as Kramer's logarithm. The gray lines show the contribution to the sum for fixed \(\ell\), all of which approach a constant value at large \(\alpha/v\), respecting partial-wave unitarity. _Lower:_ Ratio of Eq. (4) and the summed result.
In the remainder of this section, we consider a constant coupling in the perturbative regime. We do this to investigate the unitarity violation independently of the effects induced by a running coupling. As long as the beta function is negative, running coupling effects can only enlarge the regime where partial wave unitarity is violated. We will return to the impact of running in the sections below. Adopting this setting, we evaluate the \(SU(3)\) BSF cross section in Eq. (6) for various \(n,\ell\) and present the results in Fig. 2 in analogy to the previous case of \(U(1)\). In the upper panel, one can notice that the velocity dependence of the summed BSF cross section may approach a power law when including as many excited states as our numerical limit allows for (\(n\leq 1000\), \(\ell\leq n-1\)) and considering only the region of \(\alpha/v\lesssim 100\) where \(n\leq 1000\) are expected to be sufficient to capture the full result accurately. The scaling is much stronger than the previous Kramer's logarithm in \(U(1)\). In fact, it is even stronger than \(1/v^{2}\). Specifically, when fitting the summed cross section \(\sum_{n,\ell}(\sigma v)_{n\ell}\propto v^{-\gamma}\) for \(v\ll\alpha\) with a power law, we obtain \(\gamma\approx 4.0\). Such a velocity dependence raises concerns regarding partial-wave unitarity. To investigate this issue, a scattering state with fixed _initial_ angular momentum, denoted by \(\ell^{\prime}\), needs to be considered. From Eq. (6) we separate out this contribution by splitting Eq. (10) into two contributions and denote the corresponding cross section by \((\sigma v)_{n\ell}^{\ell^{\prime}}\), where the superscript denotes the fixed initial scattering state angular momentum. For a given \(\ell^{\prime}\), we then sum \((\sigma v)_{n\ell}^{\ell^{\prime}}\) over all possible final state quantum numbers \(n,\ell\), compatible with the selection rules \(\ell=\ell^{\prime}\pm 1\), \[\left(\sigma v\right)^{\ell^{\prime}}=\sum_{n,\ell}\left(\sigma v\right)_{n\ell}^{\ell^{\prime}}. \tag{7}\] We require that this quantity respects the \(\ell^{\prime}\)th-partial wave unitarity bound. Specifically, as done in Ref. [53; 18; 54], we consider the partial-wave unitarity cross section for \(2\to 2\) total inelastic collisions given by [55] \[\left(\sigma v\right)_{\text{uni.}}^{\ell^{\prime}}=\frac{\pi(2\ell^{\prime}+1)}{\mu^{2}v}. \tag{8}\] In the lower panel of Fig. 2, we show the introduced summed \((\sigma v)^{\ell^{\prime}}\) divided by the corresponding \(\ell^{\prime}\)th-partial wave unitarity cross section from Eq. (8). This ratio is multiplied by \(\alpha^{-3}\), leaving only a dependence on \(\alpha/v\). In this way, the partial-wave unitarity limits for a given \(\alpha\) correspond to horizontal lines with values equal to \(\alpha^{-3}\). We show the \(\ell^{\prime}=0\) (s-wave) to \(\ell^{\prime}=3\) (f-wave) BSF cross sections summed in \(n\leq 1000\), and one particular high initial angular momentum with \(\ell^{\prime}=100\). The maximum principal quantum number is taken as our numerical limit (\(n\leq 1000\)), which is sufficient for the first four partial-wave summed cross sections to converge for \(\alpha/v\lesssim 100\). From these results, one can infer that each summed \((\sigma v)^{\ell^{\prime}}\) grows faster than \(1/v\), _i.e._ the unitarity limit will be exceeded at a finite velocity for any \(\ell^{\prime}\) we can resolve. This is independent of the chosen coupling strength. We point out that among all partial waves the s-wave unitarity bound is always violated at the largest velocities, though all curves approach a common behavior for decreasing velocity. We explicitly checked for the s-wave case that partial wave unitarity is even violated when including only a single, large \(n\) bound state, implying that it is not the summation over all possible final states for a given \(\ell^{\prime}\) which is problematic. Moreover, we observe the same situation for various \(SU(N_{c})\). To make an even more general statement, let us consider \(\left(\sigma v\right)^{\ell^{\prime}}\) as a function of the ratio \(\alpha_{s}^{\text{eff}}/\alpha_{b}^{\text{eff}}\).
Figure 2: _Upper_: The adjoint-to-singlet bound-state formation cross section in Eq. (6) shown for \(SU(3)\), summed over all possible final bound-state quantum numbers \(n\) and \(\ell\leq n-1\). From the truncation of the sum for \(n\leq 1,10,100,1000\) (blue), one can infer that higher excited states contribute more for decreasing velocity. _Lower_: Rescaled ratio of BSF cross section for fixed initial angular momentum \(\ell^{\prime}\) and the corresponding partial-wave unitarity limit. For a given \(\alpha\), unitarity is violated when the rescaled ratio is above \(1/\alpha^{3}\), for which two examples are shown by the horizontal lines.
Now, \(SU(N_{c})\) is a special case which lies in the region \(\alpha_{s}^{\rm eff}/\alpha_{b}^{\rm eff}\in[-1/3,0)\), where the lower limit corresponds to \(N_{c}=2\) and the upper to the large \(N_{c}\) limit. 
The \(U(1)\) case corresponds to \(\alpha_{s}^{\rm eff}/\alpha_{b}^{\rm eff}=1\). Our numerical results suggest that in the range \(\alpha_{s}^{\rm eff}/\alpha_{b}^{\rm eff}<1\), \((\sigma v)^{\ell^{\prime}}\) scales stronger than \(1/v\), while for \(\alpha_{s}^{\rm eff}/\alpha_{b}^{\rm eff}\geq 1\) no evidence of partial wave-unitarity violation is found within our numerical boundaries. Note that the mechanism behind the unitarization of BSF for \(\alpha_{s}^{\rm eff}/\alpha_{b}^{\rm eff}<1\) is still an open problem. While we point out this problem here for BSF via _dipole transitions_, it is worth noting that a similar question has been recently raised for _monopole transitions_ in Ref. [16] where partial-wave unitarity can be violated already for capture into the ground state level \(n=1\). For non-numerical evidence of unitarity violation in non-Abelian gauge theories, a simple analytic expression would be warranted. We managed to get an approximate analytic result by taking two limits in Eq. (6): (i) \(\alpha_{s}^{\rm eff}\to 0\) and subsequently (ii) \(\tilde{\zeta}_{b}\to\infty\) where \(\tilde{\zeta}_{b}=\alpha_{b}^{\rm eff}/(nv)\). Taking these limits, we obtain the result for the s-wave case in \(SU(N_{c})\) \[(\sigma v)_{n,\ell=1}^{\ell^{\prime}=0}\simeq\frac{C_{F}}{N_{c}^{2}}\frac{4 \alpha}{3}\frac{32\pi\alpha_{b}^{\rm eff}}{\mu^{2}}n(n^{2}-1),\ \mbox{for (i) and (ii)}. \tag{9}\] The two limits are justified for relative velocities which fulfill the condition2 Footnote 2: For adjoint-to-singlet BSF in \(SU(N_{c})\)\(\alpha_{s}^{\rm eff}=-\alpha/(2N_{c})\) and \(\alpha_{b}^{\rm eff}=C_{F}\alpha\), where \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\). \[2\pi|\alpha_{s}^{\rm eff}|\ll v\ll\frac{\alpha_{b}^{\rm eff}}{2n^{2}}. \tag{10}\] In this velocity regime, we compared our direct numerical evaluation of Eq. (6) to the analytical result in Eq. (9) for a variety of \(N_{c}\) and \(n\) values and find very good agreement. The fact that the s-wave BSF cross section reaches a constant value for the above velocity regime is another non-trivial check of our numerical implementation also for very large \(n\). However, the velocity regime may be too restricted to analytically proof unitarity violation for contributions of a single \(n\). Namely, while for \(SU(N_{c})\) the s-wave BSF cross section approaches the unitarity limit for increasing \(n\), the velocity regime where the analytic expression is valid becomes smaller and eventually - (very) close to the unitarity bound - the condition in Eq. (10) cannot be met. Nevertheless, _if_ there exists a theory with \(\alpha_{s}^{\rm eff}=0\), then there is no lower bound on \(v\) and violation of s-wave unitarity can be shown with the above formula. Notice that \(\alpha_{s}^{\rm eff}=0\) corresponds to the large \(N_{c}\) limit of \(SU(N_{c})\), which is, however, not justified for all velocities for a finite \(N_{c}\). In the following, we explore phenomenological consequences focussing on the regime compatible with perturbativity and partial wave unitarity bounds. ## III Super critical behavior The impact of a set of bound states on the freeze-out dynamics of some particle species, \(j\), can under very general conditions be described by the Boltzmann equation \[\dot{n}_{j}+3Hn_{j}=-\langle\sigma v\rangle_{\rm eff}[n_{j}^{2}-(n_{j}^{\rm eq })^{2}]\,, \tag{11}\] where \(n_{j}\) is the number density and \(H\) the Hubble expansion rate. 
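For orientation, Eq. (11) can be integrated numerically in terms of the yield \(Y_{j}=n_{j}/s\) and \(x=m/T\) (both introduced below). The sketch assumes standard radiation-dominated relations for \(s\) and \(H\), a constant number of relativistic degrees of freedom, and a toy power-law effective cross section; all numerical values are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

# Toy integration of Eq. (11) in terms of Y = n_j/s and x = m/T (illustrative values only)
Mpl, gstar, g_dof = 1.22e19, 100.0, 4.0       # GeV, relativistic dof, internal dof of j
m = 1000.0                                     # GeV (placeholder mass)
sigv0, gamma = 1e-9, 0.5                       # <sigma v>_eff = sigv0 * x**gamma  [GeV^-2]

s = lambda x: (2 * np.pi**2 / 45) * gstar * (m / x)**3          # entropy density
H = lambda x: 1.66 * np.sqrt(gstar) * (m / x)**2 / Mpl          # Hubble rate
Yeq = lambda x: 45 / (4 * np.pi**4) * (g_dof / gstar) * x**2 * kn(2, x)

def dYdx(x, Y):
    lam = s(x) / (H(x) * x) * sigv0 * x**gamma
    return [-lam * (Y[0]**2 - Yeq(x)**2)]

sol = solve_ivp(dYdx, (10.0, 1e5), [Yeq(10.0)], method="LSODA", rtol=1e-8, atol=1e-30)
print("Y at x = 1e5:", sol.y[0, -1])
# gamma < 1: Y approaches a constant (freeze-out); gamma >= 1: Y keeps decreasing.
```

Whether the comoving density keeps depleting or freezes out depends on how fast the effective cross section grows with \(x\), as quantified next.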
The _effective cross section_, \(\langle\sigma v\rangle_{\rm eff}\), includes all the effects of pair annihilation as well as scattering-bound [4; 14; 36]. Here, we investigate whether the inclusion of an increasing number of excited states can lead to an effective cross section that grows sufficiently fast to maintain efficient depletion of the (comoving) particle number density and, hence, prevent the particle species (_e.g._ dark matter) from freezing out. We call this condition a _super critical_ behavior. To obtain the threshold for such a super critical behavior, let us consider a typical scenario where a particle species with mass \(m\) is initially in thermal equilibrium with a heat bath with temperature \(T\) and entropy density \(s\). We assume \(s\propto T^{3}\), \(H\propto T^{2}\), _i.e._ no (significant) change in the relativistic degrees of freedom of the bath. Introducing the yield as \(Y_{j}\equiv n_{j}/s\) and parametrizing time by \(x\equiv m/T\) in Eq. (11), one can estimate the yield evolution as a function of \(x\) as follows. For times where the yield \(Y_{j}(x)\) starts to deviate significantly from its equilibrium value, \(Y_{j}(x)\gg Y_{j}^{\rm eq}(x)\), also known as the time of chemical decoupling, \(x_{\rm cd}\), one can neglect the impact of \(Y_{j}^{\rm eq}(x)\) in the Boltzmann equation. This allows for an analytic solution for the yield evolution after chemical decoupling (see _e.g._ Ref. [56]), which up to constants, can be estimated to scale as \[Y_{j}(x_{0})\propto\frac{1}{\int_{x_{\rm cd}}^{x_{0}}{\rm d}x\,x^{-2}\,\langle \sigma v\rangle_{\rm eff}(x)}. \tag{12}\] The integral converges for \(x_{0}\to\infty\) only if \(\langle\sigma v\rangle_{\rm eff}(x)\) grows slower than \(x\) while for \(\langle\sigma v\rangle_{\rm eff}\propto x^{\gamma}\) with \(\gamma\geq 1\) the integral diverges. Accordingly, the particle species only freezes out for \(\gamma<1\) (typical WIMP) while the particle continues to deplete for \(\gamma\geq 1\). The critical value \(\gamma=1\) leads to logarithmic depletion and sets the threshold for what we define a super critical behavior. Above this threshold, the evolution of the yield approaches the scaling \(Y_{j}\propto x^{1-\gamma}\) for \(x\gg x_{\rm cd}\). In this case, the effective annihilation rate \(\Gamma_{\rm eff}\equiv n_{j}\langle\sigma v\rangle_{\rm eff}\) is dynamically driven to be proportional to the Hubble rate \(\Gamma_{\rm eff}\propto H\). In the presence of bound states, the effective cross section introduced above can be written as [36; 14] \[\left\langle\sigma v\right\rangle_{\rm eff}=\left\langle\sigma v\right\rangle_{ \rm ann}+\sum_{n,\ell}\left\langle\sigma v\right\rangle_{n\ell}R_{n\ell}\,, \tag{13}\] where the first term is the usual pair annihilation cross section, thermally averaged. In all cases considered in this work, it includes the Sommerfeld effect [57; 58]. The second term contains the thermal average of the BSF cross sections, denoted as \(\left\langle\sigma v\right\rangle_{n\ell}\). The summation over all bound-state quantum numbers contains a dimensionless, temperature dependent quantity, which obeys \(0\leq R_{n\ell}\leq 1\).3 Thus, the presence of bound states always increases the value of the effective cross section and could eventually lead to a super critical behavior. 
Introducing a simpler index to label a specific combination of quantum numbers, \(i=(n\ell)\), \(R_{i}\) can explicitly be written as [36; 14] Footnote 3: Within the electric dipole approximation, bound states with different spin are not directly coupled to each other. We thus leave the spin sum implicit, see App. B for details. \[R_{i} \equiv 1-\sum_{k}(M^{-1})_{ik}\frac{\Gamma_{\text{ion}}^{k}}{\Gamma^{k} }\,, \tag{14}\] \[M_{ik} \equiv \delta_{ik}-\frac{\Gamma_{\text{trans}}^{i\to k}}{\Gamma^{i}}\,,\] (15) \[\Gamma^{i} \equiv \Gamma_{\text{ion}}^{i}+\Gamma_{\text{dec}}^{i}+\sum_{k\neq i} \Gamma_{\text{trans}}^{i\to k}\,. \tag{16}\] The last line defines the total width of a particular bound state. It consists of the ionization rate, the rate of decay (via annihilation of the bound state's constituents), and bound-to-bound transition rates, respectively. The latter contains bound state excitation and de-excitation rates. In practice, we use the Milne relation (_cf_. App. B.1) to obtain the excitation rate from the de-excitation rate, and \(\Gamma_{\text{ion}}^{n\ell}\) from \(\left\langle\sigma v\right\rangle_{n\ell}\). Note that the inclusion of bound-to-bound transition rates _increases_\(R_{n\ell}\)[36]. ### Dark QED We now investigate the behavior of the effective cross section in Eq. (13) for our first example of a concrete model. In particular, we consider dark matter as a Dirac Fermion charged under a \(U(1)\) gauge group, which has been studied _e.g_.in Refs. [18; 19; 21]. We shall call it dark QED in the following, where dark photons set the thermal environment with temperature \(T\). Dark QED has only two parameters, which are the dark matter mass, \(m\), and the dark fine structure constant, \(\alpha\). For our analysis, we consider Sommerfeld enhanced annihilation and s-wave spin-singlet bound state decay into two dark photons. Relevant expressions are listed in App. B.2. We briefly comment on the influence of spin-triplet states below. The electric dipole interaction allows for transitions among the excited states in dark QED. The de-excitation rate is given by \[\Gamma_{\text{de-ex}}^{n^{\prime}\ell^{\prime}\to n\ell}=\frac{4\alpha(2 \ell+1)}{3}\Delta E^{3}|\left\langle\psi_{n\ell}\right|\mathbf{r}\left|\psi_ {n^{\prime}\ell^{\prime}}\right\rangle|^{2}, \tag{17}\] see App. A.2. The excitation rate is related via detailed balance, see App. B. Taking into account all leading processes, we show our results for the dark QED effective cross section in Fig. 3. Let us start by neglecting all bound state contributions, considering the case of Sommerfeld enhanced dark matter pair annihilation into two dark photons only (gray line). As is well known, the Sommerfeld effect in this case introduces a \(1/v\) dependence of the annihilation cross section for \(v\ll\alpha\), leading to \(\left\langle\sigma v\right\rangle_{\text{eff}}\propto x^{1/2}\) for sufficiently low temperatures. Next, we add the contribution of the spin-singlet ground state (blue dotted). Similarly to the Sommerfeld effect, the cross section for capture into the ground state also scales as \(1/v\) for \(v\ll\alpha\) (as seen in Fig. 1). For \(T\) much lower than the binding energy, this leads to \(\left\langle\sigma v\right\rangle_{10}\propto x^{1/2}\). In this regime, the spin-singlet decay rate is much faster than the ionization rate due to Boltzmann suppression and consequently \(R_{10}\to 1\), resulting again in an overall \(\left\langle\sigma v\right\rangle_{\text{eff}}\propto x^{1/2}\) scaling. 
Compared to the Sommerfeld enhanced pair annihilation only, the effective cross section is larger by a constant factor in the low temperature regime, as expected [18]. Let us finally add many excited spin-singlet states to the system. We include, according to the selection rules, _all_ possible electric dipole transitions among them via Eq. (17) to evaluate the effective cross section in Eq. (13). In Fig. 3, it can be seen that for \(n\leq 10\) and \(\ell\leq n-1\) (dashed blue line) the effective cross section increases strongly until around \(x\sim 10^{3}\), although within this regime also even higher excited states become important, as seen by the solid and dashed lines separating.

Figure 3: Effective cross section for heavy Fermions charged under a \(U(1)\) for constant coupling \(\alpha=0.1\), including the contribution from bound states for all \(n\), \(\ell\) up to \(n=1,10\), and \(100\) (blue dotted, dashed, and solid, respectively) and excluding BSF (gray solid). The effective cross section includes Sommerfeld enhanced annihilation, BSF, ionization, and all possible bound-to-bound transitions arising from the electric dipole interaction, as well as spin-singlet s-wave bound state decay. The red long-dashed line displays the slope \(\propto x^{1}\) for comparison.

One could therefore not deduce that dark QED does not exceed the critical power scaling \(\gamma=1\) (indicated by the dashed red line) when including only \(n\leq 10\). However, the result for \(n\leq 100\) and \(\ell\leq n-1\) (blue solid line), which includes about 5000 bound states and about \(10^{6}\) transitions among them, clearly shows that the effective cross section does not continue a strongly increasing trend but rather converges to a smaller power scaling. When fitting a power law in the regime \(10^{4}\lesssim x\lesssim 10^{5}\), where \(n\leq 100\) is trustworthy, we get a scaling of about \(\langle\sigma v\rangle_{\rm eff}\propto x^{0.6}\). Note that from the Kramers logarithm in Sec. II, it is not clear that the temperature dependence actually follows a power law. The no-transition limit follows closely, but lies slightly above, the \(n=1\) line as we explicitly checked. From this we conclude that transitions among the bound states are an important effect, which needs to be taken into account for predicting the relic abundance in dark QED precisely. We verified, by varying the included \(n\) between the two shown cases, that this less steep scaling, \(\langle\sigma v\rangle_{\rm eff}\propto x^{0.6}\), is already found for including only \(n\leq 80\), from which we deduce that the inclusion of even higher excited states would not change the scaling. The scaling power is trivially unaffected by the value of the dark matter mass, as the mass cancels out in the shown product \(\langle\sigma v\rangle_{\rm eff}\times m^{2}\). Moreover, one can show analytically that the scaling power is even unaffected when changing the value of \(\alpha\), which we have confirmed numerically in a wide range of \(\alpha\). We also explicitly checked that the inclusion of spin-triplet bound states leaves the scaling power of the effective cross section at low temperatures unaffected. This numerical observation can be understood since they only differ from the spin-singlet bound states by a smaller decay rate, which suppresses their contribution at small \(x\) but does not alter the late time scaling of the effective cross section (although it does increase the overall magnitude at late times).
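The quoted scaling power can be extracted with a simple log-log fit; the sketch below (Python) assumes the effective cross section has already been tabulated on a grid in \(x\), and the array contents shown here are placeholders:

```python
import numpy as np

# Placeholder table: x values and the corresponding <sigma v>_eff * m^2,
# e.g. obtained from the full bound-state computation.
x_grid = np.logspace(2, 6, 200)
sv_eff = 1e-3 * x_grid**0.6            # stand-in for the tabulated result

# Restrict the fit to the window where the included n are trustworthy.
mask = (x_grid > 1e4) & (x_grid < 1e5)

# Fit log <sigma v>_eff = gamma * log x + const; gamma is the scaling power.
gamma, const = np.polyfit(np.log(x_grid[mask]), np.log(sv_eff[mask]), 1)
print(f"fitted scaling power gamma = {gamma:.2f}")
```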
From all this, we conclude that dark QED _does not_ reach a critical scaling of the effective cross section in the low temperature regime within the electric dipole approximation. In other words, dark QED indeed freezes out within the approximations made.4 We show the impact of excited states on the relic abundance within dark QED in App. C, refining earlier results on this subject [53; 18].

Footnote 4: Interestingly, when considering processes with infinitely many dark photons (classical picture), other works come to a different conclusion [51; 59]. We only included ultra-soft processes with one dark photon.

### Dark QCD

As our second example of a concrete model, we consider dark matter as a Dirac Fermion in the fundamental representation of a new \(SU(3)\) gauge group, see _e.g._[21; 23], which is often called dark QCD. In the following analysis of dark QCD, we include standard expressions for the Sommerfeld enhanced pair annihilation cross section and the decay rate of the color singlet s-wave states, listed in App. B.3, neglecting spin-triplet states. Further, for the octet-to-singlet BSF cross section in the chromo electric dipole approximation we use Eq. (6). No singlet-to-singlet transitions can be mediated via the chromo electric dipole operator. Therefore, the effective cross section in Eq. (13) reduces to the no-transition limit \(\Gamma_{\rm trans}^{i\to j}=0\) (_cf._ App. B.3) when allowing only for the leading chromo electric dipole interactions. This simplification allows us to focus exclusively on s-wave bound states, which are the only ones with a non-vanishing decay rate in our approximations. While in dark QED the coupling is frozen, in dark QCD we take into account the one-loop running effect induced by gluon self-interactions in all considered quantities, as detailed in App. B.3. Yet, \(m\) remains the only dimensionful scale in the theory, implying that \(\langle\sigma v\rangle_{\rm eff}\times m^{2}\) is independent of the choice of \(m\). In Fig. 4, we show \(\langle\sigma v\rangle_{\rm eff}\times m^{2}\) for the specific choice \(\alpha(m)\equiv 0.025\) across the regime \(\alpha(m/x)\lesssim 1\). We checked that the BSF cross sections are compatible with partial wave unitarity bounds for the velocities that give a sizeable contribution to the thermal average, within the perturbative regime. The scaling power in the absence of bound states (Sommerfeld only) is, unsurprisingly, the same as in the dark QED case at low temperatures, _i.e._\(\langle\sigma v\rangle_{\rm eff}\propto x^{1/2}\).

Figure 4: Effective cross section for heavy Fermion triplets under \(SU(3)\), assuming \(\alpha(m)=0.025\). The critical scaling \(\langle\sigma v\rangle_{\rm eff}\propto x^{1}\) (red dashed) is exceeded when respecting running couplings (darker blue), _i.e._ no freeze out occurs, as opposed to using constant coupling strength (lighter blue). The effective cross section includes Sommerfeld enhanced annihilation, BSF and ionization via chromo electric dipole interactions, as well as spin-singlet s-wave bound state decay. Note that no bound-to-bound transitions occur in dark QCD in dipole approximation. The gray line shows the case without including bound states.

The inclusion of s-wave bound states, however, leads to a much steeper scaling of the effective cross section.
Even when omitting the running of the coupling strength, _i.e._\(\alpha=0.025\) at all scales (light blue line, using \(n\leq 1000\)), we get a scaling power of about \(x^{0.9}\) for \(x>10^{5}\), which is close to the critical line. When fully including one-loop running (darker blue lines), the scaling of the effective cross section for dark QCD becomes super critical with a scaling of around \(x^{1.1}\). We also find super critical behavior for other choices than \(\alpha(m)\equiv 0.025\). From this we conclude that dark QCD does not freeze-out even in the perturbative regime. However, since the scaling exceeds the critical one only slightly, we find a moderate effect on the abundance. For instance, assuming the dark sector is in thermal equilibrium with the SM bath, the addition of excitations \(2\leq n\leq 1000\) leads to a reduction of the dark matter abundance by around \(50\%\) at \(x=10^{9}\) for \(m\sim 10^{6}\,\text{GeV}\) and a slightly larger percentage for smaller masses. In the following sections, we consider a model featuring dark matter and an accompanying particle that is charged under SM QED and QCD, and hence subject to bound state effects. The electric charge of that particle will allow for color singlet-to-singlet transitions, implying that the inclusion of \(\ell>0\) states pushes the scaling of the effective cross section further inside the super critical regime (_cf._ Fig. 6). As a consequence, the corrections to the relic abundance in the perturbative regime will be much larger. ## IV Colored T-channel mediator model We consider a singlet Majorana Fermion \(\chi\) being the dark matter candidate, and a scalar mediator \(\tilde{q}\) with gauge quantum numbers identical to those of either an up- or down-type right-handed SM quark \(q_{R}\) (we focus on the latter case in our numerical results for concreteness). The dark matter field \(\chi\) interacts with the SM only via the Yukawa interaction \[\mathcal{L}_{\text{int}}=\lambda_{\chi}\tilde{q}\tilde{q}_{R}\chi+\text{h.c.}\,, \tag{18}\] while the mediator \(\tilde{q}\) has additional interactions with the \(SU(3)_{c}\) and \(U(1)_{Y}\) SM gauge fields given by the usual kinetic term with covariant derivatives. We assume a mass \(m_{\tilde{q}}>m_{\chi}\) such that the mediator can decay into the dark matter particle rendering only the latter stable on cosmological time scales. This model belongs to the class of so-called \(t\)-channel mediator models, see _e.g._[60], that are being actively consideredin the context of LHC dark matter searches, see _e.g._[61] for a recent account on the subject.5 Footnote 5: The term _mediator_ refers to the dark matter-SM interaction, not to be confused with the long-range force carrier which in this case is the gluon (and photon). Occasionally, this class of models has alternatively been dubbed _charged parent particle model_. Dark matter production is governed by the interaction of \(\chi\) with the mediator field \(\tilde{q}\), controlled by \(\lambda_{\chi}\), as well as the dynamics of the mediator itself, largely driven by its gauge interactions. In particular, as the \(\tilde{q}\) and \(\tilde{q}^{\dagger}\) particles are color and electrically charged, they can form bound states, that have an important impact on the freeze-out [14, 15, 16, 1, 9, 10, 11]. Here, we are particularly interested in including excited bound states, following [14, 9]. 
### Review of production mechanisms There are three distinct possibilities for how the freeze-out dynamics occurs, known as coannihilation [62], conversion-driven freeze-out [63, 35] and superWIMP production [37, 38], respectively. In addition, the model can also feature freeze-in production [64, 65, 66]. All of them can be described by the following set of coupled Boltzmann equations for the yields \(Y_{j}\), \[\frac{\text{d}Y_{\tilde{q}}}{\text{d}x}=\frac{1}{3H}\frac{\text{d }s}{\text{d}x}\Bigg{[}\frac{1}{2}\big{\langle}\sigma_{\tilde{q}\tilde{q}^{ \dagger}}v\big{\rangle}_{\text{eff}}\left(Y_{\tilde{q}}^{2}-Y_{\tilde{q}}^{ \text{eq}\,2}\right) \tag{19}\] \[+\big{\langle}\sigma_{\chi\tilde{q}}v\big{\rangle}\left(Y_{\chi} Y_{\tilde{q}}-Y_{\chi}^{\text{eq}}Y_{\tilde{q}}^{\text{eq}}\right)+\frac{\Gamma_{ \text{conv}}^{\tilde{q}\to\chi}}{s}\left(Y_{\tilde{q}}-Y_{\chi}\frac{Y_{ \tilde{q}}^{\text{eq}}}{Y_{\chi}^{\text{eq}}}\right)\Bigg{]}\,,\] \[\frac{\text{d}Y_{\chi}}{\text{d}x}=\frac{1}{3H}\frac{\text{d}s} {\text{d}x}\Bigg{[}\big{\langle}\sigma_{\chi\chi}v\big{\rangle}\left(Y_{\chi} ^{2}-Y_{\chi}^{\text{eq}\,2}\right)\] (20) \[+\big{\langle}\sigma_{\chi\tilde{q}}v\big{\rangle}\left(Y_{\chi} Y_{\tilde{q}}-Y_{\chi}^{\text{eq}}Y_{\tilde{q}}^{\text{eq}}\right)-\frac{\Gamma_{ \text{conv}}^{\tilde{q}\to\chi}}{s}\left(Y_{\tilde{q}}-Y_{\chi}\frac{Y_{\tilde {q}}^{\text{eq}}}{Y_{\chi}^{\text{eq}}}\right)\Bigg{]}\,,\] where \(x=m_{\tilde{q}}/T\), \(s\) is the entropy density, \(H\) the Hubble rate, and \[Y_{j}^{\text{eq}}=\frac{g_{j}}{s}\int\frac{\text{d}^{3}p}{(2\pi)^{3}}\text{e} ^{-\sqrt{m_{j}^{2}+p^{2}}/T}, \tag{21}\] all of which depend on \(x\). Here \(g_{j}\) denotes the number of internal degrees of freedom, with \(g_{\tilde{q}}\equiv 2N_{c}=6\) denoting the sum of \(\tilde{q}\) and \(\tilde{q}^{\dagger}\) densities, \(g_{\chi}=2\), and \(g_{\mathcal{B}_{\text{eff}}}=2\ell+1\) for bound states with angular momentum \(\ell\), capturing the degenerate magnetic quantum number. Note that the factor of \(1/2\) in Eq. (19) is due to our convention of including both the \(\tilde{q}\) and \(\tilde{q}^{\dagger}\) density in \(Y_{\tilde{q}}\). Eqs. (19) and (20) contain the following collision terms: 1. The effective cross section \(\big{\langle}\sigma_{\tilde{q}\tilde{q}^{\dagger}}v\big{\rangle}_{\text{eff}}\) includes direct annihilation (including Sommerfeld enhancement following [67, 35]) as well as the impact of bound states, as given by Eq. (13). We discuss the relevant BSF, transition and decay processes within the simplified model below, following and extending [14]. 2. The rate \(\Gamma_{\text{conv}}^{\tilde{q}\to\chi}\) describes the conversion rate of \(\tilde{q}\) into \(\chi\) particles. It is controlled by the Yukawa coupling, \(\Gamma_{\text{conv}}^{\tilde{q}\to\chi}\propto\lambda_{\chi}^{2}\), and its size determines whether the freeze-out happens in the coannihilation, conversion-driven or superWIMP regime (see below). At high temperatures, it is dominated by scatterings \(X\bar{q}\to Y\chi\) with appropriate SM particles \(X,Y\), while at low temperatures the decay process \(\tilde{q}\to q\chi\) dominates. 
Accordingly, in the low temperature limit - relevant for the superWIMP mechanism considered below - it reads \[\Gamma_{\rm conv}^{\tilde{q}\to\chi}=\Gamma_{\tilde{q}\to q\chi}\frac{K_{1}\left(m_{\tilde{q}}/T\right)}{K_{2}\left(m_{\tilde{q}}/T\right)}\,, \tag{22}\] where \(\Gamma_{\tilde{q}\to q\chi}=\lambda_{\chi}^{2}/(16\pi)\,m_{\tilde{q}}\left(1-m_{\chi}^{2}/m_{\tilde{q}}^{2}\right)^{2}\) is the vacuum decay rate of a single mediator particle in the limit \(m_{q}\to 0\) (not to be confused with the bound state decay rates, that are dominated by the strong interaction).

3. The dark matter pair annihilation rate \(\left\langle\sigma_{\chi\chi}v\right\rangle\propto\lambda_{\chi}^{4}\) and coannihilation rate \(\left\langle\sigma_{\chi\tilde{q}}v\right\rangle\propto\lambda_{\chi}^{4}\) are strongly suppressed for \(\lambda_{\chi}\ll 1\) and practically irrelevant within the conversion-driven and superWIMP regimes.

In addition, the Boltzmann equations could be complemented by collision terms for the conversion process \(\tilde{q}\tilde{q}^{\dagger}\to\chi\chi\), which are, however, negligible within the conversion-driven and superWIMP regimes as they are, again, proportional to \(\lambda_{\chi}^{4}\), and irrelevant in the coannihilation limit, and therefore not displayed here. Let us now discuss in more detail the various possible regimes for dark matter genesis. As mentioned above, which regime is realized depends on the size of the conversion rate. More precisely, the most relevant quantity is the conversion rate for \(\chi\) into \(\tilde{q}\) particles, \[\Gamma_{\rm conv}^{\chi\to\tilde{q}}=\Gamma_{\rm conv}^{\tilde{q}\to\chi}\frac{Y_{\tilde{q}}^{\rm eq}}{Y_{\chi}^{\rm eq}}\to\Gamma_{\tilde{q}\to q\chi}\frac{g_{\tilde{q}}m_{\tilde{q}}^{3/2}}{g_{\chi}m_{\chi}^{3/2}}e^{-\frac{m_{\tilde{q}}-m_{\chi}}{T}}\,, \tag{23}\] where the last expression is the low temperature limit. Dark matter genesis is qualitatively different depending on the size of this rate relative to the Hubble rate for temperatures around the mediator mass (during the time when the mediator starts to chemically decouple from the SM bath), \[\Gamma_{\rm conv}^{\chi\to\tilde{q}}\gg H(m_{\tilde{q}})\quad\text{coannihilation}\,, \tag{24a}\] \[\Gamma_{\rm conv}^{\chi\to\tilde{q}}\sim H(m_{\tilde{q}})\quad\text{conversion-driven}\,, \tag{24b}\] \[\Gamma_{\rm conv}^{\chi\to\tilde{q}}\ll H(m_{\tilde{q}})\quad\text{superWIMP/freeze-in}\,. \tag{24c}\]

1. In the coannihilation regime the \(\tilde{q}\) and \(\chi\) populations are in mutual chemical equilibrium. The actual size of the conversion rate is irrelevant as long as it is strong enough to maintain chemical equilibrium [62]. Within the coannihilation regime, the dark matter abundance is determined by the cross sections \(\left\langle\sigma_{\tilde{q}\tilde{q}^{\dagger}}v\right\rangle\), \(\left\langle\sigma_{\chi\chi}v\right\rangle\) and \(\left\langle\sigma_{\chi\tilde{q}}v\right\rangle\) (and in addition, as for all cases, the bound state dynamics), and generically here \(\lambda_{\chi}\sim\mathcal{O}(1)\) [15; 16; 60].

2. In the conversion-driven case, the freeze-out of chemical equilibrium among \(\chi\) and \(\tilde{q}\) drives the dynamics and the size of the conversion rate \(\Gamma_{\rm conv}^{\tilde{q}\to\chi}\) largely influences the dark matter abundance [35; 63].
In addition, the efficiency by which the \(\tilde{q}\) (and \(\tilde{q}^{\dagger}\)) abundance is depleted is relevant [35], controlled by \(\left\langle\sigma_{\tilde{q}\tilde{q}^{\dagger}}v\right\rangle\) and bound state effects [14]. The conversion-driven case occurs for small couplings, typically \(\lambda_{\chi}\sim\mathcal{O}(10^{-6})\), for which the \(\chi\chi\) and \(\chi\tilde{q}\) terms in Eqs. (19) and (20) can be safely neglected.

3. Finally, in the superWIMP scenario [37; 38], the mediator has an even smaller decay rate, and can usually be considered as stable while the freeze-out of \(\tilde{q}\tilde{q}^{\dagger}\) annihilation occurs. The population of remaining \(\tilde{q}\) and \(\tilde{q}^{\dagger}\) particles then decays into \(\chi\) at a temperature \(T\) for which \(H(T)\sim\Gamma_{\rm conv}^{\chi\to\tilde{q}}\), thereby generating the dark matter abundance [39]. Technically, this means that the terms corresponding to inverse decays in Eqs. (19) and (20) can be neglected, in addition to those for \(\chi\chi\) and \(\chi\tilde{q}\) annihilation, while the size of \(\left\langle\sigma_{\tilde{q}\tilde{q}^{\dagger}}v\right\rangle\) and the bound state dynamics are most important [40; 41; 39]. To the extent that decay and freeze-out occur on different time-scales, the dark matter abundance is also insensitive to the size of the conversion (or equivalently decay) rate in that limit, since eventually each \(\tilde{q}\) (and \(\tilde{q}^{\dagger}\)) produces one dark matter particle.

Note that, in addition to the superWIMP contribution, a contribution from freeze-in [64; 65; 66] has to be considered, which stems from inefficient decays (or scatterings) around \(x\sim 1\), _i.e._ when the mediator is in thermal equilibrium with the SM bath. The relative importance of superWIMP versus freeze-in contributions depends on the couplings and masses, see _e.g._[40; 39]. However, in our analysis we are particularly interested in regions with a dominant superWIMP contribution. In this work, we re-evaluate the superWIMP regime when including excited bound state effects. In particular, since the mediator is relatively long lived within this regime, its abundance crucially depends on how much of the mediator is depleted due to bound state dynamics.

### Bound state rates and processes

The impact of bound states on \(Y_{\tilde{q}}\) is captured by the effective cross section defined in Eq. (13) entering in the Boltzmann equation, Eq. (19). It depends on the set of bound states that are included as well as their formation, transition and decay rates, discussed in the following. In the considered model, the scalar mediator particle \(\tilde{q}\) interacts both via electromagnetic interaction with bottom-like charge \(Q=-1/3\) and strong interactions in the fundamental representation. This leads to differences compared to the case of pure Abelian or non-Abelian interactions discussed in Sec. III. In particular, the potentials determining the bound state spectrum and wavefunctions as well as BSF and decay are driven by QCD, while QED is relevant for transitions among the various energy levels [14]. In the following, we briefly review the bound state processes included in our analysis, and then comment on the relevance of further extensions. Bound state formation is dominated by the chromo electric dipole transition, \[(\tilde{q}\tilde{q}^{\dagger})^{[8]}\to\mathcal{B}^{[1]}_{n\ell}+g\,, \tag{25}\] going from an octet scattering state to a singlet bound state, and emitting an (ultrasoft) gluon.
The effective interaction potentials \(V_{s(b)}=-\alpha^{\rm eff}_{s(b)}/r\) for the scattering (\(s\)) and bound states (\(b\)) are \[\alpha^{\rm eff}_{s}=-\frac{1}{6}\alpha_{s}(\mu_{s}),\quad\alpha^{\rm eff}_{b }=\frac{4}{3}\alpha_{s}(\mu_{b})\,, \tag{26}\] with running strong coupling6 evaluated at \(\overline{\rm MS}\)-scale \(\mu_{s}=m_{\tilde{q}}v_{\rm rel}/2\) for the scattering state, and at the Bohr momentum scale \(\mu_{b}=m_{\tilde{q}}\alpha^{\rm eff}_{b}/2/n\) for the bound state \(\mathcal{B}_{n\ell}\). Note that the latter definition is implicit, but can be easily solved either iteratively or numerically for each level \(n\). The BSF cross section is then given by Eq. (6), with the effective running coupling entering the respective initial and final state wave-functions. Furthermore, we evaluate the coupling in the prefactor of Eq. (3), associated to the gluon emission, at the ultrasoft scale of the gluon energy \(\mu_{\rm BSF}=m_{\tilde{q}}/4\)\(\left(v^{2}+(\alpha^{\rm eff}_{b}/n)^{2}\right)\). Bound state formation is also possible via an electromagnetic dipole transition, but is negligible due to the smaller interaction strength [14] (see also Fig. 5 below). Computational details can be found in App. B.3. Footnote 6: In all numerical computations involving running \(\alpha_{s}\) we used RunDec 3 [68] to evaluate the SM strong coupling (employing 5-loop running). Transitions among bound states are not possible via a single insertion of the QCD dipole interaction due to color conservation. Therefore, we consider electromagnetic dipole interactions as the leading bound-to-bound transitions: \[\mathcal{B}^{[1]}_{n\ell}\leftrightarrow\mathcal{B}^{[1]}_{n^{\prime}\ell\pm 1 }+\gamma\,, \tag{27}\] with rates computed as detailed in App. B.4. We use the wave-functions evaluated with the corresponding effective QCD couplings at their respective Bohr momentum scales for level \(n\) and \(n^{\prime}\) and the electromagnetic fine structure constant \(\alpha_{\rm EM}=1/128.9\) in the coupling prefactor of Eq. (17). For bound state decay we include the process \(\mathcal{B}_{n,\ell=0}\to gg\), see App. B.3. The NLO correction to this decay channel within QCD has been shown to have only a minor effect in [14], and we therefore omit it here. Let us briefly comment on possible further transition and decay processes. By simple power counting, electric quadrupole and magnetic dipole transitions are suppressed relative to electric dipole transitions. Nevertheless, they could potentially have an impact by allowing new transition channels due to the modified selection rules. We checked (up to \(n\leq 6\)) that electric quadrupole transitions have a negligible impact on the effective cross section. Furthermore, the decay of \(\ell=1\) bound states is suppressed by at least a factor \(\alpha_{s}^{3}\) relative to its de-excitation rate when assuming a power counting \(\alpha_{\rm EM}\sim\mathcal{O}(\alpha_{s}^{2})\). This is due to the higher derivatives of the radial wavefunction entering for higher \(\ell\), as well as the fact that decays into two gluons vanish for \(\ell=1\) states at tree-level, leading to additional suppression due to a three-gluon decay or two-gluon decay at one-loop [69]. Lastly, two-gluon transitions in \(SU(N_{c})\) are expected to be suppressed by phase space factors and the repulsive potential of the necessarily adjoint intermediate state. Nevertheless, it would be interesting to investigate their impact in future work. 
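Since the Bohr momentum scale is defined implicitly, a short fixed-point iteration is sufficient in practice. The following sketch (Python) only illustrates this step: it uses a simple one-loop running with an illustrative reference value for the strong coupling, whereas the actual analysis relies on RunDec at higher loop order.

```python
import math

def alpha_s(mu, alpha_ref=0.06, mu_ref=4e6, nf=6):
    """Stand-in one-loop running of the strong coupling from a reference scale.
    The reference value and nf are illustrative, not the values used in the paper."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_ref / (1.0 + alpha_ref * b0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def bohr_scale_coupling(m, n, tol=1e-12, max_iter=100):
    """Solve mu_b = m * alpha_b_eff / 2 / n with alpha_b_eff = 4/3 * alpha_s(mu_b)
    by fixed-point iteration, as described for the bound-state potential."""
    alpha_b = 4.0 / 3.0 * alpha_s(m)          # initial guess at the hard scale
    for _ in range(max_iter):
        alpha_new = 4.0 / 3.0 * alpha_s(m * alpha_b / 2.0 / n)
        if abs(alpha_new - alpha_b) < tol:
            break
        alpha_b = alpha_new
    mu_b = m * alpha_b / 2.0 / n
    return alpha_b, mu_b

print(bohr_scale_coupling(m=4e6, n=10))   # (alpha_b_eff, Bohr momentum scale in GeV)
```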
## V Results for superWIMP scenario

We now discuss our results and phenomenological implications considering superWIMP production within the model introduced in Sec. IV. In the superWIMP scenario, the standard assumption has been that the freeze-out and decay of \(\tilde{q}\) are well separated in time, such that the late time \(\chi\) abundance depends on the mediator freeze-out abundance only. The resulting dark matter density has thus been considered to be _independent_ of the mediator lifetime \(\tau_{\tilde{q}}=1/\Gamma_{\tilde{q}\to q\chi}\). In our model, we will show that the superWIMP mechanism proceeds in a _qualitatively different_ way as the consequence of the super critical behavior of the effective cross section in non-Abelian gauge theories found in Sec. III.2. We assume that the mediator decays well before the QCD phase transition, such that the entire freeze-out dynamics takes place within the unconfined phase and involves \(\alpha_{s}\) in the perturbative regime (\(T>1\) GeV) only. Numerical results presented throughout this section are shown for the representative benchmark point \(m_{\tilde{q}}=4\times 10^{6}\) GeV. We discuss the mass dependence of our findings in Sec. V.3.

### Effective cross section

The abundance of the mediator \(\tilde{q}\) is governed by the effective cross section Eq. (13), that encapsulates the impact of bound states. In Fig. 5, we show the rates that enter this quantity for an exemplary subset with \(n\leq 4\) and for all \(\ell\leq n-1\), for a wide range of \(x=m_{\tilde{q}}/T\) corresponding to \(m_{\tilde{q}}/10\geq T\geq 4\) GeV. Since the mediator is charged under both \(SU(3)_{c}\) and \(U(1)_{Y}\) it combines features of the Abelian and non-Abelian cases discussed in Sec. III. As expected, ionization (and correspondingly BSF) is clearly dominated by the QCD-mediated process, as can be seen by comparing the long-dashed and dotted lines in Fig. 5. Since \(\tilde{q}\tilde{q}^{\dagger}\) bound states exist only for the attractive color singlet configuration, color conservation dictates that bound-to-bound transitions are only contributing via an electric dipole interaction mediated by QED. The total transition rate from a given bound state \((n,\ell)\) into any higher or lower state is shown by the blue lines in Fig. 5. For the ground state \((1,0)\) only excitation occurs such that the rate becomes exponentially Boltzmann suppressed once the temperature drops below the corresponding difference of binding energies. The same is true for \((2,0)\) due to the selection rule \(\Delta\ell=\pm 1\) for dipole transitions.

Figure 5: Ionization, decay and transition rates for all bound state levels \((n,\ell)\) up to \(n=4\) for \(m_{\tilde{q}}=4\times 10^{6}\,\)GeV. The gray dashed vertical line indicates the temperature that corresponds to the binding energy \(E_{\mathcal{B}}\) of the respective bound state. Note that the transition rates contain all possible excitations and de-excitations (here summed up to \(n^{\prime}=20\)).

For all other levels, the total transition rate approaches a finite value for low temperature (_i.e._ large \(x\)), corresponding to the rate of de-excitation into lower levels. In addition, we include the direct decay of \(\ell=0\) states into a pair of gluons, which is the analog of mediator pair annihilation. This rate is practically temperature independent for \(x\gg 1\). Overall, the total width of any given level is dominated by the QCD-mediated ionization rate at temperatures \(T\) above or around the binding energy.
For much lower temperatures, decay dominates for \(\ell=0\), and QED-mediated transitions (to lower levels) for \(\ell\geq 1\). The resulting effective cross section, which combines QCD mediated BSF and QED transitions among bound states, is shown in Fig. 6, where we take excited states \(\mathcal{B}_{n\ell}\) with \(0\leq\ell\leq n-1\) and up to a given maximal \(n\) into account. For the various blue lines, we include all possible electric dipole transitions. For any given temperature, the effective cross section converges when including a sufficient number of excited states. The reason for this convergence is that each excited state contributes in a limited temperature range only. While the underlying velocity dependence of the BSF cross section becomes increasingly complex at large \(n\), a given bound state \(n\), \(\ell\) only starts to contribute significantly once the temperature drops down to roughly its respective binding energy, \(T\sim E_{\mathcal{B}_{n\ell}}\). In fact, this is important for the validity of the dipole approximation, as it ensures that the temperature is well below the typical momenta of bound states that contribute significantly _i.e._ below their respective Bohr momenta. For \(T\ll E_{\mathcal{B}_{n\ell}}\) in contrast, its contribution is negligible due to the repulsive potential in the scattering state [9; 14]. Introducing \(x_{n}\equiv m_{\tilde{q}}/E_{\mathcal{B}_{n\ell}}\), we find \(x_{1}\simeq 7\times 10^{2}\), \(x_{10}\simeq 5\times 10^{4}\), \(x_{100}\simeq 3\times 10^{6}\) for our benchmark \(m_{\tilde{q}}=4\times 10^{6}\,\text{GeV}\), which correspond to the \(x\) values at which the respective excited levels are expected to start contributing significantly to the effective cross section. Overall, the lower the temperature (_i.e._ the higher the \(x\)), the more relevant higher excited levels become for achieving a converged effective cross section. We include states up to \(n=100\), taking all transitions among them into account (blue solid line). We checked that this suffices to reach converged results within the perturbative regime. In particular, the difference between \(n\leq 50\) and \(n\leq 100\) is less than \(0.2\%\) for \(T>1\,\text{GeV}\). As visible in Fig. 6, the effective cross section (blue solid line) clearly shows a super critical behavior, _i.e._ the power scaling is significantly larger than \(\propto x\) (red dashed). We stress that the interplay of bound states formed by the non-Abelian QCD interaction with transitions mediated by QED leads to a significant enhancement of the effective cross section compared to the limit of inefficient transitions, see green line in Fig. 6. The latter shows the result when omitting transition processes. It is similar to the case of dark QCD discussed in Sec. III.2. This shows that excited states play an even more prominent role for a mediator charged under both QCD and QED considered here. Nevertheless, the effective cross section increases more steeply than \(\propto x\) even in the no-transition approximation due to running. All the same, when including transitions but _neglecting_ running, the slope is still steeper than \(\propto x\), _i.e._ the presence of bound-to-bound transitions causes a super-critical behavior in our model even without running coupling effects. ### Relic abundance The evolution of the yields \(Y_{\tilde{q}}(x)\) and \(Y_{\chi}(x)\) for the mediator and the dark matter particle as obtained from solving the coupled Boltzmann equations (19) and (20) are shown in Fig. 7. 
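The structure of this coupled system is simple enough to sketch; the snippet below (Python) only illustrates how Eqs. (19) and (20) can be integrated in the superWIMP-like limit where \(\chi\chi\) and \(\chi\tilde{q}\) annihilation and inverse decays are dropped. All rate functions and normalizations are rough placeholders, not the tabulated inputs used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sv_eff(x):     return 1e-15 * x**1.3      # <sigma v>_eff(x) stand-in, GeV^-2
def gamma_conv(x): return 1e-17               # mediator conversion/decay rate, GeV
def hubble(x):     return 2.3e-5 / x**2       # H(T = m/x) for m ~ 4e6 GeV, GeV
def entropy(x):    return 3.0e21 / x**3       # s(T = m/x), GeV^3
def yq_eq(x):      return 8e-3 * x**1.5 * np.exp(-x)   # equilibrium mediator yield

def rhs(logx, y):
    # With d/dlog(x) and s ~ x^-3, the prefactor (1/3H) ds/dx * dx/dlogx = -s/H.
    x = np.exp(logx)
    yq, ychi = y
    s, H = entropy(x), hubble(x)
    dyq   = -s / H * (0.5 * sv_eff(x) * (yq**2 - yq_eq(x)**2)
                      + gamma_conv(x) / s * yq)
    dychi = -s / H * (-gamma_conv(x) / s * yq)
    return [dyq, dychi]

sol = solve_ivp(rhs, [np.log(10.0), np.log(1e8)], [yq_eq(10.0), 0.0],
                method="LSODA", rtol=1e-8, atol=1e-30)
print(sol.y[:, -1])   # late-time mediator and dark matter yields
```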
Let us first discuss the case when including either only mediator annihilation (blue dotted line) or in addition the ground state (blue dot-dashed line, see _e.g._[40; 41]). In these cases, the yield \(Y_{\tilde{q}}(x)\) freezes out at \(x\sim\mathcal{O}(10^{2}-10^{3})\). After freeze-out it remains constant until the age of the universe becomes comparable to the mediator lifetime, _i.e._ for \(H\sim\Gamma_{\tilde{q}\to q\chi}\). At this point, the mediator particles decay into dark matter, such that the final yield \(Y_{\chi}\) is identical to the freeze-out value of \(Y_{\tilde{q}}\) which is set at much earlier times already. Accordingly, in a wide range of lifetimes the dark matter abundance does not depend on the dark matter coupling. This qualitative picture of the superWIMP mechanism has widely been adopted throughout the literature in the past. Intriguingly, when including excited bound states, the super critical effective cross section leads to a continuous depletion such that \(Y_{\tilde{q}}\) never freezes out. This can be seen in the solid line in Fig. 7, where bound states up to \(n\leq 100\) are included.

Figure 6: Effective cross section for the colored and electrically charged mediator \(\tilde{q}\). It includes the contribution from bound state levels \((n,\ell)\) for all \(\ell\) and up to \(n\leq 1,10,100\) (blue lines) accounting for all dipole transitions among them. Also shown is the no-transition limit for \(n\leq 1000\) (green). Both grow more steeply than \(\propto x\) (indicated by a thin black line) when sufficiently high \(n\) are included, implying mediator depletion without freeze-out. Here \(m_{\tilde{q}}=4\times 10^{6}\,\text{GeV}\).

The depletion of the number density is dominated by the effective cross section, until the decay of the mediator, \(\tilde{q}\to q\chi\), becomes efficient (here \(x\gtrsim 10^{6}\)). In fact, the effective annihilation rate \(\Gamma_{\rm eff}=n_{\tilde{q}}\langle\sigma v\rangle_{\rm eff}\) is kept on the edge of being efficient, _i.e._\(\Gamma_{\rm eff}\sim H\), over the entire period of bound state induced depletion. The dynamics is qualitatively different to the standard picture as there is no temperature regime where the yield of the mediator has frozen out. The quantitative impact of bound states up to \(n\leq 1,10,100\) on the dark matter abundance is explicitly shown in the lower panel, normalized by the result including only Sommerfeld enhancement and no bound states. Taking into account the ground state only yields a reduction by a factor \(\sim 2\) in the final abundance. When considering the full bound state effects (\(n\leq 100\)), we find that the dark matter relic abundance is lowered by more than an order of magnitude. Note that the mediator decays before the QCD transition such that dark matter production takes place in the unconfined phase involving \(\alpha_{s}\) in the perturbative regime. We explicitly verified that our results are insensitive to the behavior of the strong coupling at scales below \(1\,\)GeV. To check this we implemented different numerical prescriptions for treating the strong coupling at these low scales, and find that the final abundance is highly insensitive as long as the mediator lifetime ensures a decay before the QCD transition. Furthermore, we checked that our results are not influenced by contributions for which partial-wave unitarity is questionable, as the corresponding velocities are not relevant for the thermally averaged effective cross section in the regime \(T>1\,\)GeV.
In Fig. 8, we show the abundance evolution for three different values of the mediator lifetime. (The additional long-dashed line depicts the result when excluding mediator decays, _i.e._\(\Gamma_{\tilde{q}\to q\chi}=0\).) For all decay rates, superWIMP production provides the dominant contribution to the dark matter density. While for the largest decay rate, \(\Gamma_{\tilde{q}\to q\chi}=2.5\times 10^{-14}\,\)GeV, freeze-in still contributes almost \(10\%\), it is fully negligible for the smaller rates chosen. Due to the continuous depletion of \(Y_{\tilde{q}}\) in the presence of excited states, the time of decay does, indeed, have an impact on the final dark matter abundance, as can be seen from the three different values of \(Y_{\chi}\) (red lines) for \(x\to\infty\). In contrast, when neglecting excited states (dotted lines), the final yield is identical for all three mediator decay rates. A similar behavior can be found when including only a small number of bound states.

Figure 7: _Upper panel:_ Abundance evolution of \(\chi\) (red) and \(\tilde{q}\) (blue) for \(m_{\tilde{q}}=4\times 10^{6}\,\)GeV and \(\Gamma_{\tilde{q}\to q\chi}=10^{-17}\,\)GeV. When including no (SE only) or one (\(n=1\)) bound state, the mediator yield \(Y_{\tilde{q}}\) freezes out and then subsequently transfers its abundance to \(Y_{\chi}\) via \(\tilde{q}\to q\chi\). Taking excited states and transitions among them into account leads to a continuous decrease of \(Y_{\tilde{q}}\) (dashed and solid blue line for \(n\leq 10\) and \(n\leq 100\), respectively) that is only terminated by mediator decay once \(\Gamma_{\tilde{q}\to q\chi}\gtrsim H\). _Lower panel:_ Ratio of \(\chi\) yield when including bound states up to \(n\leq 1,10,100\) over the result without bound state contributions.

Figure 8: Evolution of mediator and dark matter abundances when including bound states and transitions among them up to \(n=100\) (solid lines) or no bound states (dotted) for three different decay rates \(\Gamma_{\tilde{q}\to q\chi}=2.5\times 10^{-14}\), \(5\times 10^{-16}\), \(10^{-17}\,\)GeV. The long dashed line shows the limit \(\Gamma_{\tilde{q}\to q\chi}=0\), which decreases due to the large contribution of excited states to the effective cross section. The final value of \(Y_{\chi}\) therefore depends on \(\Gamma_{\tilde{q}\to q\chi}\) even in the superWIMP regime where the \(\chi\) abundance generated via freeze-in at low \(x\) is negligible.

### Implications

In Fig. 9, we finally show the dark matter mass, \(m_{\chi}\) (left axis labeling), for which the final \(\chi\) abundance (displayed using the right axis labeling) yields the observed dark matter relic density, \(\Omega_{\chi}h^{2}\simeq 0.12\) [70], as a function of the mediator decay rate, \(\Gamma_{\tilde{q}\to q\chi}\).
When including excited bound states, the mediator abundance continues to deplete until the age of the universe reaches the mediator lifetime. The remaining mediators then produce dark matter via \(\tilde{q}\to q\chi\). This implies that the final dark matter abundance does, in fact, depend on \(\Gamma_{\tilde{q}\to q\chi}\) and, hence, on the dark matter coupling. Therefore also the dark matter mass \(m_{\chi}\) for which \(\Omega_{\chi}h^{2}\simeq 0.12\) does depend on it. This can be seen in the solid and dashed curves in Fig. 9 corresponding to taking into account all excited states (\(\ell\leq n-1\)) with \(n\leq 10\) and \(100\), respectively. The smaller dark matter abundance implies a larger dark matter mass, as compared to the case without excited states. For \(\Gamma_{\tilde{q}\to q\chi}\gtrsim 10^{-17}\,\text{GeV}\), the mediator decay occurs prior to the QCD transition, _i.e._ within the perturbative regime \(T>1\,\text{GeV}\). Note that in this regime the inclusion of bound state up to \(n=100\) appears to be sufficient. While we find significant contributions from bound states with \(n>10\) (_cf._ the difference between the dashed and solid lines in Fig. 9) this contribution is dominated by bound states \(n<50\). We reiterate that \(n=100\) is sufficient to find a convergent effective cross section, hence even higher bound states are expected to not alter our results. For illustration, in Fig. 9, we include decay rates down to around \(10^{-18}\,\text{GeV}\) for which a significant fraction of mediators have not decayed at \(T=1\,\text{GeV}\). In this region, the gray shaded areas conservatively bracket the uncertainty in the effective annihilation rate arising from the impact of confinement, by assuming that all mediators that are still present at \(T=1\,\text{GeV}\) either vanish (upper boundary) or fully decay into dark matter particles (lower boundary). However, in this work, we focus on the perturbative regime, \(\Gamma_{\tilde{q}\to q\chi}\geq 10^{-17}\,\text{GeV}\), for which we find that the difference between the upper and lower boundary is less than \(1\%\). Due to the late decay and large mass difference between the mediator and dark matter, the dark matter momentum distribution for the considered scenario can be significantly harder than the one of cold dark matter. The resulting free-streaming effect impacts structure formation on small scales probed by Lyman-\(\alpha\) forest observations. As shown in Ref. [40], this results in a lower bound on the dark matter mass, \[\frac{m_{\chi}}{\text{keV}}>3.8\times x_{\text{decay}}\left(\frac{106.75}{g_{ *S}(x_{\text{decay}})}\right)^{1/3}\, \tag{28}\] that can easily reach into the GeV range. Here, \(x_{\text{decay}}=(\Gamma_{\tilde{q}\to q\chi}/H(m_{\tilde{q}}))^{-1/2}\) is the temperature parameter at which the decay becomes efficient, \(\Gamma_{\tilde{q}\to q\chi}=H(m_{\tilde{q}}/x_{\text{decay}})\), _cf._ upper axis labeling in Fig. 9. The formula assumes \(\Omega_{\chi}h^{2}=0.12\) and \(m_{q}\ll m_{\tilde{q}}\). In Fig. 9, we display the corresponding \(95\%\) C.L. exclusion as a red shaded area. Interestingly, excited bound state effects have a significant impact on the implications of this constraint. 
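For orientation, Eq. (28) is straightforward to evaluate; the short sketch below (Python) assumes the standard radiation-domination Hubble rate \(H=1.66\sqrt{g_{*}}\,T^{2}/M_{\rm Pl}\) and the benchmark numbers of this section, all of which are illustrative inputs rather than outputs of the full analysis.

```python
import math

M_PL   = 1.22e19    # Planck mass in GeV
G_STAR = 106.75     # relativistic degrees of freedom, assumed constant

def hubble(T):
    """Radiation-domination Hubble rate H = 1.66 sqrt(g_*) T^2 / M_Pl (GeV)."""
    return 1.66 * math.sqrt(G_STAR) * T**2 / M_PL

def x_decay(gamma, m_med):
    """x_decay = (Gamma / H(m_med))^(-1/2), i.e. Gamma = H(m_med / x_decay)."""
    return math.sqrt(hubble(m_med) / gamma)

def m_chi_bound_keV(gamma, m_med):
    """Lyman-alpha lower bound on m_chi from Eq. (28), in keV."""
    return 3.8 * x_decay(gamma, m_med) * (106.75 / G_STAR) ** (1.0 / 3.0)

m_med = 4e6                              # mediator mass in GeV
for gamma in (2.5e-14, 5e-16, 1e-17):    # decay rates in GeV
    print(gamma, x_decay(gamma, m_med), m_chi_bound_keV(gamma, m_med))
```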
While with Sommerfeld effect (and \(n=1\) bound state) only, a decay rate of around \(10^{-15}\) (\(4\times 10^{-16}\)) GeV would be excluded, the inclusion of excited bound states reveals that the entire region with \(\Gamma_{\tilde{q}\to q\chi}\geq 10^{-17}\,\text{GeV}\) remains unchallenged by the considered Lyman-\(\alpha\) constraint.

Figure 9: Dark matter mass \(m_{\chi}\) and decay rate \(\Gamma_{\tilde{q}\to q\chi}\) of the colored \(t\)-channel mediator for which the final yield matches the observed dark matter density \(\Omega_{\chi}h^{2}=0.12\), using \(m_{\tilde{q}}=4\times 10^{6}\,\text{GeV}\). The abundance is set by the superWIMP mechanism for \(\Gamma_{\tilde{q}\to q\chi}\lesssim 10^{-13}\,\text{GeV}\), while freeze-in dominates for larger decay rates. When taking only Sommerfeld-enhanced \(\tilde{q}\tilde{q}^{\dagger}\) annihilation into account (dotted line) the relic density is independent of \(\Gamma_{\tilde{q}\to q\chi}\) within the superWIMP regime, _i.e._ the dotted line is horizontal. The same is true when taking the ground state \(n=1\) into account (dot-dashed line). When including excited states, the mediator depletion due to BSF, transitions and decay of bound states continues until eventually \(H\sim\Gamma_{\tilde{q}\to q\chi}\), such that the relic density does depend on the mediator lifetime, corresponding to the black dashed (\(n\leq 10\)) and black solid (\(n\leq 100\)) lines. For \(\Gamma_{\tilde{q}\to q\chi}\gtrsim 10^{-17}\,\text{GeV}\) the mediator decays before the QCD transition, _i.e._ within the perturbative regime. The shaded areas bracket the possible values for even lower decay rate, see text for details. The red shaded region is excluded at \(95\%\) C.L. by Lyman-\(\alpha\) forest observations [40].

So far, we have focused on the benchmark mass \(m_{\tilde{q}}=4\times 10^{6}\,\text{GeV}\). For smaller masses, the effective cross section becomes larger and the mediator abundance decreases. Therefore, for a given \(x_{\text{decay}}\), the dark matter mass that leads to \(\Omega h^{2}=0.12\) increases and the Lyman-\(\alpha\) bound becomes less constraining. Accordingly, we find that current Lyman-\(\alpha\) forest constraints do not challenge the superWIMP scenario for \(m_{\tilde{q}}<4\times 10^{6}\,\text{GeV}\), if we restrict ourselves to decays within the perturbative regime of couplings, _i.e._ decays that take place before the QCD phase transition, \(x_{\text{decay}}<m_{\tilde{q}}/1\,\text{GeV}\). However, as the so-defined maximal \(x_{\text{decay}}\) becomes smaller with smaller \(m_{\tilde{q}}\), highly excited states become less relevant, somewhat diminishing the large effect of bound states towards small \(m_{\tilde{q}}\) in our setup. For masses larger than \(4\times 10^{6}\,\text{GeV}\), the cosmologically viable dark matter mass decreases and Lyman-\(\alpha\) constraints become more restrictive. In particular, starting from masses somewhat above \(m_{\tilde{q}}=4\times 10^{6}\,\text{GeV}\), they impose an upper bound on \(x_{\text{decay}}\) that is more restrictive than the above-mentioned perturbativity condition. As a consequence of the tightening Lyman-\(\alpha\) constraints on the allowed range of mediator decay rates, higher excitations become less important also towards larger masses. Eventually, for \(m_{\tilde{q}}>4\times 10^{8}\,\text{GeV}\), the entire superWIMP region is excluded (assuming thermalization of the mediator).

## VI Conclusion

In this work, we studied the impact of excited bound states on dark matter production.
We found that they can be highly relevant, focussing especially on setups where unbroken Abelian and/or non-Abelian gauge interactions are responsible for the bound state dynamics. Considering bound state formation/ionization and (de-)excitation processes described by dipole transitions, we developed an efficient way to compute their rates numerically. It allows us to take into account excitations up to a principal quantum number \(n=100\) (\(n=1000\)) in the presence (absence) of transitions among them involving more than 5000 individual BSF and \(\mathcal{O}(10^{6})\) transitions rates. With this numerical tool at our disposal, we investigated several theoretical and phenomenological questions. First, considering an Abelian gauge theory like dark QED, we compared the summed BSF cross section to the well-known Kramer's logarithm, confirming its approximate behavior towards small \(v\) that increases faster than \(\propto 1/v\). However, this behavior results from a summation over different _initial_ state partial waves. We checked that each initial angular momentum contribution is compatible with partial-wave unitarity for all velocities, and at sufficiently weak coupling. In contrast, we found that the total BSF cross section in non-Abelian gauge theories generally does violate partial-wave unitarity bounds even for perturbatively small coupling. While a closer investigation of this feature is left for future work, we focussed on the phenomenologically relevant regime of velocities in our subsequent results for which unitarity bounds are satisfied. We exemplified our results for two particle states with constituents transforming as \(3\) and \(\bar{3}\) under \(SU(3)\). Due to the repulsive potential of the adjoint scattering state, in radiative BSF via gluon emission, each individual bound state cross section only contributes around a characteristic velocity. This renders very high excitations to be the dominant contribution to the thermally averaged effective cross section at very low temperatures. Consequently, we investigated the important question whether the effective cross section increases slower or faster than \(x=m/T\), in which case the particle's number density would freeze-out or would continue to deplete. For dark QED, we found a scaling \(\langle\sigma v\rangle_{\text{eff}}\propto x^{0.6}\) towards large \(x\). For dark QCD, \(\langle\sigma v\rangle_{\text{eff}}\) grows slightly slower (faster) than \(\propto x\), if we exclude (include) the effect of a running coupling (assuming a negative beta function). However, the dependence of this qualitative difference on the effects due to the running coupling is only present in (dark) QCD, _i.e._ in the absence of transitions among the states. When color and electric charge is combined, then singlet-singlet transitions are possible and significantly enhance the effective cross section towards small \(x\) where higher \(n\) dominate, and steepen the effective cross section beyond \(\propto x\) regardless of running effects. Finally, we studied such a case in detail by considering a scenario where an electrically and color charged mediator - which accompanies dark matter - is subject to the formation of bound states. Such a setup is realized in so-called \(t\)-channel mediator models. We focused on the very weak coupling regime for which the dark matter density can be generated through a late decaying mediator particle, _i.e._ by the superWIMP mechanism. 
The commonly adopted assumption within this paradigm is that mediator freeze-out and decays can be considered independently, thereby rendering the resulting dark matter density to be independent of the the mediator lifetime (and therewith the dark matter coupling it depends on). Here we found that considering excited bound state effects, this picture has to be revised. Due to an interplay of QCD bound states and transitions mediated by QED, the resulting effective cross section grows significantly faster than \(\propto x\), _i.e._ features a strongly super critical scaling, thereby retaining a sizeable depletion of mediator particles throughout its entire presence before it decays. Therefore, the dark matter relic density _does_ depend on the mediator lifetime. In fact, we found that - restricting ourselves to decays that happen well within the unconfined phase - the yield reduction due to bound state effects can amount to an order of magnitude with respect to the result including Sommerfeld enhanced annihilation only. This has important consequences for the cosmologically viable parameter space. For a given mediator mass, bound state effects shift the dark matter mass required to match \(\Omega h^{2}\) towards larger values by up to an order of magnitude, thereby weakening constraints from structure formation through Lyman-\(\alpha\) observations. While we are choosing dark matter physics as an application of the considered bound state effects, we note that they could also play an important role in the context of baryogenesis. For instance, according to [71], the late decay of a color-charged scalar can generate the matter-antimatter asymmetry. A significant enhancement of the effective cross section due to bound states could open up parts of the parameter space otherwise excluded by strong bounds from \(N_{\text{eff}}\). In addition, an analysis of phenomenological consequences within dark sector models, for which dark matter is charged under an unbroken dark gauge symmetry, would be interesting, also in view of phase transitions and associated gravitational wave signatures [23; 25]. On the theoretical side, our results motivate an investigation of unitarization of bound state formation processes mediated by unbroken non-Abelian gauge interactions within the regime of perturbatively small couplings. ###### Acknowledgements. We are thankful to Martin Beneke, Florian Herren, and Gramos Qerimi for discussions. This work was supported by the DFG Collaborative Research Institution Neutrinos and Dark Matter in Astro- and Particle Physics (SFB 1258) and the Collaborative Research Center TRR 257. J.H. acknowledges support by the Alexander von Humboldt foundation via the Feodor Lynen Research Fellowship for Experienced Researchers and Feodor Lynen Return Fellowship. ## Appendix A Evaluation of large \(n\) dipole transitions The evaluation of ultra-soft transitions such as bound-state formation and de-excitation for large \(n\) is a demanding task. Here, we briefly summarize the derivation of the used expressions that enable a numerically stable and sufficiently fast computation of the scattering-to-bound and bound-to-bound dipole transitions up to about \(n=1000\) and \(n=100\), respectively. A Mathematica notebook containing the implemented expressions is available on request. ### Scattering-to-bound We turn now to the evaluation of the scattering-to-bound dipole transitions matrix elements. 
The angular part can be performed as [14]: \[\sum_{m}|\bra{\psi_{n\ell m}}\mathbf{r}\ket{\psi_{\mathbf{p}}^{\prime}}|^{2}=\frac{4\pi}{|\mathbf{p}|^{5}}\sum_{\ell^{\prime}}(\ell^{\prime}\delta_{\ell^{\prime},\ell+1}+\ell\delta_{\ell,\ell^{\prime}+1})|I_{R}|^{2}, \tag{10}\] where we have \(|\mathbf{p}|=mv/2\). Our starting point for the evaluation of the radial integral part is Eq. (13) in Ref. [14]: \[I_{R,\text{BSF}}=\frac{2\zeta_{b}^{3/2}(\ell+\ell^{\prime}+3)!}{n^{2}\sqrt{(n-\ell-1)!(n+\ell)!}}\frac{2^{\ell^{\prime}}e^{\pi\zeta_{s}/2}}{|\Gamma(1+\ell^{\prime}-i\zeta_{s})|}\times\] \[\left(\frac{2\zeta_{b}}{n}\right)^{\ell}\left(\frac{d}{dt}\right)^{n-\ell-1}\frac{1}{(1-t)^{2\ell+2}}\times\] \[\int_{0}^{1}ds\frac{s^{\ell^{\prime}-i\zeta_{s}}(1-s)^{\ell^{\prime}+i\zeta_{s}}}{\left(\frac{\zeta_{b}}{n}\frac{1+t}{1-t}+i(2s-1)\right)^{\ell+\ell^{\prime}+4}}\Big{|}_{t=0}\,, \tag{11}\] where \(\zeta_{b}=\alpha_{b}^{\text{eff}}/v\) and \(\zeta_{s}=\alpha_{s}^{\text{eff}}/v\). To arrive at this expression, the representation of the bound-state wave function in terms of associated Laguerre polynomials and their generating function was used, while for the scattering state the Hypergeometric function \({}_{1}F_{1}\) in terms of the integral representation was used. In this work, we directly evaluate the integral first as \[\int_{0}^{1}ds\frac{s^{\ell^{\prime}-a}(1-s)^{\ell^{\prime}+a}}{(-ib+i(2s-1))^{\ell+\ell^{\prime}+4}}=\] \[(-i)^{-\ell-\ell^{\prime}}\frac{\Gamma(1-a+\ell^{\prime})\Gamma(1+a+\ell^{\prime})}{(1+b)^{4+\ell+\ell^{\prime}}\Gamma(2(1+\ell^{\prime}))}\times\] \[{}_{2}F_{1}\left(1-a+\ell^{\prime},4+\ell+\ell^{\prime},2(1+\ell^{\prime}),\frac{2}{1+b}\right)\,, \tag{12}\] where \(a=i\zeta_{s},b=i\frac{\zeta_{b}}{n}\frac{1+t}{1-t}\). Applying the \(t\) derivatives in Eq. (11) to the Hypergeometric function \({}_{2}F_{1}\) allows us to obtain the following recursive relations: \[|I_{R,\text{BSF}}^{\ell^{\prime}=\ell+1}|=2n^{3}\sqrt{(\ell^{2}+\zeta_{s}^{2})((\ell+1)^{2}+\zeta_{s}^{2})}P(n,\ell)\times\] \[\Big{|}((2+\ell)\tilde{\zeta}_{b}+\zeta_{s})R_{n,\ell}(n-\ell-5)\] \[-2((2+\ell)\tilde{\zeta}_{b}+2\zeta_{s})R_{n,\ell}(n-\ell-4)\] \[+6\zeta_{s}R_{n,\ell}(n-\ell-3)\] \[+2((2+\ell)\tilde{\zeta}_{b}-2\zeta_{s})R_{n,\ell}(n-\ell-2)\] \[+(\zeta_{s}-(2+\ell)\tilde{\zeta}_{b})R_{n,\ell}(n-\ell-1)\Big{|}\,, \tag{13}\] and \[|I_{R,\text{BSF}}^{\ell^{\prime}=\ell-1}|=n^{3}P(n,\ell)\times\] \[\Big{|}-\big{[}\ell(1+\ell)\tilde{\zeta_{b}}(-3+(1+2\ell){\tilde{\zeta_{b}}}^{2})\] \[+(-1-3\ell+3(1+\ell)(1+2\ell){\tilde{\zeta_{b}}}^{2})\zeta_{s}\] \[+6(1+\ell)\tilde{\zeta_{b}}\zeta_{s}^{2}+2\zeta_{s}^{3}\big{]}R_{n,\ell}(n-\ell-5)\] \[-\big{[}2(\ell(1+\ell)\tilde{\zeta_{b}}(3+(1+2\ell){\tilde{\zeta_{b}}}^{2})+2\zeta_{s}+6\ell\zeta_{s}\] \[-6(1+\ell)\tilde{\zeta_{b}}\zeta_{s}^{2}-4\zeta_{s}^{3})\big{]}R_{n,\ell}(n-\ell-4)\] \[+\big{[}6\zeta_{s}(1+3\ell+(1+\ell)(1+2\ell){\tilde{\zeta_{b}}}^{2}-2\zeta_{s}^{2})\big{]}R_{n,\ell}(n-\ell-3)\] \[+\big{[}2(\ell(1+\ell)\tilde{\zeta_{b}}(3+(1+2\ell){\tilde{\zeta_{b}}}^{2})-2(1+3\ell)\zeta_{s}\] \[-6(1+\ell)\tilde{\zeta_{b}}\zeta_{s}^{2}+4\zeta_{s}^{3})\big{]}R_{n,\ell}(n-\ell-2)\] \[+\big{[}\ell(1+\ell)\tilde{\zeta_{b}}(-3+(1+2\ell){\tilde{\zeta_{b}}}^{2})+\zeta_{s}+3\ell\zeta_{s}\] \[-3(1+\ell)(1+2\ell){\tilde{\zeta_{b}}}^{2}\zeta_{s}\] \[+6(1+\ell)\tilde{\zeta_{b}}\zeta_{s}^{2}-2\zeta_{s}^{3}\big{]}R_{n,\ell}(n-\ell-1)\Big{|}\;, \tag{100}\] where \(\tilde{\zeta_{b}}=\zeta_{b}/n\).
The common prefactor is \[P(n,\ell)=\sqrt{\frac{(n-\ell-1)!}{(n+\ell)!}}\frac{2^{2+2\ell}{\tilde{\zeta_{b}}}^{\ell+\frac{3}{2}}}{n^{\frac{1}{2}}\zeta_{s}(1+{\tilde{\zeta_{b}}}^{2})^{3+\ell}}\times\] \[\sqrt{\frac{2\pi\zeta_{s}}{1-e^{-2\pi\zeta_{s}}}}e^{-2\zeta_{s}\mathrm{arccot}(\tilde{\zeta_{b}})}\sqrt{\prod_{j=0}^{\ell-1}(j^{2}+\zeta_{s}^{2})} \tag{101}\] and the common recursion is given by \[R_{n,\ell}(x)=\left\{\begin{array}{ll}0&x<0\\ 1&x=0\\ \frac{-2(3+\ell)({\tilde{\zeta_{b}}}^{2}-1)+4{\tilde{\zeta_{b}}}\zeta_{s}}{1+{\tilde{\zeta_{b}}}^{2}}&x=1\\ \frac{2(2+\ell+x)(1-{\tilde{\zeta_{b}}}^{2})+4{\tilde{\zeta_{b}}}\zeta_{s}}{x(1+{\tilde{\zeta_{b}}}^{2})}R_{n,\ell}(x-1)&\\ -\frac{(4+2\ell+x)}{x}R_{n,\ell}(x-2)&\mathrm{else}\,.\end{array}\right. \tag{102}\] The expressions allow for a fast generation of the matrix elements entering the BSF cross section, also for large \(n,\ell\). They have been cross-checked against the expressions in Ref. [14] up to \(n=10\) (for all \(\ell\)) for the QCD case, as well as for the simpler QED limit \(\zeta_{s}=\zeta_{b}=\zeta\). To achieve the results presented in this work up to \(n=1000\) (formally for all \(\ell\), though only \(\ell=0\) was needed at \(n>100\)) we split the prefactor (101) into two multiplicative pieces to avoid numerical underflow.

### Bound-to-bound

We turn now to the evaluation of the bound-to-bound dipole transition matrix elements. The angular part can be performed as [14]: \[\sum_{m,m^{\prime}}|\bra{\psi_{n\ell m}}\mathbf{r}\ket{\psi_{n^{\prime}\ell^{\prime}m^{\prime}}}|^{2}=\\ (\ell^{\prime}\delta_{\ell^{\prime},\ell+1}+\ell\delta_{\ell,\ell^{\prime}+1})|I_{R,\mathrm{trans}}|^{2}\,, \tag{103}\] leaving only the radial integral over the initial and final bound state wave-functions. For convenience, we define \(|\bra{\psi_{n\ell}}\mathbf{r}\ket{\psi_{n^{\prime}\ell^{\prime}}}|^{2}\) as the squared matrix element average over \(m,m^{\prime}\). We start our evaluation by representing both wave functions in terms of the Hypergeometric functions \({}_{1}F_{1}\).7

Footnote 7: Another evaluation can be made by representing the wave functions of the bound states in terms of the associated Laguerre polynomials: \[L_{n}^{\alpha}(x)=\sum_{i=0}^{n}(-1)^{i}\left(\begin{array}{c}n+\alpha\\ n-i\end{array}\right)\frac{x^{i}}{i!}. \tag{104}\] Performing the radial integral in this representation leads to \[I_{R,\mathrm{trans}}=N_{n\ell}(\kappa)N_{n^{\prime}\ell^{\prime}}(\kappa^{\prime})(\tilde{\kappa}+\tilde{\kappa}^{\prime})^{-4-\ell-\ell^{\prime}}\times \tag{105}\] \[\sum_{k=0}^{n-\ell-1}\sum_{k^{\prime}=0}^{n^{\prime}-\ell^{\prime}-1}\left(\begin{array}{c}n+\ell\\ n-\ell-k-1\end{array}\right)\left(\begin{array}{c}n^{\prime}+\ell^{\prime}\\ n^{\prime}-\ell^{\prime}-k^{\prime}-1\end{array}\right)\times\] \[\frac{(-2)^{k+k^{\prime}}\Gamma(4+\ell+\ell^{\prime}+k+k^{\prime})}{k!k^{\prime}!}\frac{\tilde{\kappa}^{k}(\tilde{\kappa}^{\prime})^{k^{\prime}}}{(\tilde{\kappa}+\tilde{\kappa}^{\prime})^{k+k^{\prime}}}\,. \tag{106}\] With the help of Mathematica, both sums can be performed, resulting in similar recursions as we have obtained for the scattering-to-bound case. In practice, however, it turns out that these bound-to-bound recursions are less numerically stable within standard digit precision than what we have obtained by using the Hypergeometric functions in Eqs. (102), (103) and (105).
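The scattering-to-bound relations of the previous subsection translate almost line by line into code. The following minimal sketch (Python, shown here instead of the Mathematica implementation mentioned above) evaluates the \(\ell^{\prime}=\ell+1\) matrix element in the QED-like limit \(\zeta_{s}=\zeta_{b}\) for moderate \(n\); it does not include the prefactor splitting required for very large \(n\), and the sample values at the end are purely illustrative.

```python
import math
from functools import lru_cache

def P(n, l, zb_t, zs):
    """Prefactor P(n, l); zb_t = zeta_b / n, zs = zeta_s (both assumed > 0 here)."""
    val = math.sqrt(math.factorial(n - l - 1) / math.factorial(n + l))
    val *= 2**(2 + 2*l) * zb_t**(l + 1.5) / (math.sqrt(n) * zs * (1 + zb_t**2)**(3 + l))
    val *= math.sqrt(2*math.pi*zs / (1 - math.exp(-2*math.pi*zs)))
    val *= math.exp(-2*zs*math.atan(1.0/zb_t))   # arccot(x) = arctan(1/x) for x > 0
    val *= math.sqrt(math.prod(j**2 + zs**2 for j in range(l)))
    return val

def R(l, x, zb_t, zs):
    """Recursion R_{n,l}(x) as given above (n enters only through zb_t)."""
    @lru_cache(maxsize=None)
    def rec(k):
        if k < 0:
            return 0.0
        if k == 0:
            return 1.0
        if k == 1:
            return (-2*(3 + l)*(zb_t**2 - 1) + 4*zb_t*zs) / (1 + zb_t**2)
        return ((2*(2 + l + k)*(1 - zb_t**2) + 4*zb_t*zs) / (k*(1 + zb_t**2)) * rec(k - 1)
                - (4 + 2*l + k) / k * rec(k - 2))
    return rec(x)

def I_bsf_lplus1(n, l, alpha, v):
    """|I_R^{l'=l+1}| in the QED-like limit zeta_s = zeta_b = alpha/v."""
    zs = zb = alpha / v
    zb_t = zb / n
    comb = ((2 + l)*zb_t + zs) * R(l, n - l - 5, zb_t, zs) \
        - 2*((2 + l)*zb_t + 2*zs) * R(l, n - l - 4, zb_t, zs) \
        + 6*zs * R(l, n - l - 3, zb_t, zs) \
        + 2*((2 + l)*zb_t - 2*zs) * R(l, n - l - 2, zb_t, zs) \
        + (zs - (2 + l)*zb_t) * R(l, n - l - 1, zb_t, zs)
    return 2*n**3 * math.sqrt((l**2 + zs**2)*((l + 1)**2 + zs**2)) * P(n, l, zb_t, zs) * abs(comb)

print(I_bsf_lplus1(n=5, l=0, alpha=0.1, v=0.05))
```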
and \[|I_{R,\mathrm{trans}}^{\ell^{\prime}=\ell-1}|=N_{n,\ell}(\tilde{ \kappa})N_{n^{\prime},\ell-1}(\tilde{\kappa}^{\prime})\times \tag{13}\] \[2^{-1}\ell(2\ell+1)\Gamma(2\ell)(1-z)^{\frac{n^{\prime}-n}{2}} \frac{z^{\ell}}{\tilde{\kappa}^{2}\tilde{\kappa}^{\prime}}\times\] \[\bigg{|}-\frac{\tilde{\kappa}+\tilde{\kappa}n-\tilde{\kappa}^{ \prime}n^{\prime}}{(\tilde{\kappa}-\tilde{\kappa}^{\prime})^{2}}\,_{2}F_{1}(-n +\ell-1;n^{\prime}+\ell;2\ell;z)\] \[-2\frac{\tilde{\kappa}n-\tilde{\kappa}^{\prime}n^{\prime}}{( \tilde{\kappa}^{2}-\tilde{\kappa}^{\prime 2})}\,_{2}F_{1}(-n+\ell;n^{\prime}+\ell;2 \ell;z)\] \[+\frac{\tilde{\kappa}-\tilde{\kappa}n+\tilde{\kappa}^{\prime}n^{ \prime}}{(\tilde{\kappa}+\tilde{\kappa}^{\prime})^{2}}\,_{2}F_{1}(-n+\ell+1;n ^{\prime}+\ell;2\ell;z)\bigg{|}\,,\] with \(z=\frac{4\tilde{\kappa}\tilde{\kappa}^{\prime}}{(\tilde{\kappa}+\tilde{\kappa }^{\prime})^{2}}<1\) and \(\tilde{\kappa}=\alpha\mu/n\), \(\tilde{\kappa}^{\prime}=\alpha^{\prime}\mu^{\prime}/n^{\prime}\) and \(\tilde{\kappa}\neq\tilde{\kappa}^{\prime}\). For the special case \(\alpha^{\prime}\mu^{\prime}=\alpha\mu\) (particularly important for QED), see Ref. [21; 72]. The normalization is given by: \[N_{n,\ell}(\tilde{\kappa})=\frac{\tilde{\kappa}^{3/2}}{\sqrt{n}}\frac{2}{(2 \ell+1)!}\left(\frac{(n+\ell)!}{(n-\ell-1)!}\right)^{1/2}\,. \tag{14}\] For large initial state principle quantum numbers \(n=\mathcal{O}(10)\), the numerical evaluation of the Hypergeometric functions can be problematic for transitions where \(z\to 1\) (\(n^{\prime}\to n\)). To improve the numerical stability in this regime, we use a lengthy expression for the Hypergeometric function expanded around \(1-z\). For the special arguments of the Hypergeometric functions as given above, we can further simplify the expression and finally obtain to all orders in \(1-z\): \[{}_{2}F_{1}(a;b;c;z)\to(-1)^{-a}(-1+z)^{-a-b+c}\times \tag{15}\] \[\frac{\Gamma(1-a)\Gamma(c)}{\Gamma(b)\Gamma(1-a-b+c)}\,_{2}F_{1}(c -a;c-b;c-a-b+1;1-z)\,.\] Notice that this substitution is an identity for all sets of \(\{a;b;c;z\}\) arguments given above. In practice, we use the substitution for \(z>0.7\), which allows us to obtain stable numerical results for all bound-to-bound transitions with \(n\leq 100\). Increasing the number of digits for \(z\) (_e.g._ via MaxExtraPrecision in Mathematica) allows for even larger \(n\) values and also to check the stability, with the cost of loosing efficiency. ## Appendix B Cross sections and rates ### Thermal average and Milne relations In the non-relativistic limit, the thermal average can be written as \[\big{\langle}\sigma v\big{\rangle}_{i}=\Big{(}\frac{\mu}{2\pi T}\Big{)}^{3/2} \int\!\mathrm{d}^{3}v\,\mathrm{e}^{-\frac{\mu\kappa^{2}}{2T}}\left[1+f(\Delta E )\right]\,(\sigma v)_{i}\,, \tag{16}\] where \(f(\Delta E)=1/(\mathrm{e}^{\Delta E/T}-1)\). We note that the thermal averages in non-Abelian gauge theories need to be evaluated carefully due to oscillatory features.8 Footnote 8: For example, for effective coupling \(\alpha_{s}^{\mathrm{eff}}<0<\alpha_{b}^{\mathrm{eff}}\) relevant in \(SU(N_{c})\), \((\sigma v)_{n\ell}\) features \(n-\ell-1\) local minima in its velocity dependence. Fewer local minima arise when \(0<\alpha_{s}^{\mathrm{eff}}<\alpha_{b}^{\mathrm{eff}}\). As usual, the cross section includes an average of initial state degrees of freedom and a sum over final state degrees of freedom. 
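For orientation, the thermal average in Eq. (16) reduces to a one-dimensional integral over the relative velocity once the angular part of \(d^{3}v\) is carried out. The following Python sketch illustrates this reduction; the constant toy cross section and the identification \(\Delta E=\mu v^{2}/2+E_{\mathcal{B}}\) for the energy of the emitted gauge boson are illustrative assumptions (not the expressions used in our numerical code), and the integration range is split into sub-intervals so that adaptive quadrature can resolve the local minima of \((\sigma v)_{n\ell}\) mentioned in footnote 8.

```python
import numpy as np
from scipy.integrate import quad

def thermal_average(sigma_v, mu, T, E_bind):
    """Thermal average of Eq. (16), reduced to a 1D integral over v.

    sigma_v : callable returning (sigma v) as a function of the relative velocity
    mu      : reduced mass of the annihilating pair
    E_bind  : binding energy of the final bound state; Delta E = mu v^2/2 + E_bind
              is taken here as the energy of the emitted gauge boson (assumption).
    """
    def integrand(v):
        dE = 0.5 * mu * v**2 + E_bind
        bose = 1.0 / np.expm1(dE / T)              # f(Delta E) = 1/(e^{dE/T} - 1)
        return v**2 * np.exp(-0.5 * mu * v**2 / T) * (1.0 + bose) * sigma_v(v)

    # (mu/2 pi T)^{3/2} 4 pi v^2 e^{-mu v^2/2T} is the normalized thermal distribution
    norm = (mu / (2.0 * np.pi * T))**1.5 * 4.0 * np.pi
    v_typ = np.sqrt(2.0 * T / mu)                  # typical thermal velocity
    edges = [0.0, v_typ, 4.0 * v_typ, 12.0 * v_typ]
    return norm * sum(quad(integrand, a, b, limit=200)[0]
                      for a, b in zip(edges[:-1], edges[1:]))

# example: a constant (sigma v); the result exceeds 1 only through the Bose factor
print(thermal_average(lambda v: 1.0, mu=0.5, T=0.05, E_bind=0.01))
```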
For the case of bound states with Fermionic spin-1/2 constituents, one has to distinguish between bound states in a spin-singlet or spin-triplet configuration. The main difference between the two cases occurs for the bound state decay rate, see below. Since the electric dipole interaction is spin-independent, bound-to-bound transitions do not change the spin within our approximations, _i.e._ transitions occur exclusively among spin-singlet states or spin-triplet states, respectively. The transition rates are identical in both cases. Furthermore, for scattering-to-bound processes, the contribution to the BSF cross section for formation of any single spin degree of freedom of the bound state is the same. Therefore, we account for the formation of spin-singlet or spin-triplet bound states by the prefactor \(\xi=1/4\) or \(\xi=3/4\), respectively, in Eqs. (12) and (12). Furthermore, the BSF cross sections also apply for bound states composed of charged scalars, with \(\xi=1\). The reason is that the factor \(1/4=1/2\times 1/2\) from averaging over initial state spin and the factor \(4=3+1\) from summing over all final state spin configurations cancel out in the Fermionic case. For computational efficiency, we use the following Milne relations based on detailed balance to obtain the inverse processes of BSF, _i.e._ ionization, and bound state de-excitation, _i.e._ excitation. Assuming leading order non-relativistic expansions for the equilibrium yields \(Y_{j}^{\mathrm{eq}}\) (the combined particle and anti-particle yield) and \(Y_{\mathrm{B}_{i}}^{\mathrm{eq}}\), the ionization rate can be expressed as \[\Gamma_{\mathrm{ion}}^{i}=\frac{s}{4}\,\frac{(Y_{j}^{\mathrm{eq}})^{2}}{Y_{ \mathrm{B}_{i}}^{\mathrm{eq}}}\big{\langle}\sigma v\big{\rangle}_{i}\,. \tag{17}\] Note that for bound states with Fermionic constituents, it holds for both spin-singlet and spin-triplet states when using the appropriate BSF cross section as described above and accounting for the factors of spin degrees of freedom contained in the equilibrium yields. The transition rates among the set of bound states are similarly related via \[\Gamma_{\mathrm{trans}}^{i\to j}=\Gamma_{\mathrm{trans}}^{j\to i}\frac{Y_{ \mathrm{B}_{j}}^{\mathrm{eq}}}{Y_{\mathrm{B}_{i}}^{\mathrm{eq}}}\,. \tag{18}\] ### Dark QED In the absence of light Fermions the coupling is not running within dark QED and, hence, the effective coupling strengths are identical for bound and scattering states, _i.e._\(\alpha_{s}^{\rm eff}=\alpha_{b}^{\rm eff}=\alpha\) here. The annihilation cross section into a pair of massless dark photons, including Sommerfeld enhancement, is given by [73; 74] \[(\sigma v)_{\rm ann}=\frac{\pi\alpha^{2}}{m^{2}}\,S_{0}\left(\frac{\alpha}{v} \right)\,, \tag{104}\] where \[S_{0}(\zeta)\equiv\frac{2\pi\zeta}{1-e^{-2\pi\zeta}}\,. \tag{105}\] The BSF cross section, which includes the summation over the degenerate magnetic quantum number of the final state \(n\ell\) and also considers all possible initial angular momenta, is in the case of \(U(1)\) interactions given by \[\left(\sigma v\right)_{n\ell}=\xi\frac{\pi\,\alpha^{2}}{m^{2}}\frac{2^{9}}{3} \,S_{\rm BSF}(n,\ell,\frac{\alpha}{nv},\frac{\alpha}{v})\,, \tag{106}\] where \(\xi\) accounts for spin factors as defined in Sec..2.1, and \[S_{\rm BSF}(n,\ell,\tilde{\zeta}_{b}, \zeta_{s})\equiv\,\frac{1}{2^{6}n\tilde{\zeta}_{b}}\left(1+\tilde{ \zeta}_{b}^{2}\right)^{3}\times\] \[\left[(\ell+1)|I_{R}^{\ell^{\prime}=\ell+1}|^{2}+\ell|I_{R}^{\ell ^{\prime}=\ell-1}|^{2}\right]. 
\tag{107}\] \(S_{\rm BSF}\) can be evaluated numerically in an efficient way by using the radial integral formulas laid out in App..2.1. Further analytic simplifications due to the absence of scale running of the coupling strength and the identical initial and final state gauge representations are possible, though not used in our numerical evaluation. For spin-singlet bound states, we adopt the decay rate into two dark photons for the s-wave states from Ref. [75; 76]: \[\Gamma_{\rm dec,\,QED}^{n\ell\to\gamma\gamma}=\delta_{\ell,0}\,\frac{m\, \alpha^{5}}{2n^{3}}\,. \tag{108}\] For spin-triplet bound states we adopt the decay rate into three dark photons for the s-wave states from Ref. [77]: \[\Gamma_{\rm dec,\,QED}^{n\ell\to 3\gamma}=\frac{4(\pi^{2}-9)\alpha}{9\pi} \times\Gamma_{\rm dec,\,QED}^{n\ell\to\gamma\gamma}\,. \tag{109}\] Lastly, the de-excitation rate relates to the dipole matrix element via \[\Gamma_{\rm trans,\,QED}^{n^{\prime}\ell^{\prime}\to n\ell} = \frac{4\alpha}{3}(2\ell+1)\left(\frac{m\alpha^{2}}{4}\left|\frac{ 1}{n^{2}}-\frac{1}{n^{\prime 2}}\right|\right)^{3} \tag{110}\] \[\times\,|\left\langle\psi_{n^{\prime}\ell^{\prime}}\,\mathbf{r} \,|\psi_{n\ell}\right\rangle|^{2},\] according to App..2.2. ### Dark QCD In Yang-Mills-theories, gauge boson self interactions give rise to running of the coupling strength, even in the absence of light Fermions. For our numerical benchmark, we defined the value \(\alpha(m)\equiv\,0.025\) and employ one-loop running. Using this choice, the non-perturbative regime \(\alpha(m/x)=1\) starts at \(x\approx 4\times 10^{9}\), which holds for any mass since \(m\) must drop out of dimensionless expressions, being the only mass scale in the theory. The heavy Fermions are from the fundamental representation of \(SU(3)\) and thus can form singlet (\(\mathbf{1}\)) and octet (\(\mathbf{8}\)) two particle states. We evaluate the wave functions of the initial scattering (bound) states, \(s\) (\(b\)), at their respective (Bohr-) momentum scale. The effective couplings are thus given by \[\alpha_{b}^{\rm eff} \equiv C_{F}\,\alpha\left(\frac{m}{2}\frac{\alpha_{b}^{\rm eff}}{n} \right)\,, \tag{111}\] \[\alpha_{s}^{\rm eff} \equiv\frac{2C_{F}-C_{A}}{2}\,\alpha\left(\frac{m}{2}v\right)\,, \tag{112}\] where \(C_{F}=4/3\) and \(C_{A}=3\). Eq. (111) is an implicit definition easily solved for, either numerically or analytically order by order. The annihilation into two gluons is possible from singlet or octet scattering states and takes the form \[(\sigma v)_{\rm ann}=\frac{7}{27}\frac{\pi\alpha(2m)^{2}}{m^{2}}\left(\frac{2 }{7}S_{0}^{[\mathbf{1}]}+\frac{5}{7}S_{0}^{[\mathbf{8}]}\right)\,. \tag{113}\] The Sommerfeld factors are given by \[S_{0}^{[\mathbf{1}]} =\frac{\alpha(mv/2)}{v}\frac{2\pi C_{F}}{1-e^{-2\pi C_{F}\alpha( m\frac{m}{2})/v}}\,, \tag{114}\] \[S_{0}^{[\mathbf{8}]} =\frac{\alpha(mv/2)}{v}\frac{2\pi(C_{F}-C_{A}/2)}{1-e^{-2\pi(C_{F }-C_{A}/2)\alpha(mv/2)/v}}\,. \tag{115}\] The BSF cross section in a general \(SU(N_{c})\) theory takes the form [14] \[\left(\sigma v\right)_{n\ell}=\xi\frac{\pi\,\alpha_{b}^{\rm eff}\alpha_{\rm BSF }}{m^{2}}\frac{2^{9}C_{F}}{3N_{c}^{2}}\,S_{\rm BSF}(n,\ell,\frac{\alpha_{b}^ {\rm eff}}{nv},\frac{\alpha_{s}^{\rm eff}}{v}), \tag{116}\] with \(\alpha_{\rm BSF}=\alpha\left(mv^{2}/4+E_{\mathcal{B}_{n\ell}}\right)\) and \(S_{\rm BSF}\) defined in Eq. (B). Here \(\xi\) accounts for spin factors as defined in Sec..2.1. 
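To make the above definitions concrete, the following Python sketch solves the implicit definition of \(\alpha_{b}^{\rm eff}\), Eq. (111), by fixed-point iteration, together with \(\alpha_{s}^{\rm eff}\) of Eq. (112). The one-loop running is implemented with the pure \(SU(3)\) Yang-Mills coefficient \(\beta_{0}=11\); this is an assumption on the convention, but it is consistent with the quoted onset of the non-perturbative regime at \(x\approx 4\times 10^{9}\) for \(\alpha(m)=0.025\).

```python
import numpy as np

BETA0 = 11.0                 # one-loop coefficient of pure SU(3) Yang-Mills (assumption)
CF, CA = 4.0 / 3.0, 3.0

def alpha_run(mu, m, alpha_m=0.025):
    """One-loop running coupling with boundary condition alpha(m) = alpha_m."""
    return 1.0 / (1.0 / alpha_m + BETA0 / (2.0 * np.pi) * np.log(mu / m))

def alpha_b_eff(n, m, alpha_m=0.025, tol=1e-12, itmax=200):
    """Solve Eq. (111), alpha_b_eff = C_F alpha(m alpha_b_eff/(2n)), by fixed-point iteration."""
    ab = CF * alpha_m
    for _ in range(itmax):
        ab_new = CF * alpha_run(0.5 * m * ab / n, m, alpha_m)
        if abs(ab_new - ab) < tol:
            break
        ab = ab_new
    return ab_new

def alpha_s_eff(v, m, alpha_m=0.025):
    """Eq. (112); negative for the octet scattering state, cf. footnote 8."""
    return 0.5 * (2.0 * CF - CA) * alpha_run(0.5 * m * v, m, alpha_m)

# scale at which alpha(m/x) reaches 1 for alpha(m)=0.025; of the order of the 4e9 quoted above
x_np = np.exp(2.0 * np.pi * (1.0 / 0.025 - 1.0) / BETA0)
print(x_np, alpha_b_eff(n=1, m=1.0), alpha_b_eff(n=100, m=1.0))
```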
The spin-singlet bound state decay rate into two gluons is given by [9; 14; 21] \[\Gamma_{\rm dec,\,QCD}^{n\ell\to gg}=\delta_{\ell,0}\,\frac{m\,C_{F}}{4n^{3}}\alpha(m)^{2}\left(\alpha_{b}^{\rm eff}\right)^{3}\,. \tag{117}\] We neglect spin-triplet bound states for dark QCD. Dark QCD automatically corresponds to the limiting case of no transitions, _i.e._ \(\Gamma_{\rm trans}^{i\to j}=0\), therefore the effective cross section Eq. (13) simplifies to [14; 36] \[\left\langle\sigma v\right\rangle_{\rm eff}\to\left\langle\sigma v\right\rangle_{\rm ann}+\sum_{i}\left\langle\sigma v\right\rangle_{i}\frac{\Gamma_{\rm dec}^{i}}{\Gamma_{\rm ion}^{i}+\Gamma_{\rm dec}^{i}}\,. \tag{118}\] This result can be seen as a straightforward generalization of the single bound state case [4], extended to a sum over individual bound states that do not impact each other. ### superWIMP scenario The expression for BSF of a colored \(t\)-channel scalar mediator is identical to that for dark QCD in Eq. (111), now using \(m=m_{\tilde{q}}\) and the SM strong coupling strength \(\alpha_{s}\) to 5-loop accuracy as implemented in RunDec 3 [68] in place of \(\alpha\), as well as \(\xi=1\). The annihilation cross section reads \[(\sigma v)_{\rm ann}=\frac{14}{27}\frac{\pi\alpha(2m_{\tilde{q}})^{2}}{m_{\tilde{q}}^{2}}\left(\frac{2}{7}S_{0}^{[\mathbf{1}]}+\frac{5}{7}S_{0}^{[\mathbf{8}]}\right)\,, \tag{112}\] and the decay rate is given by [9; 14] \[\Gamma_{\rm dec}^{n\ell\to gg}=\delta_{\ell,0}\,\frac{m_{\tilde{q}}\,C_{F}}{8n^{3}}\alpha(m_{\tilde{q}})^{2}\left(\alpha_{b}^{\rm eff}\right)^{3}\,. \tag{113}\] Gauge invariance fixes the gauge representations of \(\tilde{q}\) to that of the bottom quark in our model, hence the electromagnetic charge is \(Q=-1/3\). The addition of electromagnetic interactions leads to transitions between bound states in dipole approximation, but to no additional relevant decay or BSF channels. The transition matrix elements are computed according to App. A.2 and enter the de-excitation rates as \[\Gamma_{\rm trans,\,SW}^{n^{\prime}\ell^{\prime}\to n\ell}=\frac{4Q_{\tilde{q}}^{2}\,\alpha_{\rm em}}{3}(2\ell+1)(\Delta E_{nn^{\prime}})^{3}\,|\langle\psi_{n^{\prime}\ell^{\prime}}^{[\mathbf{1}]}|\,\mathbf{r}\,|\psi_{n\ell}^{[\mathbf{1}]}\rangle|^{2}\,, \tag{114}\] where \[\Delta E_{nn^{\prime}}=\frac{m_{\tilde{q}}}{4}\left|\frac{\alpha_{b}^{\rm eff}(n)}{n^{2}}-\frac{\alpha_{b}^{\rm eff}(n^{\prime})}{n^{\prime 2}}\right|\,. \tag{115}\] The fine structure constant is \(\alpha_{\rm em}=1/128.9\). ## Appendix C Relic abundance for dark QED Here, we explore the parameter space of dark QED consistent with the relic density measurement. We assume that the dark QED sector is in kinetic equilibrium with the SM heat bath. Solving Eq. (11) for a large number of points in the two-dimensional parameter space of the model, we compute the coupling strength \(\alpha\) as a function of the dark matter mass \(m\) that provides \(\Omega_{\chi}h^{2}\simeq 0.12\) [70]. Figure 10 displays the respective results under various approximations. While the known cases of Sommerfeld enhanced annihilation and capture into the ground state further improve the tree-level result, it is visible from the blue solid line that the inclusion of the first \(\sim 10^{2}\) bound states \(\mathcal{B}_{nl}\) and all possible electric dipole transitions among them (about \(10^{6}\) in total) results in corrections that we consider worth reporting.
To arrive at this result, we have included the spin-singlet and spin-triplet decays of the s-wave bound states in the effective cross section, such that all curves other than the blue solid line confirm the earlier result in Ref. [18]. We have checked convergence with respect to the number of included bound states and transitions. This completes the discussion within the electric dipole operator picture, in particular for ultra-soft processes with one dark photon.
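For completeness, the branching-fraction structure underlying the effective cross section is easily illustrated in the no-transition limit, Eq. (118): each bound state contributes its BSF cross section weighted by the probability to decay before being ionized. The following minimal sketch assembles this combination from per-state quantities; the numbers are placeholders, not results of our computation.

```python
import numpy as np

def sigma_v_eff_no_transitions(sv_ann, sv_bsf, gamma_dec, gamma_ion):
    """No-transition limit of the effective cross section, Eq. (118)."""
    sv_bsf, gamma_dec, gamma_ion = map(np.asarray, (sv_bsf, gamma_dec, gamma_ion))
    return sv_ann + np.sum(sv_bsf * gamma_dec / (gamma_ion + gamma_dec))

# placeholder values for three bound states (arbitrary units)
print(sigma_v_eff_no_transitions(1.0, [0.5, 0.3, 0.2], [2.0, 1.0, 0.5], [4.0, 8.0, 20.0]))
```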
2303.12986
Self-sustained deformable rotating liquid He cylinders: The pure normal fluid $^3$He and superfluid $^4$He cases
We have studied self-sustained, deformable, rotating liquid He cylinders of infinite length. In the normal fluid $^3$He case, we have employed a classical model where only surface tension and centrifugal forces are taken into account, as well as the Density Functional Theory (DFT) approach in conjunction with a semi-classical Thomas-Fermi approximation for the kinetic energy. In both approaches, if the angular velocity is sufficiently large, it is energetically favorable for the $^3$He cylinder to undergo a shape transition, acquiring an elliptic-like cross section which eventually becomes two-lobed. In the $^4$He case, we have employed a DFT approach that takes into account its superfluid character, limiting the description to vortex-free configurations where angular momentum is exclusively stored in capillary waves on a deformed cross section cylinder. The calculations allow us to carry out a comparison between the rotational behavior of a normal, rotational fluid ($^3$He) and a superfluid, irrotational fluid ($^4$He).
Martí Pi, Francesco Ancilotto, Manuel Barranco, Samuel L. Butler, José María Escartín
2023-03-23T01:32:42Z
http://arxiv.org/abs/2303.12986v1
Self-sustained deformable rotating liquid He cylinders: The pure normal fluid \({}^{3}\)He and superfluid \({}^{4}\)He cases ###### Abstract We have studied self-sustained, deformable, rotating liquid He cylinders of infinite length. In the normal fluid \({}^{3}\)He case, we have employed a classical model where only surface tension and centrifugal forces are taken into account, as well as the Density Functional Theory (DFT) approach in conjunction with a semi-classical Thomas-Fermi approximation for the kinetic energy. In both approaches, if the angular velocity is sufficiently large, it is energetically favorable for the \({}^{3}\)He cylinder to undergo a shape transition, acquiring an elliptic-like cross section which eventually becomes two-lobed. In the \({}^{1}\)He case, we have employed a DFT approach that takes into account its superfluid character, limiting the description to vortex-free configurations where angular momentum is exclusively stored in capillary waves on a deformed cross section cylinder. The calculations allow us to carry out a comparison between the rotational behavior of a normal, rotational fluid (\({}^{3}\)He) and a superfluid, irrotational fluid (\({}^{4}\)He). ## I Introduction Helium is the only element in nature that may remain liquid at temperatures close to absolute zero. At very low temperatures, liquid helium is the ultimate quantum fluid, able to form nanoscopic droplets and macroscopic samples as well. In particular, \({}^{4}\)He droplets are considered as ideal matrices for spectroscopic studies of embedded atoms and molecules, and for the study of superfluidity at the atomic scale, including the study of quantum vortices. Determining the experimental size and shape of helium droplets is a challenging problem. First determinations focussed on droplets made of up to several thousand atoms produced by the adiabatic expansion of a cold helium gas.[1] The experiments analyzed the scattering cross section of Kr atoms dispersed by a jet of \({}^{4}\)He or \({}^{3}\)He droplets using DFT density profiles as input.[2; 3] More recently, large helium droplets made of \(10^{8}-10^{11}\) atoms have been created by the hydrodynamic instability of a very low temperature (\(T\)) liquid helium jet passing through the nozzle of a molecular beam apparatus, as reviewed in Ref. [4]. These large drops have been analyzed to determine their shape and, in the case of \({}^{4}\)He, whether they host arrays of quantized vortices or not.[5] Two experimental techniques have been used to characterize large helium drops. Coherent diffractive imaging of x-rays from a free electron laser [5; 6; 7; 8] gives access to a model-independent determination of the two-dimensional (2D) projection of the drop density on a plane perpendicular to the x-ray incident direction via iterative phase retrieval algorithms. Irradiation of helium droplets with intense extreme ultraviolet pulses [9; 10] and subsequent measurement of wide-angle diffraction patterns provides access to full three-dimensional information. However, so far the analysis of the densities is model-dependent as their shape has to be parameterized using some guessed solid figure which is then used to produce a diffraction pattern which may be compared to the experimental one. 
The conclusion drawn from these experiments, which are mostly on \({}^{4}\)He drops, is that helium drops are mainly spherical and that only a small fraction of them (about 7%) [10] are deformed and host some angular momentum, which is likely acquired during their passage through the nozzle of the experimental apparatus. The experimental results have been compared to calculations made for incompressible viscous droplets only subject to surface tension and centrifugal forces, [11; 12; 13; 14; 15] or based on a Density Functional Theory (DFT) approach [16; 17; 18] specifically designed to describe liquid helium.[19; 20; 21; 22] It has been found [6; 10] that spinning \({}^{4}\)He droplets follow the sequence of shapes characteristic of rotating viscous droplets.[11; 14] This unexpected result is due to the presence of vortex arrays in the spinning droplets [17; 18] that confer to them the appearance of rotating rigid-bodies. Large \({}^{3}\)He drops have been detected as well in the coherent diffractive imaging of x-rays experiments.[7; 8] It is worth mentioning that classical and DFT calculations for \({}^{3}\)He droplets have yielded very similar relationships between angular velocity and angular momentum,[23] which is due to the fact that at the experimental temperatures (\(T\sim 0.15\) K) [24] liquid \({}^{3}\)He behaves as a normal fluid with a finite viscosity. So far, the most reliable approach to study spinning helium droplets is the DFT approach. It has, however, the limitation that the complexity of DFT calculations dramatically grows with droplet size, making them prohibitively costly even for a few tens of thousands of atoms, which is significantly below typical experimental sizes. Addressing large droplets is especially needed to study large vortex array configurations in \({}^{4}\)He droplets as well as the recently produced spinning mixed \({}^{3}\)He-\({}^{4}\)He droplets,[25] for which classical[26; 27] and DFT[28] calculations are already available. To circumvent this limitation, it is often resorted to a simpler cylindrical geometry which restricts the calculations to less demanding 2D configurations while the basic physics may still be caught by the model. Indeed, self-sustained \({}^{4}\)He circular and deformed cylinder configurations have been used to describe the density of spinning \({}^{4}\)He droplets on the plane of symmetry perpendicular to the rotation axis.[29; 6] We stress that these are self-sustained configurations, and not rotating cylindrical vessels (circular or elliptic) filled with helium for which there exists a vast literature, see e.g. Refs. [30; 32], and references therein. In this work we describe self-sustained liquid He cylinders of infinite length under rotation. The equilibrium and stability of a rotating column of a viscid fluid subject to planar disturbances have been addressed in detail[33; 27] applying techniques similar to those used to describe rotating viscid droplets.[11; 13] As in the present study, only translationally symmetric (planar) disturbances leading to non-circular cylinder cross sections have been considered there. Axisymmetric Rayleigh instabilities, always present in fluid columns, were set aside. 
In the case of \({}^{3}\)He, we have employed a classical model for viscous liquids subject to centrifugal and surface tension forces,[13] and a normal liquid DFT[20; 34] plus semiclassical approach, treating the \({}^{3}\)He cylinders in the DFT plus rotating Thomas-Fermi (TF) framework.[35; 23] This semiclassical approach is justified by the large number of atoms per unit length in the cylinder. The DFT-TF method represents a realistic framework allowing to make the calculations affordable. It can be extended to mixed helium systems as well.[21; 28] As for droplets,[36] thermal effects on the energetics and morphology of the cylinder are expected to be negligible at the experimental temperatures, so we shall use a zero temperature method.[23] Zero temperature means here a very low \(T\), but above \(T\sim 2.7\) mK at which \({}^{3}\)He becomes superfluid. In the \({}^{4}\)He case, we have employed a DFT approach which takes into account its superfluid character,[22] limiting the description to vortex-free configurations where angular momentum is exclusively stored in capillary waves on a deformed cross-section cylinder. Under these conditions, the calculations allow us to carry out a sensible comparison between the rotational behavior of a normal fluid (\({}^{3}\)He) and of an irrotational superfluid (\({}^{4}\)He) for fixed values of the atom number and angular momentum per unit length. In the presence of vortices in addition to capillary waves, this comparison is obscured as one compares simply connected configurations for \({}^{3}\)He cylinders with multiply connected configurations of vortex-hosting \({}^{4}\)He cylinders. Let us recall that in the case of droplets, the presence of vortex arrays dramatically changes the appearance of the droplet;[18] at fixed angular momentum and atom number in the droplet, the higher the number of vortices the more compact (i.e., closer to an oblate axisymmetric shape) the droplet becomes. Hence, the universal behavior found for classical drops[11] is lost. At variance, we have found that, independently of their size, the \({}^{4}\)He equilibrium configurations hosting capillary waves alone lay on a line in the scaled angular momentum and angular velocity plane,[11] disclosing a _de facto_ nearly universal behavior. Let us mention that it has recently been found[37] that, under appropriate experimental conditions, moderately deformed vortex-free \({}^{4}\)He drops prevail when the number of atoms is smaller than about \(10^{8}\). This work is organized as follows. In Sec. II we present the methods used to describe He cylinders, thoroughly described in Refs. [13; 17], and [23] in the case of droplets. The results are discussed in Sec. III, and a summary and discussion are presented in Sec. IV. We outline in Appendix A the rationale of how we have defined dimensionless angular velocity and dimensionless angular momentum for the cylinder geometry, and present in Appendices B (\({}^{3}\)He) and C (\({}^{4}\)He) the results of a simple model where the cross section of the deformed cylinder is restricted to be elliptical, treating \({}^{3}\)He (\({}^{4}\)He) as a rotational (irrotational) fluid; we call this model Elliptic Deformations (ED) model. ## II Models ### Classical approach to viscous systems subjected to centrifugal and surface tension forces The incompressible Navier-Stokes equations are solved in a reference frame rotating about a fixed axis perpendicular to the solution domain. 
An arbitrary Lagrange-Euler technique is employed which allows the solution domain to deform and conform to the evolving shape of the drop. The rate of displacement of the outer boundary is set by the normal velocity at the outer boundary and surface tension effects are modeled as boundary normal stresses that are proportional to the degree of boundary curvature.[38] Models are time dependent and are initialized with elliptical domains, with rotation axis passing through the origin, with semimajor and semiminor axes of \((1+\delta)\) and \((1+\delta)^{-1}\) where \(\delta=0.01\). The small difference from an initial circular shape serves to seed possible non-axisymmetric perturbations. At each time step of the simulation, the moment of inertia of the drop is calculated and used to update the rotation rate of the reference frame, assuming constant angular momentum. Models are run until the drop shapes achieve a steady state. The equations are solved using the commercial finite element modeling package Comsol Multiphysics.[39] Refs. [26] and [27] give more detailed descriptions of the classical numerical model. Classical rotating droplets subject to surface tension and centrifugal forces alone are characterized by two dimensionless variables, angular momentum \(\Lambda\) and angular velocity \(\Omega\), that allow description of the sequence of droplet shapes in a universal phase diagram, independently of the droplet size. [11; 12; 13] For the cylinder geometry, the expressions for \(\Omega\) and \(\Lambda\) are [27] (see also Appendix A) \[\begin{split}\Omega&\equiv\sqrt{\frac{m\,\rho_{0} \,R^{3}}{8\,\gamma}}\;\omega\\ \\ \Lambda&\equiv\frac{\hbar}{\sqrt{8\gamma R^{5}m\rho_{0 }}}\,\mathcal{L}\end{split} \tag{1}\] where \(\mathcal{L}\) is the angular momentum per unit length in \(\hbar\) units, \(\gamma\) and \(\rho_{0}\) are the surface tension and atom density of liquid He at zero temperature and pressure, \(m\) is the mass of the He atom, and \(R\) is the sharp radius of the circular He cylinder at rest. If \(\mathcal{N}\) is the number of He atoms per unit length of the cylinder, \(R=\sqrt{\mathcal{N}/(\pi\rho_{0})}\). For liquid \({}^{3}\)He, \(\gamma=0.1132\) K A\({}^{-2}\) and \(\rho_{0}=0.016342\) A\({}^{-3}\). Besides, \(\hbar^{2}/m=16.0836\) K A\({}^{2}\). For liquid \({}^{4}\)He one has \(\gamma=0.274\) K A\({}^{-2}\), \(\rho_{0}=0.021836\) A\({}^{-3}\), and \(\hbar^{2}/m=12.1194\) K A\({}^{2}\). Similar to He droplets made of \(N\) atoms, which are denoted as He\({}_{N}\), we shall denote as He\({}_{\mathcal{N}}\) helium cylinders with \(\mathcal{N}\) atoms per unit length. ### DFT plus semiclassical Thomas-Fermi approach to normal fluid \({}^{3}\)He We have adapted to the cylindrical geometry the approach used in Ref. [23] to describe rotating \({}^{3}\)He droplets. Within DFT, the total energy \(E\) of a \({}^{3}\)He\({}_{\mathcal{N}}\) cylinder at zero temperature is written as a functional of the \({}^{3}\)He atom density per unit volume \(\rho\), here taken from Ref. [20]: \[E[\rho]=\int d\mathbf{r}\frac{\hbar^{2}}{2m^{*}}\tau+\int d\mathbf{r}\, \mathcal{E}_{c}[\rho]\equiv\int d\mathbf{r}\,\mathcal{E}[\rho] \tag{2}\] The first term is the kinetic energy of \({}^{3}\)He with an effective mass \(m^{*}\), and \(\tau\) is the kinetic energy density per unit volume, both depending on \(\rho\). In the TF approximation of Ref. [20] (see also Ref. [34]), \[\tau=\frac{3}{5}(3\pi^{2})^{2/3}\rho^{5/3}+\frac{1}{18}\frac{(\nabla\rho)^{2} }{\rho} \tag{3}\] The second term in Eq. 
(3) is a Weizsacker-type gradient correction which is necessary in order to have helium density profiles with an exponential fall-off at the surface. The energy functional, represented by the energy per unit volume \(\mathcal{E}[\rho]\) in Eq. (2) within the TF approximation given by Eq. (3) accurately reproduces the equation of state of the bulk liquid and yields the correct value for the \({}^{3}\)He surface tension. [20] The equilibrium configuration of the cylinder is obtained by solving the Euler-Lagrange (EL) equation arising from functional minimization of Eq. (2) \[\frac{\delta}{\delta\rho}\left\{\frac{\hbar^{2}}{2m^{*}}\tau+\mathcal{E}_{c} \right\}=\mu \tag{4}\] where \(\mu\) is the \({}^{3}\)He chemical potential corresponding to the number of He atoms per unit length of the cylinder. Defining \(\Psi=\sqrt{\rho}\), Eq. (4) can be written as a Schrodinger-like equation [20] \[\mathcal{H}[\rho]\,\Psi=\mu\Psi \tag{5}\] where \(\mathcal{H}\) is the one-body effective Hamiltonian that results from the functional variation. When the rotating cylinder -made of fermions in the normal phase- is addressed in the TF approximation, the Fermi sphere is shifted by the motion of the cylinder as a whole; this adds to its energy density \(\mathcal{E}[\rho]\) a rotational term which has the rigid body appearance [23; 35] \[R[\rho]=\int d\mathbf{r}\,\mathcal{R}[\rho]=\int d\mathbf{r}\,\mathcal{E}[ \rho]+\frac{1}{2}L\omega^{2}=\int d\mathbf{r}\,\mathcal{E}[\rho]+\frac{L^{2}} {2I} \tag{6}\] where \(\mathcal{R}[\rho]\) is the Routhian density of the cylinder, \(L\) is the angular momentum, \(\omega\) is the angular velocity, and \(I\) is the moment of inertia. Due to the translational invariance of the system along the symmetry axis of the cylinder (\(z\) direction), the atom density per unit volume only depends on the \(x\) and \(y\) variables and the integral on \(z\) just yields the length \(\ell\) of the cylinder. Hence, Eq. (5) is a two-dimensional partial differential equation on the \(x\) and \(y\) variables, and from now on the energy, Routhian and moment of inertia, integrated on the \(x\) and \(y\) variables are quantities per unit length. In particular, \[I=m\int dx\,dy\,(x^{2}+y^{2})\rho(x,y) \tag{7}\] Figure 1: Density profile of the \({}^{3}\)He\({}_{\mathcal{N}}\) cylinder with \(\mathcal{N}=1500\) atoms/Å for two circular configurations corresponding to \(\Lambda=0\) (red dashed line) and \(1.5\) (black solid line). The \(\Lambda=1.5\) cylinder is metastable. is the moment of inertia per unit length of the \({}^{3}\)He cylinder around the \(z\)-axis, and \(\hbar{\cal L}=I\omega\) is the angular momentum per unit length. We stress that the rigid-body moment of inertia is not an imposed ingredient within the DFT-TF framework. It arises naturally from the TF approximation.[23; 35] Equations (5) and (14) have been solved adapting the \({}^{4}\)He-DFT-BCN-TLS computing package[40] to the \({}^{3}\)He functional. To take full advantage of the Fast Fourier Transform[41] used to carry out the convolution integrals in the DFT mean field \({\cal H}[\rho]\), we work in Cartesian coordinates and impose periodic boundary conditions (PBC) on the surface of the box where calculations are carried out. 
In the \(x\) and \(y\) directions, this box has to be large enough to accommodate the cylinder in such a way that the He density is sensibly zero at the box surface, the effective wave function \(\Psi({\bf r})\) being defined at the nodes of a 2D \(N_{x}\times N_{y}\) grid spanning the \(x\) and \(y\) directions. The box is made 3D by adding _one single point_ in the \(z\) direction, \(N_{z}=1\). A space step of 0.8 A has been used. We Figure 4: Two-dimensional densities for the \({}^{3}\)He\({}_{1500}\) cylinder in the DFT approach. From top to bottom, they correspond to \(\Lambda=0.5,1,1.5\), and 2. The color bar represents the \({}^{3}\)He density in Å\({}^{-3}\). Several streamlines are superimposed. Also shown are the outlines of classical shapes (magenta lines). Figure 3: Aspect-ratio _vs._ rescaled angular momentum for the \({}^{3}\)He cylinder in the DFT, classical and ED approaches. \(AR=1\) corresponds to circular configurations. The lines are cubic splines of the calculated points. Also shown are the outlines of classical shapes with angular momenta \(\Lambda=0.5,1,1.5\) and 2. Figure 2: Reduced DFT Routhian per unit length \((R|\rho|-\epsilon_{0}{\cal N}-E_{0})/(8\gamma R)\) as a function of rescaled angular momentum. Black triangles: DFT circular configurations. Red circles: DFT deformed configurations. Solid blue line: classical circular configurations. Dashed blue line: classical deformed configurations. have recalculated several configurations in the circular-to-deformed bifurcation region using a space step of 0.4 A and have found that the values of the magnitudes shown in Table 1 are strictly the same. The imposed PBC thus make \(\Psi({\bf r})\) translationally invariant in the \(z\) direction as required by the cylinder geometry and in practice one still handles \(N_{x}\times N_{y}\) points instead of the \(N_{x}\times N_{y}\times N_{z}\) points needed for droplets. The differential operators in \(\mathcal{H}[\rho]\) are approximated by 13-point formulas. The stationary solution corresponding to given values of \(\mathcal{N}\) and \(\mathcal{L}\) is obtained starting from an initial guess \(\Psi_{0}({\bf r})\) and relaxing it using an imaginary-time step relaxation method.[42] It is worth mentioning that at variance with the classical model for viscous liquids subject to centrifugal and surface tension forces, universality in terms of the scaled \(\Lambda\) and \(\Omega\) variables is lost when droplets or cylinders are described using more refined models that incorporate other effects, e.g., liquid compressibility and surface thickness effects. Yet, these variables have been found to be very useful as they allow us to scale the properties of the calculated droplets, which have a radius of tens of nanometers,[23; 17] to those of the experimental ones which have a radius of hundreds of nanometers.[5; 7] For any stationary configuration obtained by solving the EL equation, a sharp density surface is determined by calculating the locus at which the helium density equals \(\rho_{0}/2\). Two lengths are defined corresponding to the shortest and largest distances from the \(z\) (rotation) axis to the sharp surface. We call \(a_{x}\) the largest distance, and \(b_{y}\) the shortest one. The aspect ratio is defined as \(AR=a_{x}/b_{y}\), being one for cylinders (circular cross section) and lager than one otherwise (deformed cross section). 
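As an illustration of these definitions, the following Python sketch extracts the sharp surface (the \(\rho_{0}/2\) locus along the \(x\) and \(y\) axes), the aspect ratio \(AR=a_{x}/b_{y}\), and the rigid-body moment of inertia per unit length of Eq. (7) from a density given on a 2D Cartesian grid. The smooth elliptical test density is an assumption used only to make the example self-contained; it is not a DFT solution.

```python
import numpy as np

rho0 = 0.021836                       # liquid 4He atom density (A^-3)
h = 0.8                               # space step (A), as in the calculations
x = np.arange(-240.0, 240.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# smooth elliptical test density (illustrative only, not a DFT solution)
a_el, b_el = 200.0, 110.0
r_ell = np.sqrt((X / a_el)**2 + (Y / b_el)**2)
rho = 0.5 * rho0 * (1.0 - np.tanh((r_ell - 1.0) * a_el / 6.0))

def half_density_radius(profile, coords):
    """Distance along a positive semi-axis where the density crosses rho0/2."""
    i = np.where(profile < 0.5 * rho0)[0][0]          # first point past the surface
    x1, x2, f1, f2 = coords[i - 1], coords[i], profile[i - 1], profile[i]
    return x1 + (0.5 * rho0 - f1) * (x2 - x1) / (f2 - f1)

i0 = np.argmin(np.abs(x))             # grid index of the rotation axis
a_x = half_density_radius(rho[i0:, i0], x[i0:])       # largest distance (along +x)
b_y = half_density_radius(rho[i0, i0:], x[i0:])       # shortest distance (along +y)
AR = a_x / b_y

# rigid-body moment of inertia per unit length, Eq. (7), in units of the atom mass m
I_over_m = np.sum((X**2 + Y**2) * rho) * h * h
print(a_x, b_y, AR, I_over_m)
```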
### DFT approach to superfluid \({}^{4}\)He Within DFT, the energy of the \({}^{4}\)He\({}_{\mathcal{N}}\) cylinder is written as a functional of the atom density per unit volume \(\rho({\bf r})\) as[22] \[E[\rho]=T[\rho]+E_{c}[\rho]=\frac{\hbar^{2}}{2m}\int d{\bf r}|\nabla\Psi({\bf r})|^{2}+\int d{\bf r}\,\mathcal{E}_{c}[\rho] \tag{8}\] where the first term is the kinetic energy, with \(\rho({\bf r})=|\Psi({\bf r})|^{2}\), and the functional \(\mathcal{E}_{c}\) contains the interaction term (in the Hartree approximation) and additional terms which describe non-local correlation effects.[43] The equilibrium configuration of the cylinder is obtained by solving the EL equation resulting from the functional minimization of Eq. (8), \[\left\{-\frac{\hbar^{2}}{2m}\nabla^{2}+\frac{\delta\mathcal{E}_{c}}{\delta\rho}\right\}\Psi({\bf r})\equiv\mathcal{H}[\rho]\,\Psi({\bf r})=\mu\Psi({\bf r})\;, \tag{9}\] where \(\mu\) is the \({}^{4}\)He chemical potential. Similarly to the case of \({}^{4}\)He droplets, to study spinning \({}^{4}\)He cylinders we work in the corotating frame at angular velocity \(\omega\), \[E^{\prime}[\rho]=E[\rho]-\hbar\omega\,\langle\hat{L}_{z}\rangle \tag{10}\] where \(\hat{L}_{z}\) is the dimensionless angular momentum operator in the \(z\)-direction; one looks for solutions of the EL equation resulting from the functional variation of \(E^{\prime}[\rho]\), \[\left\{\mathcal{H}[\rho]\,-\hbar\omega\hat{L}_{z}\right\}\,\Psi({\bf r})=\,\mu\,\Psi({\bf r})\;. \tag{11}\] The differential operators in \(\mathcal{H}[\rho]\) and the angular momentum operator are approximated by 13-point formulas. As in the \({}^{3}\)He case, the stationary solution corresponding to given values of \(\mathcal{N}\) and \(\mathcal{L}\) is obtained starting from an initial guess \(\Psi_{0}({\bf r})\) and relaxing it using an imaginary-time step relaxation method.[42] Angular momentum can be stored in a superfluid \({}^{4}\)He sample in the form of surface capillary waves and/or quantized vortices.[6; 18] Since we are considering only the contribution of capillary waves, we have used the so-called "imprinting" procedure, which amounts to starting the imaginary-time relaxation from the following expression for the effective wave function \[\Psi_{0}({\bf r})=\rho_{0}^{1/2}({\bf r})\,e^{i\alpha xy}\;. \tag{12}\] The complex phase \(e^{i\alpha xy}\) imprints a surface capillary wave with quadrupolar symmetry around the \(z\) axis,[44] and \(\rho_{0}({\bf r})\) is an arbitrary, vortex-free cylinder density. The initial value of \(\alpha\) is guessed, and during the iterative solution of Eq. (11) the shape of the cylinder changes to provide, at convergence, the lowest energy vortex-free configuration for the desired \({\cal L}\) value, which requires adjustment of the value of \(\omega\) every iteration. Figure 5: Rescaled angular velocity \(\Omega\) _vs._ rescaled angular momentum \(\Lambda\) for the \({}^{3}\)He\({}_{1500}\) cylinder in the ED, classical and DFT approaches. Black triangles, circular DFT configurations; red circles, deformed DFT configurations; blue asterisks, classical calculations.[27] The DFT configurations to the right of the vertical arrow are two-lobed. The lines are cubic splines of the calculated points.
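A minimal numerical sketch of the imprinting step of Eq. (12) is given below: the quadrupolar phase is applied to a vortex-free density on a 2D grid, and the resulting angular momentum per unit length \(\langle\hat{L}_{z}\rangle\) is evaluated by finite differences. The test density, the value of \(\alpha\), and the use of numpy gradients in place of the 13-point formulas of the actual code are illustrative assumptions; in the full calculation, \(\alpha\) and \(\omega\) are adjusted until \(\langle\hat{L}_{z}\rangle\) matches the target \(\mathcal{L}\).

```python
import numpy as np

h = 0.8
x = np.arange(-220.0, 220.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

# vortex-free test density with a deformed (elliptical) cross section (assumption)
rho0 = 0.021836
a_el, b_el = 170.0, 125.0
r_ell = np.sqrt((X / a_el)**2 + (Y / b_el)**2)
rho = 0.5 * rho0 * (1.0 - np.tanh((r_ell - 1.0) * a_el / 6.0))

alpha = 2.0e-4                                    # guessed imprinting parameter (A^-2)
psi = np.sqrt(rho) * np.exp(1j * alpha * X * Y)   # Eq. (12): Psi_0 = rho^{1/2} e^{i alpha x y}

# <L_z> per unit length in hbar units: Re Int Psi^* (-i)(x d_y - y d_x) Psi dx dy
dpsi_dx = np.gradient(psi, h, axis=0)
dpsi_dy = np.gradient(psi, h, axis=1)
lz_dens = np.real(np.conj(psi) * (-1j) * (X * dpsi_dy - Y * dpsi_dx))
L_per_length = np.sum(lz_dens) * h * h            # quantity constrained during relaxation

N_per_length = np.sum(rho) * h * h                # atoms per unit length on this grid
print(L_per_length, N_per_length)
```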
Writing \(\Psi({\bf r})\equiv\phi({\bf r})\,\exp[i\,{\cal S}({\bf r})]\), the velocity field of the superfluid is \[{\bf v}({\bf r})=\frac{\hbar}{m}{\rm Im}\left\{\frac{\nabla\Psi({\bf r})}{ \Psi({\bf r})}\right\}=\frac{\hbar}{m}\nabla{\cal S}({\bf r}) \tag{13}\] It can be visualized with streamlines of the superfluid flow.[18] In the \({}^{3}\)He case, the velocity field consists of circumferences centered at the rotation axis, with \(v(r)=\omega\,r\), being \(r\) the distance to the axis. Equations (9) and (11) are two-dimensional partial differential equations depending on the \(x\) and \(y\) variables which have been solved using the \({}^{4}\)He-DFT-BCN-TLS computing package.[40] ## III Results We look for solutions of the EL equation resulting from the functional variation of \(R[\rho]\): \[\left\{{\cal H}[\rho]\,-\frac{m}{2}\,\left(\frac{L}{I}\right)^{2}(x^{2}+y^{2} )\right\}\,\Psi(x,y)=\,\mu\,\Psi(x,y)\;. \tag{14}\] We have carried out detailed DFT calculations for a cylinder with \({\cal N}=1500\) atoms/A which has a radius \(R=170.93\) A (\({}^{3}\)He) or \(R=147.87\) A (\({}^{4}\)He) at rest. As illustrated in Fig. 1 for \({}^{3}\)He, liquid helium is fairly incompressible and hence the cross section area of the rotating He\({}_{\cal N}\) cylinder remains sensibly equal to \(\pi R^{2}\) during deformation. ### \({}^{3}\)He cylinders Table 1 collects the DFT results obtained for \({}^{3}\)He cylinders. To determine the circular-to-deformed bifurcation point, one has to compare the Routhian \(R[\rho]\) of the circular cylinder to that of the deformed cylinder for the same \(\Lambda\) and \({\cal N}\) values; the configuration with the smaller \(R[\rho]\) is the equilibrium configuration. One can see from Table 1 that the difference between the Routhians of the circular and deformed cylinders is very small in a wide interval of \(\Lambda\) values between 0.900 and 0.960, which makes rather delicate to determine the bifurcation point within the DFT. We take the angular momentum at which the aspect ratio \(AR=a_{x}/b_{y}\) starts to clearly differ from one as the bifurcation point, having obtained \((\Lambda,\Omega)=(0.90,0.573)\). In the classical model, bifurcation occurs at \((\Lambda,\Omega)=(0.960,0.616)\).[27] To compare the classical and DFT results for the Routhian (total energy including rotation energy) per unit length, one has to remember that in classical models only surface and rotation energy are considered. Consequently, we have to identify first the energies that are implicitly involved in the DFT calculation. To this end, it is convenient to split the energy per unit length of the cylinder \(E/\ell\) into different terms, in a way similar in spirit as how the energy of the atomic nucleus is written as a "mass formula", namely \[E/\ell=\epsilon_{0}{\cal N}+2\pi R\gamma+E_{0}=\epsilon_{0}\pi R^{2}\rho_{0}+2 \pi R\gamma+E_{0} \tag{15}\] Figure 7: Aspect-ratio _vs._ rescaled angular momentum for the \({}^{4}\)He cylinder in the ED and DFT approaches. The lines are cubic splines of the calculated points. Figure 6: DFT irrotational moment of inertia \(I_{trr}\) in units of the rigid-body moment of inertia \(I_{RB}\) for \({}^{4}\)He cylinders as a function of \(\Lambda\). The lines are cubic splines of the calculated points. where \(\epsilon_{0}=-2.49\) K is the energy per atom in liquid \({}^{3}\)He and \(E_{0}\) is a constant term. 
Let us mention that the presence of a constant term in the nuclear mass formula is common in the most elaborated ones [45] and it appears after leading terms of \(R^{3}\) (volume), \(R^{2}\) (surface), and \(R\) (curvature) type. In the case of the cylinder, it naturally comes after the surface term. It is worth noticing that mass formulas have also been adjusted for \({}^{3}\)He and \({}^{4}\)He droplets which include a fairly large constant term. [21; 34] The parameter \(E_{0}\) can be determined from the DFT values at \(\Lambda=0\). For \(\mathcal{N}=1500\) atoms/A, \(R=170.93\) A and \(\epsilon_{0}\,\mathcal{N}\) is -3735 K/A, yielding \(E_{0}=28.58\) K/A. We thus see that the volume and constant energy contributions have to be subtracted from the DFT Routhian for a sensible comparison with the classical results. Since in the classical calculations energies per unit length are made dimensionless dividing them by \(8\gamma R\), [27] the quantity that can be directly compared with the classical result is the dimensionless reduced DFT Routhian per unit length defined as \[\frac{1}{8\gamma R}\ \{R[\rho]-\epsilon_{0}\mathcal{N}-E_{0}\} \tag{16}\] We have represented it as a function of \(\Lambda\) in Fig. 2 together with the classical result. It can be seen that they agree very well, with some minor differences showing up at large deformations. Figure 3 shows the aspect ratio as a function of the rescaled angular momentum \(\Lambda\) for \({}^{3}\)He cylinders obtained with the DFT, classical and ED approaches. The outlines of several classical shapes are drawn in the inset. We display in Fig. 4 the density of the \({}^{3}\)He cylinder for several values of \(\Lambda\) obtained with the DFT method; superposed to the densities we have plotted several circulation lines. In the DFT approach the cross section of the cylinder becomes two-lobed at \(\Lambda\sim 1.1\). The outlines of classical shapes are superimposed to the two-dimensional DFT densities. It can be seen that they are very similar to the DFT ones except for \(\Lambda=1\), for which the DFT density is more deformed because this configuration is further away from the DFT bifurcation than the classical one is from the classical bifurcation point. This effect diminishes at larger deformations where the differences are minor. Figure 5 shows the \(\Omega(\Lambda)\) equilibrium line for \({}^{3}\)He obtained with the classical and DFT approaches. This line is very similar for both methods. As for \({}^{3}\)He droplets, [23] the minor differences in the deformed branch are attributed to a better description of the droplet surface and to quantum kinetic energy contributions in the DFT approach which, together with compressibility effects, are lacking in classical models. Also shown in Fig. 5 is the result obtained with the ED model as explained in Appendix C. Just away from the bifurcation point the ED approach yields results very different from the two others, indicating that cross section shapes quickly become non-elliptical. ### \({}^{4}\)He cylinders Table 2 collects the DFT results obtained for \({}^{4}\)He cylinders. Since a superfluid system cannot rotate around a symmetry axis, the cross section of rotating (\(\Lambda\neq 0\)) vortex-free \({}^{4}\)He cylinders must necessarily be non-circular. The irrotational moment of inertia, defined as \(I_{irr}=\langle L_{z}\rangle/\omega\), drops to zero as \(\Lambda\to 0\). We have plotted \(I_{irr}\) in Fig. 6 in units of the rigid-body moment of inertia \(I_{RB}\), Eq. (7). 
It can be seen that \(I_{irr}\) approaches \(I_{RB}\) at large angular momenta, being rather different even for large deformations. For the sake of comparison, we also \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \(\Lambda\) & \(\Omega\) & \(a_{x}\) (Å) & \(b_{y}\) (Å) & \(AR\) & \(I/I_{circ}\) & \(R\) (K/Å) \\ \hline C & 0 & 0 & 171.41 & 171.41 & 1 & 1 & -3641.965 \\ C & 0.20 & 0.12768 & 171.42 & 171.42 & 1 & 1.0001 & -3639.989 \\ C & 0.40 & 0.25525 & 171.45 & 171.45 & 1 & 1.0006 & -3634.064 \\ C & 0.60 & 0.38261 & 171.49 & 171.49 & 1 & 1.0013 & -3624.194 \\ C & 0.80 & 0.50964 & 171.55 & 171.55 & 1 & 1.0023 & -3610.389 \\ C & 0.90 & 0.57300 & 171.59 & 171.59 & 1 & 1.0029 & -3602.013 \\ C & 0.93 & 0.59198 & 171.60 & 171.60 & 1 & 1.0031 & -3599.310 \\ C & 0.94 & 0.59830 & 171.61 & 171.61 & 1 & 1.0031 & -3598.389 \\ C & 0.95 & 0.60463 & 171.61 & 171.61 & 1 & 1.0032 & -3597.458 \\ C & 0.96 & 0.61095 & 171.62 & 171.62 & 1 & 1.0033 & -3596.518 \\ C & 1.00 & 0.63623 & 171.63 & 171.63 & 1 & 1.0036 & -3592.658 \\ C & 1.10 & 0.69933 & 171.68 & 171.68 & 1 & 1.0043 & -3582.326 \\ C & 1.20 & 0.76227 & 171.70 & 171.70 & 1 & 1.0051 & -3571.019 \\ C & 1.50 & 0.95006 & 171.90 & 171.90 & 1 & 1.0081 & -3531.273 \\ \hline D & 0.90 & 0.57297 & 172.43 & 170.78 & 1.010 & 1.0029 & -3602.013 \\ D & 0.93 & 0.59171 & 174.20 & 169.05 & 1.030 & 1.0035 & -3599.305 \\ D & 0.94 & 0.59709 & 177.13 & 166.19 & 1.066 & 1.0052 & -3598.384 \\ D & 0.95 & 0.59563 & 186.72 & 157.02 & 1.189 & 1.0184 & -3597.461 \\ D & 0.96 & 0.59633 & 190.93 & 153.11 & 1.247 & 1.0279 & -3596.539 \\ D & 1.00 & 0.58093 & 210.16 & 136.12 & 1.544 & 1.0991 & -3592.901 \\ D & 1.10 & 0.55159 & 237.00 & 114.55 & 2.069 & 1.2733 & -3584.134 \\ TL & 1.20 & 0.52539 & 256.81 & 100.08 & 2.566 & 1.4583 & -3575.799 \\ TL & 1.30 & 0.50254 & 273.34 & 88.81 & 3.078 & 1.6517 & -3567.855 \\ TL & 1.50 & 0.46269 & 301.70 & 71.08 & 4.245 & 2.0699 & -3552.930 \\ TL & 1.70 & 0.42981 & 326.07 & 57.22 & 5.698 & 2.5254 & -3539.125 \\ TL & 1.80 & 0.41542 & 337.25 & 51.26 & 6.580 & 2.7666 & -3532.583 \\ TL & 2.00 & 0.39071 & 357.80 & 40.72 & 8.788 & 3.2683 & -3520.123 \\ TL & 2.50 & 0.33938 & 403.92 & 18.74 & 21.550 & 4.7035 & -3492.000 \\ \hline \hline \end{tabular} \end{table} Table 1: Configuration characteristics of a rotating \({}^{3}\)He\({}_{\mathcal{N}}\) cylinder with \(\mathcal{N}=1500\) atoms/Å calculated in this work within DFT. C: circular configurations; D: deformed, elliptic-like configurations; TL: two-lobed configurations. \(\Lambda\) and \(\Omega\) are the dimensionless angular momentum and velocity, and \(R\) is the Routhian per unit length. \(AR\) is the aspect ratio (\(AR=1\) for circular configurations), and \(I/I_{circ}\) is the DFT moment of inertia in units of that of the \({}^{3}\)He\({}_{1500}\) circular cylinder of sharp radius at \(\Lambda=0\), \(I_{circ}=m\,\mathcal{N}^{2}/(2\pi\rho_{0})\). display the result of the ED model, that works surprisingly well even for \(\Lambda\) values for which the configuration is no longer elliptic-like but two-lobed. The difference between classical and superfluid moments of inertia, also appearing in \({}^{4}\)He droplets,[16] is a signature of their different response to rotations. Figure 7 shows the aspect ratio as a function of the rescaled angular momentum for \({}^{4}\)He cylinders obtained with the DFT and ED approaches. We display in Fig. 
8 the density of the \({}^{4}\)He cylinder for several values of \(\Lambda\) obtained with the DFT method; superposed to the densities we have plotted several circulation lines. The cross section of the cylinder becomes two-lobed at \(\Lambda\sim 0.5\). The outlines of classical shapes are superimposed to the two-dimensional DFT densities. The difference between classical and DFT densities is apparent up to large \(\Lambda\) values, reflecting the distinct rotational response of a normal fluid from that of a superfluid. Figure 9 shows the \(\Omega(\Lambda)\) equilibrium line for \({}^{4}\)He obtained with the ED and DFT methods. As for \({}^{3}\)He cylinders, the ED approximation quickly becomes inadequate. The finite value of \(\Omega\) at very small values of \(\Lambda\) -also found for \({}^{4}\)He droplets[17]\(-\) is the equivalent of the "rotational Meissner effect" occurring when liquid helium in a rotating cylinder is cooled through the lambda point: at sufficiently slow rotational speeds the superfluid forms in a state of zero total angular momentum, causing the container to rotate faster.[46] ## IV Discussion and Outlook As for helium droplets,[23] we have expectedly found that the rotating behavior of normal fluid \({}^{3}\)He cylinders is very similar to rotating incompressible, viscous cylinders only subject to surface tension and centrifugal forces. Even for fine details such as the aspect ratio as a function of rescaled angular momentum or the \(\Omega(\Lambda)\) equilibrium line, we have found a good agreement between classical and DFT results. Figures 4 and 8 clearly illustrate how different is instead the response to rotation of the normal fluid \({}^{3}\)He cylinder from that of superfluid \({}^{4}\)He, especially at moderate angular momentum values, i.e., at aspect ratios not \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \(\Lambda\) & \(\Omega\) & \(a_{x}\) (Å) & \(b_{y}\) (Å) & \(AR\) & \(I_{irr}/I_{RB}\) & \(R\) (K/Å) \\ \hline D & 0.001 & 0.43503 & 150.96 & 145.31 & 1.039 & 0.0015 & -10466.31 \\ D & 0.01 & 0.43537 & 157.12 & 139.25 & 1.128 & 0.0145 & -10465.04 \\ D & 0.02 & 0.43561 & 160.89 & 135.58 & 1.187 & 0.0289 & -10463.63 \\ D & 0.04 & 0.43606 & 166.24 & 130.43 & 1.275 & 0.0568 & -10460.80 \\ D & 0.05 & 0.43627 & 168.42 & 128.35 & 1.312 & 0.0705 & -10459.39 \\ D & 0.10 & 0.43772 & 177.09 & 120.26 & 1.473 & 0.1355 & -10452.30 \\ D & 0.20 & 0.44012 & 189.77 & 108.94 & 1.742 & 0.2510 & -10438.06 \\ D & 0.30 & 0.44188 & 200.00 & 100.37 & 1.993 & 0.3498 & -10423.77 \\ D & 0.40 & 0.44295 & 209.06 & 93.28 & 2.241 & 0.4344 & -10409.43 \\ TL & 0.50 & 0.44335 & 217.45 & 87.16 & 2.495 & 0.5069 & -10395.07 \\ TL & 0.60 & 0.44304 & 225.44 & 81.72 & 2.759 & 0.5691 & -10380.70 \\ TL & 0.80 & 0.44136 & 240.38 & 72.21 & 3.329 & 0.6680 & -10352.24 \\ TL & 1.00 & 0.43536 & 253.38 & 64.13 & 3.951 & 0.7502 & -10323.78 \\ TL & 1.20 & 0.42614 & 269.77 & 56.73 & 4.755 & 0.7980 & -10295.91 \\ TL & 1.50 & 0.40932 & 291.30 & 46.74 & 6.233 & 0.8558 & -10255.26 \\ TL & 2.00 & 0.37564 & 326.57 & 31.68 & 10.310 & 0.9096 & -10191.55 \\ \hline \hline \end{tabular} \end{table} Table 2: Configuration characteristics of a rotating \({}^{4}\)He\({}_{N}\) cylinder with \(\mathcal{N}=1500\) atoms/Å calculated in this work within DFT. \(\Lambda\) and \(\Omega\) are the dimensionless angular momentum and velocity, and \(R\) is the Routhian per unit length. 
\(AR\) is the aspect ratio, and \(I_{irr}/I_{RB}\) is the DFT moment of inertia in units of that of a rigid-body \({}^{4}\)He cylinder for the same \(\Lambda\) value. D: deformed elliptic-like configurations; TL: two-lobed configurations. Figure 8: Two-dimensional densities for the \({}^{4}\)He\({}_{1500}\) cylinder in the DFT approach. From top to bottom, they correspond to \(\Lambda=0.5,1,1.5\), and 2. The color bar represents the \({}^{4}\)He density in Å\({}^{-3}\). Several streamlines are superimposed. Also shown are the outlines of classical shapes (magenta lines). very different from one. Only when \(\Lambda\) is large do the density profiles become similar. It is worth recalling that the number of droplets having large deformations is found to be very small in the experiments.[10] It is also worth seeing that even for these very deformed cylinders, the moment of inertia of superfluid \({}^{4}\)He cylinders is 10-20% smaller that the rigid-body value (Fig. 6). A close look at the appearance of the streamlines in Figs. 4 and 8 shows that while \({}^{3}\)He cylinders do rotate and streamlines are circumferences around the rotation axis, \({}^{4}\)He cylinders do not; the streamlines do not correspond to a rigid rotation, but to an irrotational flow. Lets us remember that the fluid motion is a combination of translation, rotation and deformation of the fluid elements, and only when vorticity (defined as \(\nabla\times\mathbf{v}\))[47] is nonzero, may one speak of a true rotation. Vorticity is distributed inside the cylinder in the normal phase \({}^{3}\)He and it equals \(2\omega\) as for a rotating rigid body in steady rotation. Since the superfluid flow is irrotational, \(\nabla\times\mathbf{v}=0\) for \({}^{4}\)He. In this case, fluid elements translate and deform, but do not rotate. An illuminating discussion of this behavior, based on a paper by Tritton,[48] can be found in Ref. [32]. The different behavior of \({}^{3}\)He and vortex-free \({}^{4}\)He cylinders in rotation is also apparent in the \(\Omega(\Lambda)\) equilibrium line and in the moment of inertia as a function of the angular momentum, Figs. 5, 6 and 9. When the superfluid \({}^{4}\)He sample hosts linear vortices, the situation dramatically changes, as the vortex array tends to confer to the droplet or cylinder the appearance of a rotating rigid body.[49; 17] To exemplify this key issue, we have redone the calculation of the rotating \({}^{4}\)He cylinder at \(\Lambda=0.5\) when it hosts a large vortex array. One single vortex along the axis of the circular cylinder has an angular momentum per unit length \(\mathcal{L}=\mathcal{N}\).[50] From the definition of \(\Lambda\), Eq. (1), it corresponds to a rather small value if \(\mathcal{N}\)=1500 atoms/A, namely \(\Lambda=0.0898\). We have imprinted a nine vortex array to the cylinder by using the wave function[18] \[\Psi_{0}(\mathbf{r})=\rho_{0}^{1/2}(\mathbf{r})\,\prod_{j=1}^{n_{v}}\frac{(x- x_{j})+i(y-y_{j})}{\sqrt{(x-x_{j})^{2}+(y-y_{j})^{2}}} \tag{17}\] with \(n_{v}=9\), where \((x_{j},y_{j})\) is the initial position of the \(j\)-vortex core. We have started from a deformed cylinder and have chosen an initial vortex array with a centered vortex and eight vortices around it at the same distance \(d\), which is the lowest energy configuration in the classical case.[51] During the iterative solution of Eq. (11) both the vortex core structure and positions, and the shape of the cylinder may change to provide at convergence the lowest Routhian configuration. 
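The conversion between \(\mathcal{L}\) and \(\Lambda\) of Eq. (1), and the vortex-array phase imprint of Eq. (17), can be illustrated with the following minimal sketch. The liquid \({}^{4}\)He parameters are those quoted in Sec. II, so the single-vortex value \(\Lambda\simeq 0.09\) quoted above is reproduced; the grid, the ring radius, and the treatment of the vortex cores are illustrative choices.

```python
import numpy as np

# liquid 4He parameters quoted in Sec. II (zero temperature and pressure)
gamma = 0.274             # K A^-2
rho0 = 0.021836           # A^-3
hbar2_over_m = 12.1194    # K A^2

N_len = 1500.0                                   # atoms per unit length (A^-1)
R = np.sqrt(N_len / (np.pi * rho0))              # sharp radius at rest (~147.9 A)

def Lambda_of(L_len):
    """Dimensionless angular momentum of Eq. (1) for the 4He cylinder."""
    return L_len * np.sqrt(hbar2_over_m / (8.0 * gamma * R**5 * rho0))

print(R, Lambda_of(N_len))    # one centered vortex carries L = N, i.e. Lambda ~ 0.09

# nine-vortex seed of Eq. (17): one centered vortex plus a ring of eight
d_ring = 96.8                                    # ring radius (A), illustrative
angles = 2.0 * np.pi * np.arange(8) / 8.0
cores = [(0.0, 0.0)] + [(d_ring * np.cos(t), d_ring * np.sin(t)) for t in angles]

h = 0.8
x = np.arange(-200.0, 200.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")
phase = np.ones_like(X, dtype=complex)
for xj, yj in cores:
    dz = (X - xj) + 1j * (Y - yj)
    mod = np.abs(dz)
    factor = np.where(mod > 0.0, dz / np.where(mod > 0.0, mod, 1.0), 0.0)
    phase *= factor                              # the seed wave function vanishes at each core
# Psi_0 = rho_0^{1/2}(r) * phase would then seed the imaginary-time relaxation
```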
The effect caused by the presence of the vortex array on the morphology of the free-standing cylinder can be appreciated in Fig. 10: the very deformed vortex-free cylinder, with \(AR=2.50\) (see Table 2), has become circular (\(AR=1.001\)), coinciding with the classical result. For smaller \(n_{v}\) values the cylinder may still be deformed, but not as much as the vortex-free cylinder. For instance, when \(n_{v}=6\), \(AR=1.25\). In this case, vortices and capillary waves coexist.[6; 18] We have found that for the \(\Lambda=0.5\) value used here, the \(n_{v}=9\) configuration is more stable than the vortex-free one. It is also worth noticing that for a \(n_{v}\) vortex array in a rotating cylinder of radius \(R\), the total angular momentum can be written as[52] \[\mathcal{L}=\mathcal{N}\sum_{j=1}^{n_{v}}\left[1-\left(\frac{d_{j}}{R}\right)^ {2}\right] \tag{18}\] Figure 10: Two-dimensional density for the \({}^{4}\)He\({}_{1500}\) cylinder hosting nine vortex lines at \(\Lambda=0.5\) in the DFT approach. Several streamlines are superimposed. Also shown is the outline of classical shape (magenta line). Figure 9: Rescaled angular velocity \(\Omega\)_vs._ rescaled angular momentum \(\Lambda\) for the \({}^{4}\)He\({}_{1500}\) cylinder in the ED and DFT approaches. The DFT configurations to the right of the vertical arrow are two-lobed. The lines are cubic splines of the calculated points. where \(d_{j}\) is the distance of the \(j\)-vortex to the symmetry axis, which in our case reduces to \(\mathcal{L}=\mathcal{N}+8\,\mathcal{N}[1-(d/R)^{2}]\). This expression yields \(d=96.8\) A for \(R=147.87\) A and \(\Lambda=0.5\), in perfect agreement with the averaged DFT result. All configurations obtained in this work, in particular those displayed in Figs. 4 and 8, are stationary in the co-rotating frame -the framework that rotates with angular velocity \(\omega\) with respect to the laboratory frame. Consequently, they would be seen from the laboratory as if they were rotating like a rigid body with the angular frequency \(\omega\) imposed to obtain them.[53] The \({}^{3}\)He cylinder would undergo a true rotation, but this rotation would only be apparent for the \({}^{4}\)He cylinder. Examples of apparent rotations in the case of \({}^{4}\)He droplets, obtained within time-dependent DFT, can be found in Refs. [18] and [44]. Ongoing experiments on mixed helium droplets,[25] which exhibit a core-shell structure with a crust made of \({}^{3}\)He atoms in the normal state and a superfluid core mostly made of \({}^{4}\)He atoms, and calculations on mixed droplets made of immiscible viscous fluids[27] call for extending the DFT calculations carried out for mixed helium droplets[23] to larger sizes, also relaxing the constraint that \({}^{3}\)He and \({}^{4}\)He moieties are concentric. When the \({}^{4}\)He core displaces with respect to the center of mass of the droplet, the moment of inertia increases from that of centered drops, influencing how angular momentum is stored in the mixed droplet. In particular, it might affect quantum vortex nucleation in the \({}^{4}\)He core, which could be hindered. The cross section of deformable cylinders is a reasonable representation of the density of a large rotating helium droplet on the plane of symmetry perpendicular to the rotation axis.[6] The use of cylindrical geometry would allow the extension of DFT calculations to helium drops of larger cross section that would otherwise be computationally prohibitive. ###### Acknowledgements. 
This work has been performed under Grant No. PID2020-114626GB-I00 from the MICIN/AEI/10.13039/501100011033 and benefitted from COST Action CA21101 "Confined molecular systems: form a new generation of materials to the stars" (COSY) supported by COST (European Cooperation in Science and Technology). J. M. E. acknowledges support from the Spanish Research Agency MCIN/AEI/10.13039/501100011033 through the Severo Ochoa Centres of Excellence programme (grant SEV-2017-0706). ## Appendix A Rotating \({}^{3}\)He cylinders held together by surface tension: the general case In this Appendix we outline how dimensionless angular momentum \(\Lambda\) and dimensionless angular velocity \(\Omega\) can be introduced in the case of an incompressible cylinder of length \(\ell\) and radius \(R\), modeled by viscous cylinders subject to surface tension and centrifugal forces alone. To connect with the 2D DFT results, the length \(\ell\) will be taken as infinite, and eventually all extensive quantities will be referred to per unit length. We closely follow the procedure by Brown and Scriven in the case of droplets.[11] In cylindrical coordinates \((r,\phi,z)\), the radial vector to the surface is described by \(\mathbf{r}=Rf(\phi)\mathbf{\hat{r}}\), where \(\mathbf{\hat{r}}\) is the unit vector in the radial direction and \(\phi\) is the azimuthal angle. This representation handles circular, elliptic-like and multi-lobed cylinders, with the limitation that \(\mathbf{r}\) must not intersect the surface of the cylinder more than once. If the cylinder rotates around its axis (\(z\)-axis) at angular velocity \(\omega\), the energy per unit length is given by \[E=\gamma L_{c}+\frac{1}{2}I\omega^{2}\;, \tag{10}\] where \(\gamma\) is the surface tension, \(L_{c}\) is the perimeter of the cross section of the cylinder \[L_{c}=2R\int_{0}^{\pi}d\phi\,\sqrt{f^{2}(\phi)+\left(\frac{\partial f}{ \partial\phi}\right)^{2}}\;, \tag{11}\] and \(I\) is the moment of inertia per unit length \[I=\frac{1}{2}m\rho_{0}R^{4}\int_{0}^{\pi}d\phi\,f^{4}(\phi)\equiv m\rho_{0}\, R^{4}\,\mathcal{I} \tag{12}\] where \(\mathcal{I}\) is the dimensionless moment of inertia per unit length. Writing the energy per unit length in units of \(2\gamma R\), we have \[\frac{E}{2\gamma R}=\int_{0}^{\pi}d\phi\,\left\{\sqrt{f^{2}(\phi)+\left( \frac{\partial f}{\partial\phi}\right)^{2}}+\frac{m\rho_{0}\omega^{2}R^{3}}{8 \gamma}\,f^{4}(\phi)\right\} \tag{13}\] The ratio \[\Sigma\equiv\Omega^{2}=\frac{m\rho_{0}\,\omega^{2}\,R^{3}}{8\gamma} \tag{14}\] is called rotational Bond number,[11; 33; 54] and is the dimensionless measure of the square of angular velocity \(\Omega\). A dimensionless angular momentum \(\Lambda\) is introduced such that \(\Lambda=\mathcal{I}\,\Omega\). This yields Eqs. (1) in the main text. Eq. (13) shows that within this model the solution is universal and can be obtained once for all rotating cylinders. We want to comment that our definition of \(\Omega\) coincides with that of Ref. [33]; unfortunately, the definition of \(\Lambda\) is not given in that reference. ## Appendix B Rotating \({}^{3}\)He cylinders held together by surface tension: elliptic deformations It is illustrative to address the classical rotating cylinder when deformations are restricted to be elliptic, as it is nearly analytical and it is expected to be a fair approximation to describe the circular to deformed bifurcation. 
Proceeding as in previous Appendix A, the surface energy per unit length of the cylinder is written as \(E_{s}=\gamma L_{c}\), where \[L_{c}=4a\int_{0}^{\pi/2}d\phi\sqrt{1-e^{2}\sin^{2}\phi}\equiv 4a\,{\bf E}(e) \tag{10}\] is the perimeter of the ellipse \[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1 \tag{11}\] with eccentricity \(e=\sqrt{a^{2}-b^{2}}/a\) (we take \(a\geq b\)). If the fluid is incompressible, \(\pi ab=\pi R^{2}\). In Eq. (10), \({\bf E}(e)\) is the complete elliptic integral of the second kind [55] \[{\bf E}(e)=\int_{0}^{\pi/2}d\phi\,\sqrt{1-e^{2}\sin^{2}\phi} \tag{12}\] Defining \(\xi=a/R=(1-e^{2})^{-1/4}\), where \(R\) is the radius of the cylinder at rest, the moment of inertia per unit length can be expressed as \[I=\frac{\pi}{4}m\rho_{0}\,R^{4}\,\frac{\xi^{4}+1}{\xi^{2}} \tag{13}\] The energy per unit length in units of \(8\gamma R\) is \[\frac{E}{8\gamma R}=\frac{1}{2}\xi\,{\bf E}(e)+\frac{2}{\pi}\Lambda^{2}\frac{ \xi^{2}}{\xi^{4}+1} \tag{14}\] which for the circular cylinder reduces to \[\frac{E}{8\gamma R}=\frac{\pi}{4}+\frac{1}{\pi}\Lambda^{2} \tag{15}\] Determining the equilibrium configuration for a given \(\Lambda\) amounts to solving for \(e\) (or \(\xi\)) the algebraic equation \(d{\cal E}/d\xi=0\): \[{\bf E}(e)+\frac{2}{\xi^{4}-1}[{\bf E}(e)-{\bf K}(e)]+\frac{8}{\pi}\Lambda^{2 }\,\frac{\xi(1-\xi^{4})}{(\xi^{4}+1)^{2}}=0 \tag{16}\] where \({\bf K}(e)\) is the complete elliptic integral of the first kind [55] \[{\bf K}(e)=\int_{0}^{\pi/2}d\phi\,\frac{1}{\sqrt{1-e^{2}\sin^{2}\phi}} \tag{17}\] The determination of the equilibrium configuration is facilitated by the existence of accurate easy-to-use approximations for \({\bf K}(e)\) and \({\bf E}(e)\). [56] The dimensionless angular velocity is obtained from the \(\xi\) value of the equilibrium configuration at the given \(\Lambda\) as \[\Omega=\frac{4}{\pi}\frac{\xi^{2}}{\xi^{4}+1}\,\Lambda \tag{18}\] which reduces to \(\Omega=2\Lambda/\pi\) for the circular cylinder. We have found that the circular-to-elliptical shape transition occurs at \(\Lambda=0.966\), i.e., at \((\Lambda,\Omega)=(0.966,0.612)\). Notice that \(\Sigma\) at the bifurcation point is \(\Sigma=3/8\), [33] which yields \(\Omega=\sqrt{3/8}=0.612\). At bifurcation, \(\Omega=2\Lambda/\pi\) and one has \(\Lambda=0.966\) instead of the value close to 2 shown in Fig. 4a of Ref. [33] (see also Ref. [27]). For this reason, we have inferred that the definition of \(\Lambda\) in that reference likely is a factor of two larger than ours. As shown in Fig. 5, the elliptic deformations model is unrealistic for \(\Lambda\gtrsim 1.1\) and misses the appearance of two-lobed configurations. We conclude that deformations after bifurcation quickly become complex and representing the cross section of the deformed cylinder by an ellipse is a rough approximation. ## Appendix C Rotating \({}^{4}\)He cylinders held together by surface tension: elliptic deformations It is also illustrative to consider the case of a vortex-free superfluid \({}^{4}\)He elliptic cylinder in which angular momentum is stored only in capillary waves. 
The flow is irrotational and its velocity derives from a velocity potential \(\chi(x,y)=\alpha xy\), which yields [57; 44; 50] \[{\bf v}=\nabla\chi(x,y)=\omega\,\frac{a^{2}-b^{2}}{a^{2}+b^{2}}\,(y,x,0) \tag{19}\] The \(z\)-component of \[m\rho_{0}\int dx\,dy\,{\bf r}\times{\bf v} \tag{20}\] is the angular momentum per unit length \[{\cal L}=\frac{\pi}{4}m\rho_{0}\,\frac{(a^{2}-b^{2})^{2}}{a^{2}+b^{2}}\,ab\, \omega\equiv I_{irr}\,\omega \tag{21}\] where \(I_{irr}\) is the irrotational moment of inertia per unit length. Writing it in terms of \(R\) and \(\xi\) we have \[I_{irr}=\frac{\pi}{4}m\rho_{0}\,R^{4}\,\frac{(\xi^{4}-1)^{2}}{\xi^{2}(\xi^{4}+ 1)} \tag{22}\] Notice that the ratio between irrotational (Eq. (22)) and rotational, rigid-body (Eq. (13)) moments of inertia is \[\frac{I_{irr}}{I_{RB}}=\left(\frac{\xi^{4}-1}{\xi^{4}+1}\right)^{2} \tag{23}\] which is zero for a circular cylinder (\(\xi=1\)). The energy per unit length in units of \(8\gamma R\) is \[\frac{E}{8\gamma R}=\frac{1}{2}\xi\,{\bf E}(e)+\frac{2}{\pi}\Lambda^{2}\, \frac{\xi^{2}(\xi^{4}+1)}{(\xi^{4}-1)^{2}} \tag{24}\] As for the rotational \({}^{3}\)He fluid, determining the equilibrium configuration for a given \(\Lambda\) amounts to solving for \(e\) (or \(\xi\)) the algebraic equation \(d\mathcal{E}/d\xi=0\): \[\mathbf{E}(e)+\frac{2}{\xi^{4}-1}[\mathbf{E}(e)-\mathbf{K}(e)] \tag{10}\] \[-\frac{8}{\pi}\Lambda^{2}\,\frac{\xi}{(\xi^{4}-1)^{3}}\,[\xi^{8}+ 6\xi^{4}+1]=0\] The dimensionless angular velocity \(\Omega\) is obtained from the \(\xi\) value of the equilibrium configuration at the given \(\Lambda\) as \[\Omega=\frac{4}{\pi}\,\frac{\xi^{2}(\xi^{4}+1)}{(\xi^{4}-1)^{2}}\,\Lambda \tag{11}\]
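To close these appendices, a minimal numerical sketch (Python, assuming NumPy and SciPy are available; all function names are ours) shows one way to obtain the equilibrium deformation \(\xi(\Lambda)\) and the rescaled angular velocity \(\Omega(\Lambda)\) of the two elliptic models, by minimizing the dimensionless energy per unit length written above instead of solving the stationarity conditions explicitly.

```python
import numpy as np
from scipy.special import ellipe
from scipy.optimize import minimize_scalar

def E_complete(xi):
    """Complete elliptic integral E(e) with modulus e = sqrt(1 - xi**-4);
    note that scipy's ellipe takes the parameter m = e**2, not the modulus."""
    return ellipe(1.0 - xi**-4)

def energy_normal(xi, lam):
    """E/(8*gamma*R) of Appendix B (rigid-body rotation, normal 3He)."""
    return 0.5 * xi * E_complete(xi) + (2.0 / np.pi) * lam**2 * xi**2 / (xi**4 + 1.0)

def energy_irrotational(xi, lam):
    """E/(8*gamma*R) of Appendix C (irrotational, vortex-free 4He)."""
    return (0.5 * xi * E_complete(xi)
            + (2.0 / np.pi) * lam**2 * xi**2 * (xi**4 + 1.0) / (xi**4 - 1.0)**2)

def equilibrium_xi(lam, energy, lo=1.0 + 1e-6, hi=4.0):
    """xi = a/R that minimizes the dimensionless energy at fixed Lambda."""
    return minimize_scalar(energy, bounds=(lo, hi), args=(lam,), method="bounded").x

for lam in (0.5, 1.0, 1.5):
    xi_n = equilibrium_xi(lam, energy_normal)
    xi_s = equilibrium_xi(lam, energy_irrotational)
    omega_n = 4.0 / np.pi * xi_n**2 / (xi_n**4 + 1.0) * lam
    omega_s = 4.0 / np.pi * xi_s**2 * (xi_s**4 + 1.0) / (xi_s**4 - 1.0)**2 * lam
    print(f"Lambda={lam}: normal xi={xi_n:.3f}, Omega={omega_n:.3f}; "
          f"irrotational xi={xi_s:.3f}, Omega={omega_s:.3f}")
```

As a consistency check, the rigid-body branch should leave \(\xi=1\) close to \(\Lambda\simeq 0.97\), the bifurcation value quoted above.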
2307.14800
Empirical analysis of congestion spreading in Seoul traffic network
Understanding how local traffic congestion spreads in urban traffic networks is fundamental to solving congestion problems in cities. In this work, by analyzing the high resolution data of traffic velocity in Seoul, we empirically investigate the spreading patterns and cluster formation of traffic congestion in a real-world urban traffic network. To do this, we propose a congestion identification method suitable for various types of interacting traffic flows in urban traffic networks. Our method reveals that congestion spreading in Seoul may be characterized by a tree-like structure during the morning rush hour but a more persistent loop structure during the evening rush hour. Our findings suggest that diffusion and stacking processes of local congestion play a major role in the formation of urban traffic congestion.
Jung-Hoon Jung, Young-Ho Eom
2023-07-27T12:08:56Z
http://arxiv.org/abs/2307.14800v2
# Empirical analysis of congestion spreading in Seoul traffic network ###### Abstract Understanding how local traffic congestion spreads in urban traffic networks is fundamental to solving congestion problems in cities. In this work, by analyzing the high resolution data of traffic velocity in Seoul, we empirically investigate the spreading patterns and cluster formation of traffic congestion in a real-world urban traffic network. To do this, we propose a congestion identification method suitable for various types of interacting traffic flows in urban traffic networks. Our method reveals that congestion spreading in Seoul may be characterized by a tree-like structure during the morning rush hour but a more persistent loop structure during the evening rush hour. Our findings suggest that diffusion and stacking processes of local congestion play a major role in the formation of urban traffic congestion. ## I Introduction Understanding the functionality and congestion of urban traffic networks is a crucial problem, as these networks serve as the blood vessels of cities [1; 2; 3; 4]. Since an urban traffic network is a connected network of local traffic flows on individual roads in a city, the functionality of the network relies not only on these flows but also on the interactions between the flows. A remarkable phenomenon owing to such interactions is the spreading of local traffic congestion across the network, creating macroscopic congestion such as clusters of congested traffic flows [5; 6; 7]. A percolation-based approach was recently proposed to investigate how such congested clusters form as the number of congested traffic flows increases [8; 9; 10; 11; 12; 13]. This approach revealed that the ways that congested clusters form (or functional clusters break up) during rush hour and non-rush hour can be qualitatively different [9; 10; 11]. Other studies used models of cascading failure or epidemic spreading to identify the patterns of congestion spreading in urban traffic networks [14; 15; 16]. A recent work [15], using the Motter-Lai cascading failure model [17], showed that traffic congestion in Beijing spreads radially from the center of the initial congestion with an approximately constant velocity. Another work [16], using the susceptible-infected-recovered model of epidemic spreading [18], showed that the growth and decay patterns in the number of congested roads in several cities are well described by this simple epidemic model. However, to get deeper insight into the development and unfolding of urban traffic congestion, we need to ask how local congestion actually spreads and how this leads to the formation of macroscopic congestion in urban traffic networks. To empirically address these questions, we need to resolve the following two issues about congestion identification in urban traffic networks. First, we need to determine consistently whether a given traffic flow is congested or not, regardless of the various types of roads that exist in the networks. Many existing studies use a global threshold value of flow velocity for congestion identification. But a global threshold may not be effective when each flow has its own characteristics such as average velocity, velocity variance, or velocity distribution. For example, a single threshold value suitable for flows on highways may not be suitable for flows on other types of roads.
Alternatively, the fundamental diagram [6; 19; 20] may identify the functional state of traffic flows but it requires not only flow velocity data but also vehicle density data, which are usually quite difficult to obtain. Second, we need to take into account the fact that the functionality of a traffic flow depends on not only the quality of the flow itself but also the quality of the flows on the neighboring roads, as urban traffic flows are not just an ideal gas of traffic flows but a network of traffic flows connected by the underlying road network. Considering neighboring flows is also helpful in a practical sense because most urban traffic data, collected from floating vehicles by the Global Positioning System (GPS), are error-prone [21; 22; 23]. In this paper, we propose a congestion identification method suitable for various types of interacting traffic flows in urban traffic networks to resolve the above two issues. The proposed method allows us to determine the state of traffic flows by collapsing their behavior onto a single statistical distribution and considering the states of their neighboring flows. With the proposed method, we analyze high-resolution traffic velocity data in Seoul to empirically investigate how local congestion spreads and forms congestion clusters in the Seoul traffic network. We revealed that congestion spreading in Seoul is characterized by a tree-like structure during the morning rush hour but a more persistent loop structure during the evening rush hour, indicating that urban traffic congestion arises through the diffusion and stacking processes of local congestion. ## II Data and Methods ### Data and traffic network construction First, we prepared the set of roads and the averaged velocities of traffic flows on these roads in Seoul, which are provided by the authorities of Seoul Transport Operation and Information Service (TOPIS) [24]. The traffic system of Seoul provides traffic services for more than 8 million commuters in the Seoul metropolitan area, which suffers from severe congestion. The velocity of the traffic flow on each road was estimated from taxi GPS data and averaged over 5-minute intervals, with a total of 288 (data points/day) \(\times\) 60 (workdays) = 17,280 data points from December 2020 to February 2021. Fig. 1(a) shows an example of the velocity data at 8:30 A.M. on Dec. 1st. 2020 by assigning each flow a raw value of velocity. Next, we built a flow-to-flow network where traffic flows on individual roads correspond to nodes. A directional edge from flow \(i\) to flow \(j\) is created if these flows are directly connected by the underlying road network and vehicles can travel from flow \(i\) to flow \(j\) given the direction of travel of the flows. This network construction is equivalent to the traditional dual network construction, except that it additionally considers the direction of travel of the vehicles [2, 25, 26, 27]. Every connection in this network represents a real-world interaction between different traffic flows, so the resulting network does not simply mimic the appearance of the underlying road network, but represents the actual organization of traffic flows. Furthermore, when we extract a subgraph of traffic congestion, such a subgraph shows the organization of congested flows in terms of connected components. This not only makes the results easier to interpret than conventional methods, but also makes it convenient to consider the influence of neighboring flows. 
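As a rough sketch of this construction (not the authors' code), the directed flow-to-flow network can be assembled from a list of road segments; the segment fields used below (`link_id`, `start_node`, `end_node`) are hypothetical placeholders for whatever identifiers the TOPIS metadata provides.

```python
from collections import defaultdict
import networkx as nx

def build_flow_network(segments):
    """Directed flow-to-flow ('dual') network: each node is the traffic flow on
    one road segment, and an edge i -> j means vehicles can leave segment i and
    enter segment j, given the direction of travel of both segments."""
    starts = defaultdict(list)
    for s in segments:
        starts[s["start_node"]].append(s["link_id"])
    G = nx.DiGraph()
    G.add_nodes_from(s["link_id"] for s in segments)
    for s in segments:
        for j in starts[s["end_node"]]:          # segments that begin where s ends
            if j != s["link_id"]:
                G.add_edge(s["link_id"], j)
    return G

def largest_weak_component(G):
    """Keep only flows in the largest weakly connected component, as in the text."""
    nodes = max(nx.weakly_connected_components(G), key=len)
    return G.subgraph(nodes).copy()
```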
To make the flow-to-flow network a connected system, we extracted the weakly connected components from the network without missing data points, and filtered out traffic flows that were not included in the largest connected component. This filtering is negligible and does not affect the subsequent results. Finally, we obtained the Seoul traffic flow network covering the entire city with 4,728 flows (nodes) and 10,747 connections (edges). ### Congestion as an anomalously low functional state Many researchers have tried to determine whether a traffic flow is congested with various traffic indices [5, 6, 8, 9, 16]. A typical traffic index in previous studies is the ratio of the velocity to the daily maximum velocity [8, 9, 16], which is a simple and powerful method for normalization between different traffic flows. However, this method has a crucial limitation: the determination of congestion relies only on the ratio of the instantaneous velocity to the maximum velocity, not on the velocity distribution. Thus congestion identification based on this index may be biased or inconsistent, as the index loses significant information about the flow velocity during the day. We suggest a more consistent way of identifying congestion for each flow by leveraging its velocity distribution. We regard the congestion of a traffic flow as its failure, which is an anomalously low velocity state. We assume that its velocity distribution obtained from data has two parts: a part that is disturbed by congestion (and therefore skewed toward low velocities) and a part that is not disturbed [28; 29; 30; 31]. Figure 1: Spatial representation of traffic flows on the Seoul road network. The time of the plot is 08:30 A.M. on Dec. 1st. 2020, which shows the pattern of the morning rush hour. Each road is drawn with the geometry provided by the TOPIS metadata and colored according to its properties. (a) The spatial distribution of traffic flows with their velocity. Because different flows have different velocity limits and different properties, it is difficult to compare the state of different traffic flows using the raw value of their velocity. (b) The spatial distribution of traffic flows with the resulting state vector that is calculated and calibrated by the state propagation algorithm. Each flow is colored blue or red if it is in a free-flow state or a congested state, respectively. The clearer the color, the more clear-cut the state of that flow.
Note that the effective standard deviation is approximated as half the log difference between \(P95_{i}\) and \(m_{i}\). We define the effective z-score \(z_{i}(t)\) of velocity sequences \(v_{i}(t)\) for each traffic flow \(i\) as below, \[z_{i}(t)=\frac{\log v_{i}(t)-\mu_{i}^{eff}}{\sigma_{i}^{eff}}. \tag{2}\] We use this normalized index to define the congestion of flow \(i\) at time \(t\) as its low functional state such that \(z_{i}(t)\) is lower than a given threshold (i.e., the state with an anomalously low velocity that would be difficult to observe in the undisturbed velocity distribution). Fig. 2 shows a sample of velocities represented by flows on highways (high average velocity) and flows on local roads (low average velocity), and the results of several normalization methods of the sampled data. As shown in Fig. 2(a), each traffic flow has its own velocity distribution with distinguishable fluctuation and average velocity, so a direct comparison between the raw velocity data is not meaningful. Fig. 2(b) represents the relative velocity (\(r=v/v_{max}\)). With this simplest normalization, all the velocity sequences are scaled to the range of \([0,1]\) so that the sequences can be compared, but you can see that the resulting distributions still have different means and standard deviations. Thus, if one determines whether a traffic flow is congested by comparing a single threshold value with its relative velocity, the threshold Figure 2: Samples of traffic velocities and their normalized values in various ways. We sampled two traffic flows on highways and two traffic flows on local roads, and colored each flow the same color in each panel. Each row on the left shows the pattern over time of each traffic index for the sampled traffic flow: (a) velocity, (b) relative velocity, (c) z-score, and (d) effective z-score. The histogram of each row on the left represents the probability density function of each traffic index for the sampled traffic flow. Figures (e)-(h) on the right show the effective z-score distribution of each traffic flow and the normal distribution \(\mathcal{N}(0,1)\) as a guide. Each row is equivalent to the same colored histogram in (d). The criterion for identifying congestion is indicated by the dashed black line, which is \(-1\sigma\) from an undisturbed normal distribution. value suitable for flows on highways may not be suitable for flows on other types of roads as the bias depending on the road type (e.g., highway or local road) still remains in the relative velocity. Another traffic index is the z-score, which is obtained by dividing the differences between a given sequence and its mean by its standard deviation (Fig. 2(c)). Note that in this case the mean values of all sequences are almost identical as they are close to 0, but the magnitude of the variation still depends on the road type, which affects the identification of urban congestion with a fixed threshold. This is because the calculation of the mean and standard deviation was disturbed by congestion, suggesting that a typical z-score is not free from the effects of congestion. Finally, in the case of the effective z-score we proposed (Fig. 2(d)), one can see that not only the mean but also the variance are well aligned, meaning that all the data are well described by a single distribution. These results can be seen as validating our assumptions of the velocity distribution of each traffic flow as well as the identification of congestion using a specific threshold. 
Therefore, we adopt the effective z-score to estimate the performance (i.e., quality of service) of each traffic flow and use it to identify congestion with a given threshold. For a given congestion threshold \(h\), the state \(s_{i}^{(0)}(t)\) of traffic flow \(i\) is initially estimated by the tangent hyperbolic function as below, \[s_{i}^{(0)}(t)=\tanh{(z_{i}(t)+h)}, \tag{3}\] where \(z_{i}(t)\) denotes the effective z-score of the velocity of traffic flow \(i\) at time \(t\). The negative and positive indicator represent a congested state and a free-flowing state, respectively. We set the congestion threshold \(h\) as 1, which means that a traffic flow which shows the performance lower than one standard deviation of the daily typical performance considered as congested (Figs. 2(e)-(f)). Because this identification is originated from the estimated undisturbed distribution of each traffic flow, the resulting vector is less affected by statistical properties of each flow and thus represents its dynamical state well. This kind of nonlinear activation is inspired by deep learning algorithm [32; 33; 34], which preserves the information about the state of each flow into binary as well, so it is useful to calibrate the state of each traffic flow with its neighboring flows. ### Congestion identification with neighboring flows: State Propagation In terms of traffic capacity, congested flows are in a state where they are unable to handle the loaded traffic, so they can make all routes that include them worse [35]. This means that the impact of a congested traffic flow is not confined to itself, but also affects the wider cluster of flows that are connected by the underlying road network. Therefore, when determining the state of a traffic flow, we should consider also the states of the neighboring flows. For example, if a given traffic flow is in a free-flowing state but the connected flows are all congested, then the flow can be considered congested. Conversely, even if the current performance of a traffic flow has dropped slightly, it should still be considered in a free-flowing state if its neighboring flows are in a good condition. Estimating the state of nodes (i.e., flows) in this way facilitates to track congestion spreading and bring us more robust results from the noise in the velocity data. We implemented the above approach in an algorithm we call the _state propagation algorithm_. In detail, it updates the state vector \(s_{i}^{(n+1)}(t)\) by calibrating the performance of node \(i\) in the flow-to-flow network using all the states \(s_{j}^{(n)}(t)\) of its outgoing neighbors, which is written as follows, \[s_{i}^{(n+1)}(t)=\tanh\left(JA_{ij}s_{j}^{(n)}(t)+z_{i}(t)+h\right), \tag{4}\] where \(A\) denotes the adjacency matrix of the flow-to-flow network, \(J\) is the overall strength of the calibration by the state propagation, \(z_{i}(t)\) is the effective z-score introduced above, and \(h\) represents the congestion threshold. We set \(J\) as 0.5 and \(h\) as 1, which means the propagation affects a calibration of a half standard deviation to the neighboring flows as maximum, and a traffic flow which shows the performance lower than one standard deviation of the daily typical performance considered as congested. Note that the flow-to-flow network is a unidirectional graph, so the state propagation is also unidirectional. We repeat this process for sufficiently large \(n\) and use the converged state \(\mathbf{s}^{*}\) as a flow state. 
Finally, we calculated the congestion indicator \(c_{i}(t)\) using the converged state \(\mathbf{s}^{*}\), \[c_{i}(t)=\Theta(s_{i}^{*}(t)), \tag{5}\] where \(\Theta(\cdot)\) denotes the Heaviside step function. If \(s_{i}^{*}\) is positive, the state of flow \(i\) is identified as free-flowing; if not, as congested. Fig. 1(b) shows the congestion identification result \(s_{i}^{*}(t)\) of the state propagation algorithm for the velocity data represented in Fig. 1(a). The blue and red flows indicate free-flowing and congested states, respectively, and reveal clear structural patterns of urban congestion. This is because the gradual propagation of the local traffic-state information reinforces the structure of the underlying road network. We believe that the result captures robust structures in the temporal evolution of urban congestion, in particular at times when there are so many cars on the roads that they start to slow down but congestion has not yet set in. For the local, flow-level identification of Eq. (3), even very small noise can make a big difference in the pattern because the traffic index is close to the decision boundary of congestion. In the state propagation algorithm, however, such small noise can be ignored thanks to the propagation of the neighboring states. In this sense, the state propagation algorithm provides adequate results for analyzing the evolutionary pattern of congestion in urban traffic networks. ## III Spreading patterns of urban traffic congestion To study congestion propagation in urban traffic networks, we identified the congested traffic flows in the Seoul traffic network. We examined all 17,280 data points to get a set of congested flows in each snapshot of the Seoul traffic network. Fig. 3 shows examples of spatial representations of congested flows and their largest weakly connected component for some representative times of Dec. 1st. 2020. To understand the quantitative patterns of congestion spreading, we first check the evolution pattern of the number of congested flows \(C(t)(=\sum_{i}c_{i}(t))\) in the Seoul traffic network (Fig. 4). Each thin line represents the daily pattern of congestion evolution for one of the 60 workdays, which shows a significant congestion growth during the morning rush hour and more severe congestion during the evening rush hour. We find a crucial structural pattern of evolving congestion, which is the exponential increase in the number of congested flows during the morning rush hour from 6 A.M. to 9 A.M. This exponential increase suggests that the spread of congestion during the morning rush hour has a tree-like structure, as reported in other works [14; 16]. The spatial representation of congested flows during the morning rush hour also shows a tree-like structure (cf. Fig. 3(a)). Due to construction costs, urban highways are often based on a tree structure, occasionally with a city-level ring structure [36]. But, as observed in other works [9; 11], urban highways are vulnerable to congestion during rush hour. Figure 3: Spatial representation of congested flows and their largest weakly connected component. The date of the example data is Dec. 1st. 2020. We plotted the original Seoul road network as a guideline (thin blue), congested flows (orange) and the largest connected components (green). Each plot represents a representative time of day: (a) morning rush hour (07:00 A.M.), (b) lunch time (01:00 P.M.), (c) before the evening rush hour peak (03:00 P.M.), and (d) after the evening rush hour peak (08:00 P.M.).
Therefore, the observed tree structure of congested flows during the morning rush hour is likely to stem from the congestion of flows on urban highways. To address congestion spreading patterns in terms of connected clusters, we analyzed the weakly connected components (WCC) consisting only of congested flows in the Seoul traffic network. In particular, we trace not only the largest connected component (LCC) of congested flows but also its outer boundary, which is the set of free-flowing flows connected to the LCC in the flow-to-flow network. We define the boundary \(\partial N\) of a given cluster \(N\), which is a set of flows, as \[\partial N=\{i|i\notin N,j\in N,A_{ij}=1\}, \tag{6}\] where \(A_{ij}\) denotes the adjacency matrix of the Seoul traffic network. This definition collects the candidate flows that can be influenced by a given cluster \(N\); its size is the total number of incoming neighbors of the cluster \(N\). Fig. 5 shows the relation between the sizes (i.e., numbers of flows) of the congested LCC and its boundary for each data point. One can see that the number of free-flowing flows connected to the LCC (i.e., the size of the boundary of the LCC) is proportional to the size of the LCC during the morning rush hour (green color in Fig. 5). This result is not only evidence for the tree-like structure of urban traffic congestion during the morning rush hour, but also an explanation of the exponential growth pattern of urban congestion. One can notice the separation of the growth and relaxation patterns of the evening congestion, which are colored blue and purple in Fig. 5, respectively. These decoupled patterns of congestion LCCs reveal a hysteresis-like pattern in the evolution of urban congestion from growth to relaxation during the evening rush hour (cf. from Fig. 3(c) to (d)). We can see that the growth pattern of the evening rush hour is sublinear (blue in Fig. 5), whereas that of the morning rush hour and the relaxation pattern of the evening rush hour are near-linear (green and purple in Fig. 5). Because of the difference in the number of neighbors of the LCC, these hysteresis patterns suggest a structural shift of urban traffic congestion between the morning rush hour and the evening rush hour, and also the stability of such a structure in terms of the positive-feedback effect [37, 38]. Patterns of the morning rush hour (green in Fig. 5) and the relaxation process of the evening rush hour (purple in Fig. 5) are not much different in terms of the number of neighboring free-flowing flows of the congested LCC (cf. Figs. 3(a) and (d)). While such patterns of congestion can be explained by a tree-like structure, the emerging clusters of congested flows during the evening rush hour have fewer neighbors on their boundaries than during its relaxation process or the morning rush hour. To summarize, these results indicate that there is a topological shift between the morning and evening rush hours in the largest cluster of congested flows and hysteresis in the evolutionary pattern of urban congestion. Figure 4: Daily evolution patterns of congested flows in the Seoul traffic network. Each thin line represents the pattern of the daily congestion ratio for a different workday, and the thick line shows the average. Blue and black colors represent the flow state determined by Eq. (3) (i.e., by considering only the flow itself) and the congestion determined by Eq. (4) (i.e., by considering the neighboring flows together), respectively. The lower plot shows the same data as the upper one but with a log scale on the y-axis. Figure 5: The relation between the sizes of the LCC and its boundary. Each point represents a time snapshot (5 min interval), color coded by hour. One can see the difference between the two main branches of the daily pattern of the LCC, an increase and a decrease in congestion during the evening hours, colored blue and purple, respectively.
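For one snapshot, the LCC of congested flows and the boundary of Eq. (6) can be extracted directly from the flow-to-flow network; the sketch below (Python with networkx; variable names are illustrative) is one straightforward way to do it.

```python
import networkx as nx

def congested_lcc_and_boundary(G, congested):
    """Largest weakly connected component of congested flows and its boundary,
    Eq. (6).  G is the directed flow-to-flow network and `congested` is the set
    of flow ids labeled congested at one snapshot."""
    sub = G.subgraph(congested)
    if sub.number_of_nodes() == 0:
        return set(), set()
    lcc = set(max(nx.weakly_connected_components(sub), key=len))
    # Boundary: flows i outside the cluster with an edge i -> j into some j in it.
    boundary = {i for j in lcc for i in G.predecessors(j)} - lcc
    return lcc, boundary
```

Collecting `len(lcc)` and `len(boundary)` over all snapshots reproduces the kind of scatter summarized in Fig. 5.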
However, these results are not sufficient to explain why congestion is much larger in the evening or to explain where the structural differences between the morning and evening rush hours come from. In order to answer those questions, we focused on a special topological feature, _loops_, consisting of congested flows. A loop (or cycle) is a set of flows that forms a closed path in the network. These loops, especially small loops, are very important structures in network dynamics because they determine how one's influence is reflected back to oneself. For example, the most important characteristic that defines a tree structure is the absence of loops in the network. Moreover, when we look at congestion as a failure, these loops can be seen as cycles of failures, which represent a kind of feedback effect where the effects of one's own failure cascade back to oneself. Therefore, we investigated how urban congestion is structurally different between morning and evening through these loops. We found the set of \(k\)-loops \(L_{k}\), which are made up of \(k\) flows in the traffic network, and calculated the congestion indicator \(c_{l}(t)\) of a loop \(l_{i}\) at time \(t\) as \[c_{l}(t)=\prod_{j\in\{l_{i}\}}^{k}\Theta(-s_{j}^{*}(t)), \tag{7}\] where \(\Theta(\cdot)\) denotes the Heaviside step function. This congestion indicator for loops is 1 only if all the constituent flows are in a congested state (otherwise 0). We calculated the above indicator only for the loops with 3, 4, and 5 flows, because larger loops are less important in terms of the feedback effect. We investigated all 3-, 4-, and 5-flow loops in the Seoul traffic network in a brute-force manner. Fig. 6 shows the time evolution of the number of congested \(k\)-loops \(C_{k}(t)(=\sum_{l\in L_{k}}c_{l}(t))\) over a day. As we expected above, fewer than 5% of loops are congested during the morning period from 6 A.M. to 9 A.M., while the congestion ratio of individual flows is over 20%. This absence of small loops is clear evidence for the tree structure in the evolution pattern of urban congestion during the morning rush hour. However, after roughly 1 P.M., small-size loops of congested flows emerge drastically, so that the congestion ratio for each loop reaches nearly 40% or more at about 7 P.M., which is not captured by the congestion identification based on a single flow (dashed lines in Fig. 6). These results suggest that, despite the fact that the overall traffic volumes during the morning and evening commutes are not significantly different [24], there are significant differences in traffic flow dynamics between the two time periods, and that the difference in the congested loops is responsible for these differences. On the other hand, after the number of congested flows reaches a peak, these loops decrease sharply, so that the relaxation pattern in the evening commute shows a tree structure similar to the spreading pattern in the morning commute, as expected above.
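The brute-force loop search mentioned above can be reproduced with a simple depth-first enumeration; the sketch below (Python with networkx; this is our own implementation, not the authors') lists each directed loop of up to five flows once and evaluates the indicator of Eq. (7) on it.

```python
import networkx as nx

def directed_loops(G, max_len=5):
    """All directed loops (cycles) with 3..max_len flows, each listed once.
    A loop is only recorded from its smallest node id, so node ids must be
    mutually comparable (e.g. all integers or all strings)."""
    loops = []
    def extend(path):
        last = path[-1]
        for nxt in G.successors(last):
            if nxt == path[0] and len(path) >= 3:
                loops.append(tuple(path))
            elif nxt not in path and nxt > path[0] and len(path) < max_len:
                extend(path + [nxt])
    for start in G.nodes():
        extend([start])
    return loops

def congested_loop_count(loops, congested):
    """Eq. (7): a loop counts as congested only if every flow in it is congested."""
    return sum(all(flow in congested for flow in lp) for lp in loops)
```

Grouping the loops by their length and applying `congested_loop_count` to every 5-minute snapshot gives curves analogous to \(C_{k}(t)\) in Fig. 6.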
Figure 6: Daily evolution patterns of congested loops in the Seoul traffic network. Each solid line represents the workday average of the temporal evolution of the congestion ratio of individual flows and of loops of 3, 4, and 5 flows. The result of the congestion identification based on a local flow (\(s_{0}\)) is represented by the dashed line. As a guideline, each dotted line shows the probability that a \(k\)-flow loop becomes congested for a given congestion ratio of single flows, calculated as \(p^{k}\), where \(p\) and \(k\) denote the overall congestion probability of \(s_{0}\) and the number of traffic flows in the loop, respectively. The lower plot shows the same data as the upper one but with a log scale on the y-axis. To understand the impact of the loop structure on urban congestion propagation, the distribution of congestion duration \(d\) was obtained by calculating the consecutive time of congested states for each traffic object (i.e., flow or loop) \(i\), as described below. First, let us consider the congestion starting point indicator \(o_{i}(t)\), \[o_{i}(t)=c_{i}(t)\cdot(1-c_{i}(t-1)), \tag{8}\] where \(c_{i}(t)\) denotes the congestion indicator of traffic object \(i\) (Eq. (5) for flows and Eq. (7) for loops). This indicator \(o_{i}(t)\) equals 1 if congestion emerges at time \(t\) and 0 otherwise. After that, we considered the congestion length \(l_{i}(t)\) to be the farthest time shift \(\tau\) over which the congestion of traffic object \(i\) persists continuously from time \(t\), \[l_{i}(t)=\sum_{\tau=0}^{T-t}\prod_{\Delta t=0}^{\tau}c_{i}(t+\Delta t), \tag{9}\] where \(T\) denotes the total number of data points. If \(c_{i}(t+\tau)=0\) at some \(\tau\), then for any larger \(\tau\) the term inside the sum is always zero, so this calculation allows us to obtain the maximum length of consecutive congestion from time \(t\). With the above indicators, we calculated the congestion duration \(d_{i}(t)\), which can be written as \[d_{i}(t)=\begin{cases}0,&\text{if}\quad o_{i}(t)=0\\ l_{i}(t),&\text{otherwise.}\end{cases} \tag{10}\] Finally, we collected all the positive congestion durations that occurred in a certain set \(S\) of traffic objects, which can be formulated as \[D(S)=\{d_{i}(t)|d_{i}(t)>0,i\in S\}. \tag{11}\] We prepared sets of traffic objects based on various categories (e.g., loops by the number of flows, flows by associated loops) to investigate the impact of the loop structure on congestion. In addition, to remove spatial correlations, we shuffled only the flow configuration, leaving the structure of the road network intact, and examined the congestion on the loops of that network. In this way, we can obtain a congestion duration distribution and its characteristic decay time, which represents the persistence of urban congestion for each traffic object. Fig. 7 shows the complementary cumulative distribution function of the congestion duration of each traffic object. In general, all loop structures, which are represented by solid lines, were found to be more persistent than the shuffled ones, which are represented by dotted lines. In particular, even though the congestion of a loop is less probable than that of a single flow, for 4-flow loops (green line) the characteristic time is similar to that of a single flow. This result suggests that the positive feedback of the loop structure makes the structure more persistent. Once such a cycle of congestion is created in the traffic network, it does not disappear easily and makes things worse by disrupting neighboring traffic flows.
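In practice, Eqs. (8)-(11) amount to measuring the run lengths of consecutive congested snapshots; a minimal sketch (Python/NumPy; how an episode already in progress at the start of the record is counted is our own convention) is shown below.

```python
import numpy as np

def congestion_durations(c):
    """Run lengths of consecutive congestion, i.e. the elements of D({i}) in
    Eq. (11) for one traffic object.  c: 1D array of 0/1 congestion indicators."""
    durations, run = [], 0
    for value in np.asarray(c, dtype=int):
        if value:
            run += 1                 # still congested, extend the current episode
        elif run:
            durations.append(run)    # an episode has just ended
            run = 0
    if run:
        durations.append(run)        # episode still ongoing at the end of the record
    return durations

# D(S): pool the positive durations of every traffic object in a set S, e.g.
# D = [d for series in congestion_series_of_S for d in congestion_durations(series)]
```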
This is even more pronounced when we separate the distributions based on which loop a flow belongs to, and find that flows belonging to a loop of four flows (green dashed line) experience significantly longer congestion than those that do not. In addition, traffic flows that do not form small loops (purple dashed line) show a shorter characteristic time than flows in 4- or 5-flow loops (green and red dashed lines, respectively), meaning that congested flows in a tree structure experience shorter congestion than congested flows in a loop structure. All these results indicate that the essential patterns of urban congestion propagation are the tree and loop structures. The tree structure that emerges during the morning rush hour and the relaxation part of the evening rush hour tells us that the basic spreading pattern of urban congestion is a diffusion-like (or contagion-like) process. We expect that this pattern stems from the structure of the highway network, which provides long-range connections through the city and is designed for efficiency. The other pattern can be explained by the stacking process of urban congestion: small loop structures in which congestion resulting from the morning rush hour is not relaxed during midday, thus exacerbating urban congestion during the evening rush hour. Flows that make up a small loop composed of 4 or 5 flows are identified as having a longer congestion duration than flows that are not part of a loop. Figure 7: The distribution of congestion duration of loops and individual flows. Each line represents the decay pattern of each type of congested loop. The blue solid line indicates the congestion that appeared in every single traffic flow, and its tendency is shown with a grey solid line as a reference. The yellow, green and red lines indicate the congestion duration of loops with 3, 4 and 5 flows, respectively. To investigate the impact of congested loops on a single flow, we categorized flows by the loop they belong to. The dashed lines describe the congestion duration of traffic flows classified by their associated loops, and the purple line shows the exceptional flows which are not part of any loop. For comparison, we generate random congestion in each loop by shuffling only the spatial organization of the flows, leaving the structure of the network unchanged. Each dotted line represents the congestion duration distribution for each loop with the random flow configuration. ## IV Discussion In summary, we developed a systematic framework to analyze congestion spreading in empirical data by viewing traffic flows in cities as network flows, defining congestion as an anomalously low functional state of these flows. Our framework enables us to determine the functional state of traffic flows in urban traffic networks by collapsing the behavior of various types of traffic flows onto a single statistical distribution and taking into account the functional states of their neighboring flows. As a result, we found a tree structure in the congestion evolution patterns during the morning rush hour, observed in the exponential growth of the number of congested flows, the near-linear relation between the size of the largest connected component of congested flows and the size of its boundary, and the lack of small loops during the morning rush hour. On the other hand, we observed a significant increase in the number of small-size loops of congested flows during the evening rush hour.
We observed that these loops are quite persistent as they represent the feedback effect of urban congestion. Our findings suggest that evaluating the dynamical state of nodes in networks by taking into account the state of their neighboring nodes helps provide a clearer picture of dynamical processes on networks. In the case of traffic dynamics, by propagating the information of each flow's functional state, we are able to reconstruct the structural patterns of congestion spreading, such as trees and loops. Although our framework provides an effective tool for understanding urban congestion, it also has some limitations. We only identify congestion as the failure of each traffic flow based on an estimate of an undisturbed velocity distribution. So, if a traffic flow is already so slow that its congestion cannot be distinguished by our estimation, we cannot identify the impact of this congestion in the data. Moreover, our findings are still limited to the phenomenon in one city, Seoul, and need to be further validated with data from various cities. Despite these limitations, the algorithm is powerful in revealing various aspects of traffic dynamics in urban traffic networks. As an immediate follow-up study, we will investigate urban congestion in other cities to identify universal patterns in the spreading process of urban traffic congestion. We will also apply this algorithm to other collective phenomena that occur in networked systems, and seek to understand the spatio-temporal patterns of these phenomena to reveal the relationship between structure and dynamics. We hope that understanding the circular effects of urban traffic congestion will help traffic engineers and road network designers to solve the socioeconomic problems of urban congestion by alleviating severe traffic congestion in large cities. ###### Acknowledgements. This work was supported by the 2021 Research Fund of the University of Seoul. The authors thank the Seoul Metropolitan Government for sharing the Seoul road traffic data. The authors also acknowledge the Urban Big data and AI Institute of the University of Seoul for the supercomputing resources ([http://ubai.uos.ac.kr](http://ubai.uos.ac.kr)) made available for conducting the research reported in this paper.